WorldWideScience

Sample records for models reproduce observed

  1. Can Computational Sediment Transport Models Reproduce the Observed Variability of Channel Networks in Modern Deltas?

    Science.gov (United States)

    Nesvold, E.; Mukerji, T.

    2017-12-01

    River deltas display complex channel networks that can be characterized through the framework of graph theory, as shown by Tejedor et al. (2015). Deltaic patterns may also be useful in a Bayesian approach to uncertainty quantification of the subsurface, but this requires a prior distribution of the networks of ancient deltas. By considering subaerial deltas, one can at least obtain a snapshot in time of the channel network spectrum across deltas. In this study, the directed graph structure is semi-automatically extracted from satellite imagery using techniques from statistical processing and machine learning. Once the network is labeled with vertices and edges, spatial trends and width and sinuosity distributions can also be found easily. Since imagery is inherently 2D, computational sediment transport models can serve as a link between 2D network structure and 3D depositional elements; the numerous empirical rules and parameters built into such models make it necessary to validate the output with field data. For this purpose we have used a set of 110 modern deltas, with average water discharge ranging from 10 to 200,000 m³/s, as a benchmark for natural variability. Both graph-theoretic and more general distributions are established. A key question is whether it is possible to reproduce this deltaic network spectrum with computational models. Delft3D was used to solve the shallow water equations coupled with sediment transport. The experimental setup was relatively simple: incoming channelized flow onto a tilted plane, with varying wave and tidal energy, sediment types and grain size distributions, river discharge and a few other input parameters. Each realization was run until a delta had fully developed: between 50 and 500 years (with a morphology acceleration factor). It is shown that input parameters should not be sampled independently from the natural ranges, since this may result in deltaic output that falls well outside the natural spectrum. Since we are
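
The graph-theoretic summaries mentioned above (vertices, edges, and branching structure of a directed channel network) can be sketched with a small routine; the vertex names and the toy delta below are purely illustrative, not taken from the 110-delta dataset:

```python
from collections import defaultdict

def network_summary(edges):
    """Simple graph-theoretic summaries of a directed channel network.

    edges: list of (upstream, downstream) vertex pairs.
    """
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
        nodes.update((u, v))
    bifurcations = sum(1 for n in nodes if out_deg[n] > 1)  # channel splits
    confluences = sum(1 for n in nodes if in_deg[n] > 1)    # channel junctions
    outlets = sum(1 for n in nodes if out_deg[n] == 0)      # distributary mouths
    return {"nodes": len(nodes), "edges": len(edges),
            "bifurcations": bifurcations, "confluences": confluences,
            "outlets": outlets}

# Toy delta: apex "a" splits into two distributaries, one of which splits again.
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("b", "e")]
print(network_summary(edges))
# → {'nodes': 5, 'edges': 4, 'bifurcations': 2, 'confluences': 0, 'outlets': 3}
```

Distributions of such counts across many deltas give the kind of network spectrum against which model output can be benchmarked.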

  2. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2018-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclonic regions and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models of phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by CFMIP2 models. The effects of CVS modes on relative cloud radiative forcing (RSCRF/RLCRF) (RSCRF being calculated at the surface and RLCRF at the top of the atmosphere) are studied using the principal component regression method. Results show that CFMIP2 models tend to overestimate (underestimate or simulate with the opposite sign) the RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) per unit global mean OPC compared to observations. These RE biases may be attributed to two factors: one is the underestimation (overestimation) of low/middle clouds (high clouds), i.e. stronger (weaker) REs per unit low/middle (high) cloud, in simulated global mean cloud profiles; the other is eigenvector biases in the CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid to improving the CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes of the Hadley circulation, extratropical storm tracks and others), which
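
The PC analysis underlying the CVS modes can be sketched as an eigendecomposition of the level-by-level covariance matrix; the profile data below are random placeholders, not GOCCP values:

```python
import numpy as np

# Illustrative data: rows = grid points, columns = cloud fraction per vertical level.
rng = np.random.default_rng(0)
profiles = rng.random((500, 10))

anom = profiles - profiles.mean(axis=0)   # remove the mean vertical profile
cov = np.cov(anom, rowvar=False)          # level-by-level covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]         # reorder to descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

modes = eigvecs[:, :3]                    # three leading modes (eigenvectors)
pcs = anom @ modes                        # principal components per grid point
explained = eigvals[:3] / eigvals.sum()   # fraction of variance per mode
```

Standardizing the PCs (dividing each column by its standard deviation) gives the "standardized PCs" compared in the abstract, while the raw columns correspond to the OPCs.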

  3. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    Science.gov (United States)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field however is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data of various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
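
Estimating a real, time-independent distortion matrix that linearly relates measured and computed electric fields amounts to a least-squares fit over the storm time series. A minimal sketch with synthetic data (the matrix values, series length, and noise level are assumptions, not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed "true" 2x2 distortion matrix and synthetic modelled field (Ex, Ey).
C_true = np.array([[1.2, -0.3],
                   [0.1,  0.8]])
E_model = rng.standard_normal((2, 1000))                  # modelled time series
E_meas = C_true @ E_model + 0.01 * rng.standard_normal((2, 1000))  # "measured"

# Least-squares solve of E_meas ≈ C @ E_model, written as E_meas.T ≈ E_model.T @ C.T.
C_est_T, *_ = np.linalg.lstsq(E_model.T, E_meas.T, rcond=None)
C_est = C_est_T.T
```

Re-estimating `C_est` from independent storm intervals and checking that the entries agree is one way to reproduce the consistency test described in the abstract.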

  4. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; and (4) model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  5. Dysplastic naevus: histological criteria and their inter-observer reproducibility.

    Science.gov (United States)

    Hastrup, N; Clemmensen, O J; Spaun, E; Søndergaard, K

    1994-06-01

    Forty melanocytic lesions were examined in a pilot study, which was followed by a final series of 100 consecutive melanocytic lesions, in order to evaluate the inter-observer reproducibility of the histological criteria proposed for the dysplastic naevus. The specimens were examined in a blind fashion by four observers. Analysis by kappa statistics showed poor reproducibility of nuclear features, while reproducibility of architectural features was acceptable, improving in the final series. Consequently, we cannot apply the combined criteria of cytological and architectural features with any confidence in the diagnosis of dysplastic naevus, and, until further studies have documented that architectural criteria alone will suffice in the diagnosis of dysplastic naevus, we, as pathologists, shall avoid this term.
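
Inter-observer agreement of the kind assessed here is commonly quantified with Cohen's kappa; a minimal sketch, using illustrative labels rather than the study's data:

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each rater labelled independently at their own rates.
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["dysplastic", "benign", "benign", "dysplastic", "benign"]
rater2 = ["dysplastic", "benign", "dysplastic", "dysplastic", "benign"]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.615
```

Values near 0 indicate chance-level agreement; the study's "poor reproducibility of nuclear features" corresponds to low kappa values across the four observers (multi-rater studies typically use Fleiss' kappa, a generalization of the same idea).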

  6. Modeling reproducibility of porescale multiphase flow experiments

    Science.gov (United States)

    Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.

    2017-12-01

    Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in the microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.
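
A randomly varying boundary condition of this kind can be sketched as a nominal injection rate perturbed by Gaussian fluctuations; the 5% relative noise level below is an assumption for illustration, not a value from the study:

```python
import random

nominal_rate = 1.0   # nominal injection rate (units arbitrary, e.g. uL/min)
rel_noise = 0.05     # assumed 5% relative fluctuation of the syringe pump
random.seed(42)      # fixed seed so the stochastic realization is reproducible

# One realization of the fluctuating injection-rate time series.
rates = [nominal_rate * (1.0 + random.gauss(0.0, rel_noise)) for _ in range(1000)]
mean_rate = sum(rates) / len(rates)
```

Running an ensemble of simulations, each with a different realization of `rates`, is what makes the simulation stochastic and lets it bracket the experimental variability.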

  7. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  8. Guidelines for Reproducibly Building and Simulating Systems Biology Models.

    Science.gov (United States)

    Medley, J Kyle; Goldberg, Arthur P; Karr, Jonathan R

    2016-10-01

    Reproducibility is the cornerstone of the scientific method. However, currently, many systems biology models cannot easily be reproduced. This paper presents methods that address this problem. We analyzed the recent Mycoplasma genitalium whole-cell (WC) model to determine the requirements for reproducible modeling. We determined that reproducible modeling requires both repeatable model building and repeatable simulation. New standards and simulation software tools are needed to enhance and verify the reproducibility of modeling. New standards are needed to explicitly document every data source and assumption, and new deterministic parallel simulation tools are needed to quickly simulate large, complex models. We anticipate that these new standards and software will enable researchers to reproducibly build and simulate more complex models, including WC models.

  9. From alginate impressions to digital virtual models: accuracy and reproducibility.

    Science.gov (United States)

    Dalstra, Michel; Melsen, Birte

    2009-03-01

    To compare the accuracy and reproducibility of measurements performed on digital virtual models with those taken on plaster casts from models poured immediately after the impression was taken, the 'gold standard', and from plaster models poured following a 3-5 day shipping procedure of the alginate impression. Direct comparison of two measuring techniques. The study was conducted at the Department of Orthodontics, School of Dentistry, University of Aarhus, Denmark in 2006/2007. Twelve randomly selected orthodontic graduate students with informed consent. Three sets of alginate impressions were taken from the participants within 1 hour. Plaster models were poured immediately from two of the sets, while the third set was kept in transit in the mail for 3-5 days. Upon return, a plaster model was poured as well. Finally, digital models were made from the plaster models. A number of measurements were performed on the plaster casts with a digital calliper and on the corresponding digital models using the virtual measuring tool of the accompanying software. Afterwards these measurements were compared statistically. No statistical differences were found between the three sets of plaster models. The intra- and inter-observer variability is smaller for the measurements performed on the digital models. Sending alginate impressions by mail does not affect the quality and accuracy of plaster casts poured from them afterwards. Virtual measurements performed on digital models display less variability than the corresponding measurements performed with a calliper on the actual models.

  10. REPRODUCING THE OBSERVED ABUNDANCES IN RCB AND HdC STARS WITH POST-DOUBLE-DEGENERATE MERGER MODELS-CONSTRAINTS ON MERGER AND POST-MERGER SIMULATIONS AND PHYSICS PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Menon, Athira; Herwig, Falk; Denissenkov, Pavel A. [Department of Physics and Astronomy, University of Victoria, Victoria, BC V8P5C2 (Canada); Clayton, Geoffrey C.; Staff, Jan [Department of Physics and Astronomy, Louisiana State University, 202 Nicholson Hall, Tower Dr., Baton Rouge, LA 70803-4001 (United States); Pignatari, Marco [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Paxton, Bill [Kavli Institute for Theoretical Physics and Department of Physics, Kohn Hall, University of California, Santa Barbara, CA 93106 (United States)

    2013-07-20

    The R Coronae Borealis (RCB) stars are hydrogen-deficient, variable stars that are most likely the result of He-CO WD mergers. They display extremely low oxygen isotopic ratios, ¹⁶O/¹⁸O ≈ 1-10, ¹²C/¹³C ≥ 100, and enhancements up to 2.6 dex in F and in s-process elements from Zn to La, compared to solar. These abundances provide stringent constraints on the physical processes during and after the double-degenerate merger. As shown previously, O-isotopic ratios observed in RCB stars cannot result from the dynamic double-degenerate merger phase, and we now investigate the role of the long-term one-dimensional spherical post-merger evolution and nucleosynthesis based on realistic hydrodynamic merger progenitor models. We adopt a model for extra envelope mixing to represent processes driven by rotation originating in the dynamical merger. Comprehensive nucleosynthesis post-processing simulations for these stellar evolution models reproduce, for the first time, the full range of the observed abundances for almost all the elements measured in RCB stars: ¹⁶O/¹⁸O ratios between 9 and 15, C-isotopic ratios above 100, and ≈1.4-2.35 dex F enhancements, along with enrichments in s-process elements. The nucleosynthesis processes in our models constrain the length and temperature in the dynamic merger shell-of-fire feature as well as the envelope mixing in the post-merger phase. s-process elements originate either in the shell-of-fire merger feature or during the post-merger evolution, but the contribution from the asymptotic giant branch progenitors is negligible. The post-merger envelope mixing must eventually cease ≈10⁶ yr after the dynamic merger phase before the star enters the RCB phase.

  11. Can a coupled meteorology–chemistry model reproduce the ...

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21-year simulated results with observation-derived records from 1990 to 2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-Terra and MODIS-Aqua as well as long-term historical records from 11 AERONET sites were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both the top of atmosphere (TOA) and surface as well as surface SWR data derived from seven SURFRAD sites were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period of 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and northern Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA a
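
Trend comparisons like these reduce to fitting a least-squares slope to each annual series; a sketch with synthetic AOD values (not WRF-CMAQ or satellite data):

```python
def linear_trend(years, values):
    """Ordinary least-squares slope of values against years (trend per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Synthetic, steadily increasing annual-mean AOD over 1990-2010.
years = list(range(1990, 2011))
aod = [0.30 + 0.004 * (y - 1990) for y in years]
print(round(linear_trend(years, aod), 4))  # → 0.004
```

Computing the slope for both the simulated and the observation-derived series at each site or region, and comparing sign and magnitude, is the essence of the trend evaluation described above.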

  12. Modeling and evaluating repeatability and reproducibility of ordinal classifications

    NARCIS (Netherlands)

    de Mast, J.; van Wieringen, W.N.

    2010-01-01

    This paper argues that currently available methods for the assessment of the repeatability and reproducibility of ordinal classifications are not satisfactory. The paper aims to study whether we can modify a class of models from Item Response Theory, well established for the study of the reliability

  13. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  14. Reproducibility and Transparency in Ocean-Climate Modeling

    Science.gov (United States)

    Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.

    2015-12-01

    Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and variety of supercomputing technology. We have made progress on this problem in the context of a large project - the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for the code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets we provide the tools for generating these from the sources, rather than provide raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g. reading from or branching on uninitialized memory. To expose these we routinely run system tests, using a memory debugger, multiple compilers and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility makes using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model we provide (version controlled) digital notebooks that illustrate and record analysis of output. This has the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.

  15. Paleomagnetic analysis of curved thrust belts reproduced by physical models

    Science.gov (United States)

    Costa, Elisabetta; Speranza, Fabio

    2003-12-01

    This paper presents a new methodology for studying the evolution of curved mountain belts by means of paleomagnetic analyses performed on analogue models. Eleven models were designed to reproduce various tectonic settings in thin-skinned tectonics. Our models analyze in particular those features reported in the literature as possible causes for peculiar rotational patterns in the outermost as well as in the more internal fronts. In all the models the sedimentary cover was reproduced by frictional low-cohesion materials (sand and glass micro-beads), which detached either on frictional or on viscous layers. The latter were reproduced in the models by silicone. The sand forming the models had been previously mixed with magnetite-dominated powder. Before deformation, the models were magnetized by means of two permanent magnets generating within each model a quasi-linear magnetic field of intensity variable between 20 and 100 mT. After deformation, the models were cut into closely spaced vertical sections and sampled by means of 1×1-cm Plexiglas cylinders at several locations along curved fronts. Care was taken to collect paleomagnetic samples only within virtually undeformed thrust sheets, avoiding zones affected by pervasive shear. Afterwards, the natural remanent magnetization of these samples was measured, and alternating field demagnetization was used to isolate the principal components. The characteristic components of magnetization isolated were used to estimate the vertical-axis rotations occurring during model deformation. We find that indenters pushing into deforming belts from behind form non-rotational curved outer fronts. The more internal fronts show oroclinal-type rotations of a smaller magnitude than that expected for a perfect orocline. Lateral symmetrical obstacles in the foreland colliding with forward propagating belts produce non-rotational outer curved fronts as well, whereas in between and inside the obstacles a perfect orocline forms

  16. Intra-observer reproducibility and diagnostic performance of breast shear-wave elastography in Asian women.

    Science.gov (United States)

    Park, Hye Young; Han, Kyung Hwa; Yoon, Jung Hyun; Moon, Hee Jung; Kim, Min Jung; Kim, Eun-Kyung

    2014-06-01

    Our aim was to evaluate intra-observer reproducibility of shear-wave elastography (SWE) in Asian women. Sixty-four breast masses (24 malignant, 40 benign) were examined with SWE in 53 consecutive Asian women (mean age, 44.9 years). Two SWE images were obtained for each of the lesions. The intra-observer reproducibility was assessed by intra-class correlation coefficients (ICC). We also evaluated various clinicoradiologic factors that can influence reproducibility in SWE. The ICC of intra-observer reproducibility was 0.789. In the clinicoradiologic factor evaluation, masses surrounded by mixed fatty and glandular tissue (ICC: 0.619) showed lower intra-observer reproducibility compared with lesions that were surrounded by glandular tissue alone (ICC: 0.937). Overall, the intra-observer reproducibility of breast SWE was excellent in Asian women. However, it may decrease when breast tissue is in a heterogeneous background. Therefore, SWE should be performed carefully in these cases.
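
With two repeated measurements per lesion, a one-way intra-class correlation coefficient, ICC(1,1), can be computed from the between- and within-subject mean squares; the shear-wave values below are synthetic, not the study's (the study may have used a different ICC variant):

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for two measurements per subject.

    pairs: list of (measurement1, measurement2) tuples, one per lesion.
    """
    n, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n * k)
    ss_between = k * sum((sum(p) / k - grand) ** 2 for p in pairs)
    ss_within = sum((x - sum(p) / k) ** 2 for p in pairs for x in p)
    ms_between = ss_between / (n - 1)          # between-subject mean square
    ms_within = ss_within / (n * (k - 1))      # within-subject mean square
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Synthetic elasticity values (kPa): each lesion measured twice.
pairs = [(2.1, 2.3), (5.0, 4.8), (3.3, 3.1), (7.9, 8.2), (1.2, 1.4)]
```

Values near 1 indicate that repeated measurements agree well relative to the spread between lesions, which is the sense in which an ICC of 0.789 is read as good reproducibility.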

  17. A reproducible brain tumour model established from human glioblastoma biopsies

    International Nuclear Information System (INIS)

    Wang, Jian; Chekenya, Martha; Bjerkvig, Rolf; Enger, Per Ø; Miletic, Hrvoje; Sakariassen, Per Ø; Huszthy, Peter C; Jacobsen, Hege; Brekkå, Narve; Li, Xingang; Zhao, Peng; Mørk, Sverre

    2009-01-01

    Establishing clinically relevant animal models of glioblastoma multiforme (GBM) remains a challenge, and many commonly used cell line-based models do not recapitulate the invasive growth patterns of patient GBMs. Previously, we have reported the formation of highly invasive tumour xenografts in nude rats from human GBMs. However, implementing tumour models based on primary tissue requires that these models can be sufficiently standardised with consistently high take rates. In this work, we collected data on growth kinetics from a series of 29 biopsies xenografted in nude rats, and characterised this model with an emphasis on neuropathological and radiological features. The tumour take rate for xenografted GBM biopsies was 96% and remained close to 100% at subsequent passages in vivo, whereas only one of four lower grade tumours engrafted. Average time from transplantation to the onset of symptoms was 125 days ± 11.5 SEM. Histologically, the primary xenografts recapitulated the invasive features of the parent tumours while endothelial cell proliferations and necrosis were mostly absent. After 4-5 in vivo passages, the tumours became more vascular with necrotic areas, but also appeared more circumscribed. MRI typically revealed changes related to tumour growth, several months prior to the onset of symptoms. In vivo passaging of patient GBM biopsies produced tumours representative of the patient tumours, with high take rates and a reproducible disease course. The model provides combinations of angiogenic and invasive phenotypes and represents a good alternative to in vitro propagated cell lines for dissecting mechanisms of brain tumour progression

  18. Evaluation of Oceanic Surface Observation for Reproducing the Upper Ocean Structure in ECHAM5/MPI-OM

    Science.gov (United States)

    Luo, Hao; Zheng, Fei; Zhu, Jiang

    2017-12-01

    Better constraints of initial conditions from data assimilation are necessary for climate simulations and predictions, and they are particularly important for the ocean due to its long climate memory; as such, ocean data assimilation (ODA) is regarded as an effective tool for seasonal to decadal predictions. In this work, an ODA system is established for a coupled climate model (ECHAM5/MPI-OM), which can assimilate all available oceanic observations using an ensemble optimal interpolation approach. To validate and isolate the performance of different surface observations in reproducing air-sea climate variations in the model, a set of observing system simulation experiments (OSSEs) was performed over 150 model years. Generally, assimilating sea surface temperature, sea surface salinity, and sea surface height (SSH) can reasonably reproduce the climate variability and vertical structure of the upper ocean, and assimilating SSH achieves the best results compared to the true states. For the El Niño-Southern Oscillation (ENSO), assimilating different surface observations captures true aspects of ENSO well, but assimilating SSH can further enhance the accuracy of ENSO-related feedback processes in the coupled model, leading to a more reasonable ENSO evolution and air-sea interaction over the tropical Pacific. For ocean heat content, there are still limitations in reproducing the long time-scale variability in the North Atlantic, even if SSH has been taken into consideration. These results demonstrate the effectiveness of assimilating surface observations in capturing the interannual signal and, to some extent, the decadal signal but still highlight the necessity of assimilating profile data to reproduce specific decadal variability.
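
The ensemble optimal interpolation update at the heart of such an ODA system can be sketched in a few lines; the state size, observation operator, and error covariances below are illustrative assumptions, not the ECHAM5/MPI-OM configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_state, n_obs, n_ens = 8, 3, 40
ens = rng.standard_normal((n_state, n_ens))   # static ensemble of model anomalies
B = ens @ ens.T / (n_ens - 1)                 # background error covariance from ensemble

# Observation operator picking three "surface" state elements (assumed layout).
H = np.zeros((n_obs, n_state))
H[0, 0] = H[1, 3] = H[2, 6] = 1.0
R = 0.1 * np.eye(n_obs)                       # observation error covariance

x_b = rng.standard_normal(n_state)            # background (model) state
y = H @ x_b + 0.1 * rng.standard_normal(n_obs)  # synthetic surface observations

# EnOI analysis: x_a = x_b + K (y - H x_b), K = B H^T (H B H^T + R)^{-1}.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```

Because the gain spreads surface innovations through the covariances in `B`, assimilating only surface fields (SST, SSS, SSH) can still correct the subsurface state, which is the mechanism the OSSEs above evaluate.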

  19. Modelling soil erosion at European scale: towards harmonization and reproducibility

    Science.gov (United States)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2015-02-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to a decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical, and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor to better account for soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
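
    The RUSLE family of models referenced above is a multiplicative factor model. A minimal sketch of the classic factor product follows; the stoniness correction St stands in for the authors' extension, whose exact form is not given in the abstract, so it appears here only as an illustrative extra multiplier.

```python
def rusle_soil_loss(R, K, LS, C, P, St=1.0):
    """Annual soil loss A (t ha^-1 yr^-1) as the classic RUSLE product.

    R  : rainfall erosivity         K : soil erodibility
    LS : slope length/steepness     C : cover management
    P  : support practice           St: illustrative stoniness correction
                                        (placeholder for the authors' factor)
    """
    return R * K * LS * C * P * St

# Example: the model is linear in each factor, so halving erosivity
# halves the predicted loss, all else equal.
print(rusle_soil_loss(R=800, K=0.03, LS=1.5, C=0.2, P=1.0))  # ≈ 7.2
```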

  20. A reproducible brain tumour model established from human glioblastoma biopsies

    Directory of Open Access Journals (Sweden)

    Li Xingang

    2009-12-01

    Full Text Available Abstract Background Establishing clinically relevant animal models of glioblastoma multiforme (GBM) remains a challenge, and many commonly used cell line-based models do not recapitulate the invasive growth patterns of patient GBMs. Previously, we have reported the formation of highly invasive tumour xenografts in nude rats from human GBMs. However, implementing tumour models based on primary tissue requires that these models can be sufficiently standardised with consistently high take rates. Methods In this work, we collected data on growth kinetics from a material of 29 biopsies xenografted in nude rats, and characterised this model with an emphasis on neuropathological and radiological features. Results The tumour take rate for xenografted GBM biopsies was 96% and remained close to 100% at subsequent passages in vivo, whereas only one of four lower grade tumours engrafted. The average time from transplantation to the onset of symptoms was 125 ± 11.5 days (SEM). Histologically, the primary xenografts recapitulated the invasive features of the parent tumours, while endothelial cell proliferations and necrosis were mostly absent. After 4-5 in vivo passages, the tumours became more vascular with necrotic areas, but also appeared more circumscribed. MRI typically revealed changes related to tumour growth several months prior to the onset of symptoms. Conclusions In vivo passaging of patient GBM biopsies produced tumours representative of the patient tumours, with high take rates and a reproducible disease course. The model provides combinations of angiogenic and invasive phenotypes and represents a good alternative to in vitro propagated cell lines for dissecting mechanisms of brain tumour progression.

  1. Development of a Consistent and Reproducible Porcine Scald Burn Model

    Science.gov (United States)

    Kempf, Margit; Kimble, Roy; Cuttle, Leila

    2016-01-01

    There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns for a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations: 50°C for 1, 2, 5 and 10 minutes, and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1 and 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10-minute burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5-second burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153

  2. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    osteosarcoma model was shown to be feasible: the take rate was high, surgical mortality was negligible and the procedure was simple to perform and easily reproduced. It may be a useful tool in the investigation of antiangiogenic and anticancer therapeutics. Ultrasound was found to be a highly accurate tool for tumor diagnosis, localization and measurement and may be recommended for monitoring tumor growth in this model.

  3. Do detailed simulations with size-resolved microphysics reproduce basic features of observed cirrus ice size distributions?

    Science.gov (United States)

    Fridlind, A. M.; Atlas, R.; van Diedenhoven, B.; Ackerman, A. S.; Rind, D. H.; Harrington, J. Y.; McFarquhar, G. M.; Um, J.; Jackson, R.; Lawson, P.

    2017-12-01

    It has recently been suggested that seeding synoptic cirrus could have desirable characteristics as a geoengineering approach, but surprisingly large uncertainties remain in the fundamental parameters that govern cirrus properties, such as mass accommodation coefficient, ice crystal physical properties, aggregation efficiency, and ice nucleation rate from typical upper tropospheric aerosol. Only one synoptic cirrus model intercomparison study has been published to date, and studies that compare the shapes of observed and simulated ice size distributions remain sparse. Here we amend a recent model intercomparison setup using observations during two 2010 SPARTICUS campaign flights. We take a quasi-Lagrangian column approach and introduce an ensemble of gravity wave scenarios derived from collocated Doppler cloud radar retrievals of vertical wind speed. We use ice crystal properties derived from in situ cloud particle images, for the first time allowing smoothly varying and internally consistent treatments of nonspherical ice capacitance, fall speed, gravitational collection, and optical properties over all particle sizes in our model. We test two new parameterizations for mass accommodation coefficient as a function of size, temperature and water vapor supersaturation, and several ice nucleation scenarios. Comparison of results with in situ ice particle size distribution data, corrected using state-of-the-art algorithms to remove shattering artifacts, indicates that poorly constrained uncertainties in the number concentration of crystals smaller than 100 µm in maximum dimension still prohibit distinguishing which parameter combinations are more realistic. When projected area is concentrated at such sizes, the only parameter combination that reproduces observed size distribution properties uses a fixed mass accommodation coefficient of 0.01, on the low end of recently reported values. No simulations reproduce the observed abundance of such small crystals when the

  4. NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-08-01

    Full Text Available Software maintenance is an essential step in the software development life cycle. Nowadays, software companies spend approximately 45% of the total cost on maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes it is difficult to reproduce the reported bug with the information present in a bug report, and thus the bug is marked with resolution non-reproducible (NR). When NR bugs are reconsidered, a few of them might get fixed (NR-to-fix), leaving the others with the same resolution (NR). To analyse the behaviour of developers towards NR-to-fix and NR bugs, sentiment analysis of NR bug report textual contents has been conducted. The sentiment analysis of bug reports shows that NR bugs' sentiments incline towards more negativity than reproducible bugs. Also, there is a noticeable opinion drift found in the sentiments of NR-to-fix bug reports. Observations driven from this analysis were an inspiration to develop a model that can judge the fixability of NR bugs. Thus a framework, NRFixer, which predicts the probability of NR bug fixation, is proposed. NRFixer was evaluated along two dimensions. The first dimension considers meta-fields of bug reports (model-1), and the other dimension additionally incorporates the sentiments (model-2) of developers for prediction. Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest). The bug reports of Firefox and Eclipse projects were used to test NRFixer. In the Firefox and Eclipse projects, the J48 and Naive Bayes classifiers achieve the best prediction accuracy, respectively. It was observed that the inclusion of sentiments in the prediction model yields a rise in prediction accuracy ranging from 2 to 5% for various classifiers.
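
    A minimal, self-contained sketch of the kind of naive Bayes classification NRFixer applies to bug-report features. The toy vocabulary, labels and documents below are illustrative only, not the authors' actual feature set.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns priors, per-label word counts, vocab."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        priors[label] += 1
        for w in text.split():
            counts[label][w] += 1
            vocab.add(w)
    return priors, counts, vocab

def predict_nb(text, priors, counts, vocab):
    """Multinomial naive Bayes with Laplace (add-one) smoothing."""
    n = sum(priors.values())
    best_label, best_score = None, -math.inf
    for label in priors:
        total = sum(counts[label].values())
        score = math.log(priors[label] / n)
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data standing in for bug-report meta-fields/sentiment words:
docs = [("crash broken cannot reproduce", "NR"),
        ("patch applied works fixed", "NR-to-fix")]
priors, counts, vocab = train_nb(docs)
print(predict_nb("crash broken", priors, counts, vocab))  # → NR
```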

  5. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    Science.gov (United States)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants strongly inhibit peroxidation. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first-order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for the cost-effective discovery and development of medical agents with antioxidant action from medicinal plants.
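
    The first-order simplification mentioned above has a standard closed-form solution. As a generic sketch (the symbols C and k are illustrative, not the authors' notation), with C the peroxidation substrate concentration and k an effective rate constant:

```latex
\frac{dC}{dt} = -kC
\quad\Longrightarrow\quad
C(t) = C_0\, e^{-kt},
\qquad
t_{1/2} = \frac{\ln 2}{k}
```

    Under this reading, the inhibitory strength of an antioxidant can be expressed as the reduction of the effective rate constant k relative to the uninhibited system.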

  6. Using a 1-D model to reproduce diurnal SST signals

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2014-01-01

    The diurnal variability of SST has been extensively studied as it poses challenges for validating and calibrating satellite sensors, merging SST time series, oceanic and atmospheric modelling. As heat is significantly trapped close to the surface, the diurnal signal’s maximum amplitude is best...... captured by radiometers. The availability of infra-red retrievals from a geostationary orbit allows the hourly monitoring of the diurnal SST evolution. When infra-red SSTs are validated with in situ measurements a general mismatch is found, associated with the different reference depth of each type...... of measurement. A generally preferred approach to bridge the gap between in situ and remotely obtained measurements is through modelling of the upper ocean temperature. This ESA supported study focuses on the implementation of the 1 dimensional General Ocean Turbulence Model (GOTM), in order to resolve...

  7. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  8. Intra- and inter-observer reproducibility and generalizability of first trimester uterine artery pulsatility index by transabdominal and transvaginal ultrasound

    NARCIS (Netherlands)

    Marchi, Laura; Zwertbroek, Eva; Snelder, Judith; Kloosterman, Maaike; Bilardo, Caterina Maddalena

    2016-01-01

    Objectives The primary aim of the study was to assess intra-observer and inter-observer reproducibility and generalizability (general reliability) of first trimester Doppler measurements of uterine arteries (UtA) performed both transabdominally (TA) and transvaginally (TV). Secondary aims were to

  9. Why are models unable to reproduce multi-decadal trends in lower tropospheric baseline ozone levels?

    Science.gov (United States)

    Hu, L.; Liu, J.; Mickley, L. J.; Strahan, S. E.; Steenrod, S.

    2017-12-01

    Assessments of tropospheric ozone radiative forcing rely on accurate model simulations. Parrish et al. (2014) found that three chemistry-climate models (CCMs) overestimate present-day O3 mixing ratios and capture only 50% of the observed O3 increase over the last five decades at 12 baseline sites in the northern mid-latitudes, indicating large uncertainties in our understanding of the ozone trends and their implications for radiative forcing. Here we present comparisons of outputs from two chemical transport models (CTMs) - GEOS-Chem and the Global Modeling Initiative model - with O3 observations from the same sites and from the global ozonesonde network. Both CTMs are driven by reanalysis meteorological data (MERRA or MERRA2) and thus are expected to differ in atmospheric transport processes from the freely running CCMs. We test whether recent model developments leading to more active ozone chemistry affect the computed ozone sensitivity to perturbations in emissions. Preliminary results suggest these CTMs can reproduce present-day ozone levels but fail to capture the multi-decadal trend since 1980. Both models yield widespread overpredictions of free tropospheric ozone in the 1980s. Sensitivity studies in GEOS-Chem suggest that the model estimate of natural background ozone is too high. We discuss factors that contribute to the variability and trends of tropospheric ozone over the last 30 years, with a focus on intermodel differences in spatial resolution and in the representation of stratospheric chemistry, stratosphere-troposphere exchange, halogen chemistry, and biogenic VOC emissions and chemistry. We also discuss uncertainty in the historical emission inventories used in models, and how these affect the simulated ozone trends.

  10. COMBINE archive and OMEX format : One file to share all information to reproduce a modeling project

    NARCIS (Netherlands)

    Bergmann, Frank T.; Olivier, Brett G.; Soiland-Reyes, Stian

    2014-01-01

    Background: With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models,

  11. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [Area under the Receiver Operating Curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC for validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
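
    The cohort differences model above is scored with an AUC. A minimal rank-based AUC computation (the Mann-Whitney formulation) can be sketched in pure Python; the scores below are toy values, not the study's data.

```python
def auc(scores_pos, scores_neg):
    """Probability that a positive case scores higher than a negative one
    (ties count 1/2): the Mann-Whitney formulation of the AUC."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated cohorts give AUC = 1.0; heavy overlap approaches 0.5,
# which in the cohort differences model would signal reproducibility
# rather than transferability.
print(auc([0.9, 0.8, 0.7], [0.2, 0.1]))  # → 1.0
```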

  12. Inter-observer reproducibility in reporting on renal drainage in children with hydronephrosis: a large collaborative study

    International Nuclear Information System (INIS)

    Tondeur, Marianne; Piepsz, Amy; De Palma, Diego; Roca, Isabel; Ham, Hamphrey

    2008-01-01

    The goal of this study was to evaluate the inter-observer reproducibility in reporting on renal drainage obtained during 99mTc-MAG3 renography in children, when already processed data are offered to the observers. Because web site facilities were used for communication, 57 observers from five continents participated in the study. Twenty-three renograms, including furosemide stimulation and post-erect and post-micturition views, covering various patterns of drainage, were submitted to the observers. Images, curves and quantitative parameters were provided. Good or almost good drainage, partial drainage and poor or no drainage were the three possible responses for each kidney. An important bias was observed among the observers, some of them more systematically reporting the drainage as being good, while others had a general tendency to consider the drainage as poor. This resulted in rather poor inter-observer reproducibility, as for more than half of the kidneys, fewer than 80% of the observers agreed on one of the three responses. Analysis of the individual cases identified some obvious causes of discrepancy: the absence of a clear limit between partial and good or almost good drainage, the fact of including or neglecting the effect of micturition and change of the patient's position, the underestimation of drainage in the case of a flat renographic curve, and the difficulties of interpretation in the case of a small, poorly functioning kidney. There is an urgent need for better standardisation in estimating the quality of drainage. (orig.)
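
    Inter-observer agreement on categorical ratings such as these drainage classes is commonly summarized with Cohen's kappa. A minimal sketch for two observers follows; the confusion matrix is a toy example, not the study's data.

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix of two observers'
    ratings (rows: observer A's category, columns: observer B's)."""
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    p_o = sum(matrix[i][i] for i in range(k)) / n          # observed agreement
    row = [sum(matrix[i]) for i in range(k)]               # A's marginals
    col = [sum(matrix[i][j] for i in range(k)) for j in range(k)]  # B's marginals
    p_e = sum(row[i] * col[i] for i in range(k)) / n**2    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy 2x2 example (e.g. "good" vs "poor" drainage for 50 kidneys):
print(cohens_kappa([[20, 5], [10, 15]]))  # ≈ 0.4, i.e. only moderate agreement
```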

  13. Inter-observer reproducibility in reporting on renal drainage in children with hydronephrosis: a large collaborative study

    Energy Technology Data Exchange (ETDEWEB)

    Tondeur, Marianne; Piepsz, Amy [CHU Saint-Pierre, Departement des Radio-Isotopes, Brussels (Belgium); De Palma, Diego [Ospedale di Circolo, Nuclear Medicine, Varese (Italy); Roca, Isabel [Vall d' Hebron Hospital, Nuclear Medicine, Barcelona (Spain); Ham, Hamphrey [University Hospital, Department Nuclear Medicine, Ghent (Belgium)

    2008-03-15

    The goal of this study was to evaluate the inter-observer reproducibility in reporting on renal drainage obtained during 99mTc-MAG3 renography in children, when already processed data are offered to the observers. Because web site facilities were used for communication, 57 observers from five continents participated in the study. Twenty-three renograms, including furosemide stimulation and post-erect and post-micturition views, covering various patterns of drainage, were submitted to the observers. Images, curves and quantitative parameters were provided. Good or almost good drainage, partial drainage and poor or no drainage were the three possible responses for each kidney. An important bias was observed among the observers, some of them more systematically reporting the drainage as being good, while others had a general tendency to consider the drainage as poor. This resulted in rather poor inter-observer reproducibility, as for more than half of the kidneys, fewer than 80% of the observers agreed on one of the three responses. Analysis of the individual cases identified some obvious causes of discrepancy: the absence of a clear limit between partial and good or almost good drainage, the fact of including or neglecting the effect of micturition and change of the patient's position, the underestimation of drainage in the case of a flat renographic curve, and the difficulties of interpretation in the case of a small, poorly functioning kidney. There is an urgent need for better standardisation in estimating the quality of drainage. (orig.)

  14. A stable and reproducible human blood-brain barrier model derived from hematopoietic stem cells.

    Directory of Open Access Journals (Sweden)

    Romeo Cecchelli

    Full Text Available The human blood-brain barrier (BBB) is a selective barrier formed by human brain endothelial cells (hBECs), which is important to ensure adequate neuronal function and protect the central nervous system (CNS) from disease. The development of human in vitro BBB models is thus of utmost importance for drug discovery programs related to CNS diseases. Here, we describe a method to generate a human BBB model using cord blood-derived hematopoietic stem cells. The cells were initially differentiated into ECs followed by the induction of BBB properties by co-culture with pericytes. The brain-like endothelial cells (BLECs) express tight junctions and transporters typically observed in brain endothelium and maintain expression of most in vivo BBB properties for at least 20 days. The model is very reproducible since it can be generated from stem cells isolated from different donors and in different laboratories, and could be used to predict CNS distribution of compounds in humans. Finally, we provide evidence that the Wnt/β-catenin signaling pathway mediates in part the BBB inductive properties of pericytes.

  15. How well do CMIP5 Climate Models Reproduce the Hydrologic Cycle of the Colorado River Basin?

    Science.gov (United States)

    Gautam, J.; Mascaro, G.

    2017-12-01

    The Colorado River, which is the primary source of water for nearly 40 million people in the arid Southwestern states of the United States, has been experiencing an extended drought since 2000, which has led to a significant reduction in water supply. As water demands increase, one of the major challenges for water management in the region has been the quantification of uncertainties associated with streamflow predictions in the Colorado River Basin (CRB) under potential changes of future climate. Hence, testing the reliability of model predictions in the CRB is critical to addressing this challenge. In this study, we evaluated the performances of 17 General Circulation Models (GCMs) from the Coupled Model Intercomparison Project Phase Five (CMIP5) and 4 Regional Climate Models (RCMs) in reproducing the statistical properties of the hydrologic cycle in the CRB. We evaluated the water balance components at four nested sub-basins along with the inter-annual and intra-annual changes of precipitation (P), evaporation (E), runoff (R) and temperature (T) from 1979 to 2005. Most of the models captured the net water balance fairly well in the most-upstream basin but simulated a weak hydrological cycle in the evaporation channel at the downstream locations. The simulated monthly variability of P showed different patterns, with correlation coefficients ranging from -0.6 to 0.8 depending on the sub-basin, and with models from the same parent institution clustering together. Apart from the most-upstream sub-basin, where the models were mainly characterized by a negative seasonal bias in SON (of up to -50%), most of them had a positive bias in all seasons (of up to +260%) in the other three sub-basins. The models, however, captured the monthly variability of T well at all sites, with small inter-model variability and a relatively similar range of bias (-7 °C to +5 °C) across all seasons. A Mann-Kendall test was applied to the annual P and T time-series where the majority of the models
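
    The seasonal biases quoted above (e.g. -50% to +260%) follow the usual percent-bias definition for simulated versus observed totals. A minimal sketch, with toy numbers rather than CRB data:

```python
def percent_bias(sim, obs):
    """Percent bias of simulated vs observed totals: 100 * (S - O) / O."""
    s, o = sum(sim), sum(obs)
    return 100.0 * (s - o) / o

# A model simulating 90 mm of seasonal precipitation where 120 mm was
# observed is biased by -25%.
print(percent_bias([30, 30, 30], [40, 40, 40]))  # → -25.0
```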

  16. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    Science.gov (United States)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fitted estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power law based upscaling models can however be questioned due to the difficulty of linking model parameters with the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4), with values that depend on how heavy the tailing becomes. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple
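
    Estimating a power law exponent from a BTC tail is, at its simplest, ordinary least squares in log-log space. The sketch below uses synthetic data and plain regression; the paper's actual fitting procedure likely differs.

```python
import math

def powerlaw_slope(times, concs):
    """Estimate alpha in c(t) ~ t^(-alpha) by ordinary least squares
    on (log t, log c) pairs from the BTC tail."""
    xs = [math.log(t) for t in times]
    ys = [math.log(c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
           / sum((x - mx) ** 2 for x in xs)
    return -beta  # report the tail slope as a positive exponent

# Synthetic tail decaying exactly as t^-2:
times = [float(t) for t in range(1, 101)]
concs = [t ** -2 for t in times]
print(powerlaw_slope(times, concs))  # ≈ 2.0
```

    With real, subsampled data the fitted exponent fluctuates, which is exactly the subsampling issue the abstract raises.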

  17. MRI assessment of knee osteoarthritis: Knee Osteoarthritis Scoring System (KOSS) - inter-observer and intra-observer reproducibility of a compartment-based scoring system

    International Nuclear Information System (INIS)

    Kornaat, Peter R.; Ceulemans, Ruth Y.T.; Kroon, Herman M.; Bloem, Johan L.; Riyazi, Naghmeh; Kloppenburg, Margreet; Carter, Wayne O.; Woodworth, Thasia G.

    2005-01-01

    To develop a scoring system for quantifying osteoarthritic changes of the knee as identified by magnetic resonance (MR) imaging, and to determine its inter- and intra-observer reproducibility, in order to monitor medical therapy in research studies. Two independent observers evaluated 25 consecutive MR examinations of the knee in patients with previously defined clinical symptoms and radiological signs of osteoarthritis. On a 1.5 T system we acquired: coronal and sagittal proton density- and T2-weighted dual spin echo (SE) images, sagittal three-dimensional T1-weighted gradient echo (GE) images with fat suppression, and axial dual turbo SE images with fat suppression. Images were scored for the presence of cartilaginous lesions, osteophytes, subchondral cysts, bone marrow edema, and for meniscal abnormalities. The presence and size of effusion, synovitis and Baker's cyst were recorded. All parameters were ranked on a previously defined semiquantitative scale reflecting increasing severity of findings. Kappa, weighted kappa and the intraclass correlation coefficient (ICC) were used to determine inter- and intra-observer variability. Inter-observer reproducibility was good (ICC value 0.77). Inter- and intra-observer reproducibility for individual parameters was good to very good (inter-observer ICC 0.63-0.91; intra-observer ICC 0.76-0.96). The presented comprehensive MR scoring system for osteoarthritic changes of the knee has good to very good inter-observer and intra-observer reproducibility. Thus the score form with its definitions can be used for standardized assessment of osteoarthritic changes to monitor medical therapy in research studies. (orig.)
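
    The ICC values quoted above measure agreement on continuous or ordinal scores. As an illustration, a one-way random-effects ICC(1,1) can be computed in a few lines; the study likely used a two-way formulation, so this is only a sketch with toy ratings.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).

    ratings: list of subjects, each a list of k scores (one per observer).
    """
    n = len(ratings)            # subjects
    k = len(ratings[0])         # observers per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, subj_means) for x in r) \
          / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between two observers yields ICC = 1.0.
print(icc_oneway([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```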

  18. Investigation of dimensional variation in parts manufactured by fused deposition modeling using Gauge Repeatability and Reproducibility

    Science.gov (United States)

    Mohamed, Omar Ahmed; Hasan Masood, Syed; Lal Bhowmik, Jahar

    2018-02-01

    In the additive manufacturing (AM) market, industry and AM users frequently ask how repeatable and reproducible the fused deposition modeling (FDM) process is in delivering good dimensional accuracy. This paper aims to investigate and evaluate the repeatability and reproducibility of the FDM process through a systematic approach to answer this frequently asked question. A case study based on the statistical gage repeatability and reproducibility (gage R&R) technique is proposed to investigate the dimensional variations in printed parts from the FDM process. After running the simulation and analysis of the data, the FDM process capability is evaluated, which helps industry better understand the performance of FDM technology.
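
    The core of a gage R&R study is splitting measurement variation into repeatability (same operator, repeated trials) and reproducibility (between operators). The sketch below is a simplified two-component decomposition, not the full AIAG ANOVA method, and all data are toy numbers.

```python
def gage_rr(data):
    """Toy gage R&R variance decomposition.

    data[p][o] is the list of repeated measurements of part p by operator o.
    Repeatability  : pooled within-cell (part x operator) variance.
    Reproducibility: excess variance of operator means beyond what
                     repeatability alone would produce.
    """
    n_parts, n_ops = len(data), len(data[0])
    n_trials = len(data[0][0])

    def svar(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    cells = [trials for part in data for trials in part]
    repeatability = sum(svar(c) for c in cells) / len(cells)

    op_means = [sum(data[p][o][t] for p in range(n_parts)
                    for t in range(n_trials)) / (n_parts * n_trials)
                for o in range(n_ops)]
    raw = svar(op_means) if n_ops > 1 else 0.0
    reproducibility = max(0.0, raw - repeatability / (n_parts * n_trials))
    return repeatability, reproducibility

# Two operators who agree perfectly: all gage variation is repeatability.
data = [[[9, 11], [9, 11]], [[19, 21], [19, 21]]]
rep, repro = gage_rr(data)
print(rep, repro)  # → 2.0 0.0
```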

  19. Reproducibility of summertime diurnal precipitation over northern Eurasia simulated by CMIP5 climate models

    Science.gov (United States)

    Hirota, N.; Takayabu, Y. N.

    2015-12-01

    The reproducibility of diurnal precipitation over northern Eurasia simulated by CMIP5 climate models in their historical runs was evaluated in comparison with station data (NCDC-9813) and satellite data (GSMaP-V5). We first calculated diurnal cycles by averaging precipitation at each local solar time (LST) in June-July-August during 1981-2000 over the continent of northern Eurasia (0-180E, 45-90N). Then we examined the occurrence time of maximum precipitation and the contribution of diurnally varying precipitation to the total precipitation. The contribution of diurnal precipitation was about 21% in both NCDC-9813 and GSMaP-V5. The maximum precipitation occurred at 18LST in NCDC-9813 but 16LST in GSMaP-V5, indicating some uncertainties even in the observational datasets. The diurnal contribution of the CMIP5 models varied largely, from 11% to 62%, and their timing of the precipitation maximum ranged from 11LST to 20LST. Interestingly, the contribution and the timing had a strong negative correlation of -0.65: the models with larger diurnal precipitation showed the precipitation maximum earlier, around noon. Next, we compared the sensitivity of precipitation to surface temperature and tropospheric humidity between 5 models with large diurnal precipitation (LDMs) and 5 models with small diurnal precipitation (SDMs). Precipitation in LDMs showed high sensitivity to surface temperature, indicating its close relationship with local instability. On the other hand, synoptic disturbances were more active in SDMs, with a dominant role of large-scale condensation, and precipitation in SDMs was more closely related to tropospheric moisture. Therefore, the relative importance of local instability and synoptic disturbances is suggested to be an important factor in determining the contribution and timing of diurnal precipitation. Acknowledgment: This study is supported by Green Network of Excellence (GRENE) Program by the Ministry of Education, Culture, Sports, Science and Technology
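
    The two diurnal-cycle diagnostics used in this record can be sketched as follows: the local solar time of maximum precipitation, and a diurnal contribution taken here as the first-harmonic amplitude relative to the daily mean — one common definition, though the record does not spell out its own. The hourly values are synthetic.

```python
import math

def diurnal_metrics(hourly):
    """hourly: 24 mean precipitation rates indexed by local solar time."""
    peak_lst = max(range(24), key=lambda h: hourly[h])
    m = sum(hourly) / 24
    # First Fourier harmonic of the 24-point diurnal cycle.
    a = sum(p * math.cos(2 * math.pi * h / 24) for h, p in enumerate(hourly)) / 12
    b = sum(p * math.sin(2 * math.pi * h / 24) for h, p in enumerate(hourly)) / 12
    return peak_lst, (math.hypot(a, b) / m if m else 0.0)

# Synthetic cycle peaking at 16 LST with a 50% diurnal contribution.
hourly = [1 + 0.5 * math.cos(2 * math.pi * (h - 16) / 24) for h in range(24)]
peak_lst, contribution = diurnal_metrics(hourly)
```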

  20. The intra-observer reproducibility of cardiovascular magnetic resonance myocardial feature tracking strain assessment is independent of field strength

    International Nuclear Information System (INIS)

    Schuster, Andreas; Morton, Geraint; Hussain, Shazia T.

    2013-01-01

    Background: Cardiovascular magnetic resonance myocardial feature tracking (CMR-FT) is a promising novel method for quantification of myocardial wall mechanics from standard steady-state free precession (SSFP) images. We sought to determine whether magnetic field strength affects the intra-observer reproducibility of CMR-FT strain analysis. Methods: We studied 2 groups, each consisting of 10 healthy subjects, at 1.5 T or 3 T. Analysis was performed at baseline and after 4 weeks using dedicated CMR-FT prototype software (Tomtec, Germany) to analyze standard SSFP cine images. Right ventricular (RV) and left ventricular (LV) longitudinal strain (Ell-RV and Ell-LV) and LV long-axis radial strain (Err-LAX) were derived from the 4-chamber cine, and LV short-axis circumferential and radial strains (Ecc-SAX, Err-SAX) from the short-axis orientation. Strain parameters were assessed together with LV ejection fraction (EF) and volumes. Intra-observer reproducibility was determined by comparing the first and the second analysis in both groups. Results: In all volunteers resting strain parameters were successfully derived from the SSFP images. There was no difference in strain parameters, volumes and EF between field strengths (p > 0.05). In general Ecc-SAX was the most reproducible strain parameter as determined by the coefficient of variation (CV) at 1.5 T (CV 13.3% and 46%, global and segmental respectively) and 3 T (CV 17.2% and 31.1%, global and segmental respectively). The least reproducible parameter was Ell-RV (CV at 1.5 T 28.7% and 53.2%; at 3 T 43.5% and 63.3%, global and segmental respectively). Conclusions: CMR-FT results are similar with reasonable intra-observer reproducibility in different groups of volunteers at 1.5 T and 3 T. CMR-FT is a promising novel technique and our data indicate that results might be transferable between field strengths. However there is a considerable amount of segmental variability indicating that further refinements are needed before CMR

  1. Assessment of the potential forecasting skill of a global hydrological model in reproducing the occurrence of monthly flow extremes

    Directory of Open Access Journals (Sweden)

    N. Candogan Yossef

    2012-11-01

    Full Text Available As an initial step in assessing the prospect of using global hydrological models (GHMs for hydrological forecasting, this study investigates the skill of the GHM PCR-GLOBWB in reproducing the occurrence of past extremes in monthly discharge on a global scale. Global terrestrial hydrology from 1958 until 2001 is simulated by forcing PCR-GLOBWB with daily meteorological data obtained by downscaling the CRU dataset to daily fields using the ERA-40 reanalysis. Simulated discharge values are compared with observed monthly streamflow records for a selection of 20 large river basins that represent all continents and a wide range of climatic zones.

    We assess model skill in three ways, all of which contribute different information on the potential forecasting skill of a GHM. First, the general skill of the model in reproducing hydrographs is evaluated. Second, model skill in reproducing significantly higher and lower flows than the monthly normals is assessed in terms of skill scores used for forecasts of categorical events. Third, model skill in reproducing flood and drought events is assessed by constructing binary contingency tables for floods and droughts for each basin. The skill is then compared to that of a simple estimation of discharge from the water balance (P − E).

    The results show that the model has skill in all three types of assessments. After bias correction the model skill in simulating hydrographs is improved considerably. For most basins it is higher than that of the climatology. The skill is highest in reproducing monthly anomalies. The model also has skill in reproducing floods and droughts, with a markedly higher skill in floods. The model skill far exceeds that of the water balance estimate. We conclude that the prospect for using PCR-GLOBWB for monthly and seasonal forecasting of the occurrence of hydrological extremes is positive. We argue that this conclusion applies equally to other similar GHMs and
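
    The binary contingency-table evaluation described in this record tallies hits, misses, false alarms and correct negatives per basin. One common summary of such a table is the Heidke skill score (1 = perfect, 0 = no better than chance); the record does not state which score it uses, so this is an illustrative choice.

```python
def heidke_skill_score(observed, simulated):
    """observed, simulated: sequences of 0/1 event flags (e.g. flood months)."""
    hits = sum(1 for o, s in zip(observed, simulated) if o and s)
    misses = sum(1 for o, s in zip(observed, simulated) if o and not s)
    false_alarms = sum(1 for o, s in zip(observed, simulated) if s and not o)
    correct_neg = sum(1 for o, s in zip(observed, simulated) if not o and not s)
    n = hits + misses + false_alarms + correct_neg
    # Number of correct classifications expected by chance alone.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_neg + misses) * (correct_neg + false_alarms)) / n
    return (hits + correct_neg - expected) / (n - expected)
```

    A perfect event series scores 1; a series that is always wrong scores -1.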

  2. Using the mouse to model human disease: increasing validity and reproducibility

    Directory of Open Access Journals (Sweden)

    Monica J. Justice

    2016-02-01

    Full Text Available Experiments that use the mouse as a model for disease have recently come under scrutiny because of the repeated failure of data, particularly derived from preclinical studies, to be replicated or translated to humans. The usefulness of mouse models has been questioned because of irreproducibility and poor recapitulation of human conditions. Newer studies, however, point to bias in reporting results and improper data analysis as key factors that limit reproducibility and validity of preclinical mouse research. Inaccurate and incomplete descriptions of experimental conditions also contribute. Here, we provide guidance on best practice in mouse experimentation, focusing on appropriate selection and validation of the model, sources of variation and their influence on phenotypic outcomes, minimum requirements for control sets, and the importance of rigorous statistics. Our goal is to raise the standards in mouse disease modeling to enhance reproducibility, reliability and clinical translation of findings.

  3. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer.

    Science.gov (United States)

    Kondo, Kosuke; Nemoto, Masaaki; Masuda, Hiroyuki; Okonogi, Shinichi; Nomoto, Jun; Harada, Naoyuki; Sugo, Nobuo; Miyazaki, Chikao

    2015-01-01

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but statistically significant errors were noted in their thickness. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and the size of the cerebral aneurysm should be judged comprehensively, taking other neuroimaging findings and these errors into account.

  4. The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.

    Science.gov (United States)

    Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y

    2016-04-01

    To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained and digital models were extracted from CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on the plaster models and the corresponding digital models. The measurements included arch length and width at different sites. The paired t test and Bland-Altman analysis were used to evaluate the accuracy of measurements on digital models compared to the plaster models. In addition, intraclass correlation coefficients (ICCs) were used to evaluate the reproducibility of the measurements in order to assess intraobserver reliability. The statistical analysis showed significant differences in 5 out of 14 variables, with mean differences ranging from -0.48 to 0.51 mm. The Bland-Altman analysis revealed mean differences of (0.14 ± 0.56) and (0.05 ± 0.96) mm, with limits of agreement between the two methods ranging from -1.2 to 0.96 and from -1.8 to 1.9 mm in the maxilla and the mandible, respectively. Intraobserver reliability values were determined for all 14 variables of the two types of models separately. The mean ICC value for the plaster models was 0.984 (0.924-0.999), while it was 0.946 for the CBCT models (range 0.850 to 0.985). Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
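
    The Bland-Altman analysis used in this record reduces to the bias (mean difference between the two methods) and the 95% limits of agreement. A minimal sketch, with illustrative measurement pairs rather than the study's data:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return the bias and 95% limits of agreement between two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    spread = stdev(diffs)                      # sample SD of the differences
    return bias, (bias - 1.96 * spread, bias + 1.96 * spread)

# Illustrative plaster vs. digital measurements (mm).
bias, (loa_low, loa_high) = bland_altman([10.0, 11.0, 12.0, 13.0],
                                         [10.1, 10.9, 12.2, 12.8])
```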

  5. Pharmacokinetic Modelling to Predict FVIII:C Response to Desmopressin and Its Reproducibility in Nonsevere Haemophilia A Patients.

    Science.gov (United States)

    Schütte, Lisette M; van Hest, Reinier M; Stoof, Sara C M; Leebeek, Frank W G; Cnossen, Marjon H; Kruip, Marieke J H A; Mathôt, Ron A A

    2018-04-01

     Nonsevere haemophilia A (HA) patients can be treated with desmopressin. The response of factor VIII activity (FVIII:C) differs between patients and is difficult to predict. Our aims were to describe the FVIII:C response after desmopressin and its reproducibility by population pharmacokinetic (PK) modelling. Retrospective data of 128 nonsevere HA patients (age 7-75 years) receiving an intravenous or intranasal dose of desmopressin were used. PK modelling of FVIII:C was performed by nonlinear mixed effect modelling. Reproducibility of the FVIII:C response was defined as less than 25% difference in peak FVIII:C between administrations. A total of 623 FVIII:C measurements from 142 desmopressin administrations were available; 14 patients had received two administrations on different occasions. The FVIII:C time profile was best described by a two-compartment model with first-order absorption and elimination. Interindividual variability of the estimated baseline FVIII:C, central volume of distribution and clearance was 37, 43 and 50%, respectively. The most recently measured FVIII:C (FVIII-recent) was significantly associated with the FVIII:C response to desmopressin, with an FVIII:C increase of 0.47 IU/mL (median; interquartile range: 0.32-0.65 IU/mL; n = 142). The FVIII:C response was reproducible in 6 out of 14 patients receiving two desmopressin administrations. The FVIII:C response to desmopressin in nonsevere HA patients was adequately described by a population PK model. Large variability in FVIII:C response was observed, which could only partially be explained by FVIII-recent. The FVIII:C response was not reproducible in a small subset of patients. Therefore, monitoring FVIII:C around surgeries or bleeding might be considered. Research is needed to study this further.
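
    The two-compartment structure with first-order absorption and elimination named in this record can be sketched with a simple Euler integration of the compartment amounts. The rate constants and unit dose below are hypothetical placeholders, not the study's population estimates.

```python
def simulate_response(dose=1.0, ka=2.0, ke=0.3, k12=0.2, k21=0.1,
                      dt=0.001, hours=24.0):
    """Euler integration of depot -> central <-> peripheral amounts."""
    depot, central, peripheral = dose, 0.0, 0.0
    peak, t = 0.0, 0.0
    while t < hours:
        d_depot = -ka * depot                      # first-order absorption
        d_central = ka * depot - (ke + k12) * central + k21 * peripheral
        d_peripheral = k12 * central - k21 * peripheral
        depot += d_depot * dt
        central += d_central * dt
        peripheral += d_peripheral * dt
        peak = max(peak, central)                  # track the peak response
        t += dt
    return peak, central

peak_amount, amount_24h = simulate_response()
```

    The central-compartment amount rises to a peak as the depot empties and then declines, the biphasic shape that motivates the two-compartment fit.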

  6. Cellular automaton model in the fundamental diagram approach reproducing the synchronized outflow of wide moving jams

    International Nuclear Information System (INIS)

    Tian, Jun-fang; Yuan, Zhen-zhou; Jia, Bin; Fan, Hong-qiang; Wang, Tao

    2012-01-01

    Velocity effect and critical velocity are incorporated into the average space gap cellular automaton model [J.F. Tian, et al., Phys. A 391 (2012) 3129], which was able to reproduce many spatiotemporal dynamics reported by the three-phase theory except the synchronized outflow of wide moving jams. The physics of traffic breakdown is explained. Various congested patterns induced by the on-ramp are reproduced. It is shown that the occurrence of synchronized outflow and of free outflow of wide moving jams is closely related to drivers' time delay in acceleration at the downstream jam front and to the critical velocity, respectively. -- Highlights: ► Velocity effect is added into the average space gap cellular automaton model. ► The physics of traffic breakdown is explained. ► The probabilistic nature of traffic breakdown is simulated. ► Various congested patterns induced by the on-ramp are reproduced. ► The occurrence of synchronized outflow of jams depends on drivers' time delay.
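
    The model in this record extends the cellular-automaton family descended from the classic Nagel-Schreckenberg rules (accelerate, brake to the gap ahead, random slowdown, move). A sketch of that baseline CA on a ring road — not the average-space-gap model itself — illustrates the update scheme:

```python
import random

def nasch_step(positions, velocities, length, vmax=5, p_slow=0.3, rng=None):
    """One parallel Nagel-Schreckenberg update on a ring of `length` cells."""
    rng = rng or random.Random(0)
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    pos = [positions[i] for i in order]
    vel = [velocities[i] for i in order]
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % length   # empty cells ahead
        v = min(vel[i] + 1, vmax, gap)                   # accelerate, then brake
        if v > 0 and rng.random() < p_slow:              # random slowdown
            v -= 1
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % length for i in range(n)]
    return new_pos, new_vel

# 25 cars on a ring of 100 cells, initially stopped and evenly spaced.
rng = random.Random(1)
pos, vel = list(range(0, 100, 4)), [0] * 25
for _ in range(50):
    pos, vel = nasch_step(pos, vel, 100, rng=rng)
```

    Braking to the gap guarantees collision-free parallel updates; the random slowdown term is what lets jams nucleate spontaneously.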

  7. COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.

    Science.gov (United States)

    Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas

    2014-12-14

    With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
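
    The ZIP-plus-manifest layout described in this record can be sketched with the standard library alone. The format identifiers follow the identifiers.org convention from the OMEX specification; the SBML payload and file names here are placeholders, not a complete archive.

```python
import zipfile

MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="." format="http://identifiers.org/combine.specifications/omex"/>
  <content location="./manifest.xml"
           format="http://identifiers.org/combine.specifications/omex-manifest"/>
  <content location="./model.xml"
           format="http://identifiers.org/combine.specifications/sbml"/>
</omexManifest>
"""

def write_combine_archive(path, model_xml):
    """Bundle a model file and its manifest into a single COMBINE archive."""
    with zipfile.ZipFile(path, "w") as archive:
        archive.writestr("manifest.xml", MANIFEST)
        archive.writestr("model.xml", model_xml)

write_combine_archive("example.omex", "<sbml/>")
```

    A real archive would typically also carry a simulation description (e.g. SED-ML) and an optional metadata file, each listed as a further `content` entry.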

  8. Contrasting response to nutrient manipulation in Arctic mesocosms are reproduced by a minimum microbial food web model.

    Science.gov (United States)

    Larsen, Aud; Egge, Jorun K; Nejstgaard, Jens C; Di Capua, Iole; Thyrhaug, Runar; Bratbak, Gunnar; Thingstad, T Frede

    2015-03-01

    A minimum mathematical model of the marine pelagic microbial food web has previously been shown to reproduce central aspects of the observed system response to different bottom-up manipulations in a mesocosm experiment, Microbial Ecosystem Dynamics (MEDEA), in Danish waters. In this study, we apply this model to two mesocosm experiments (Polar Aquatic Microbial Ecology (PAME)-I and PAME-II) conducted at the Arctic location Kongsfjorden, Svalbard. The different responses of the microbial community to similar nutrient manipulation in the three mesocosm experiments may be described as diatom-dominated (MEDEA), bacteria-dominated (PAME-I), and flagellate-dominated (PAME-II). When ciliates are allowed to feed on small diatoms, the model describing the diatom-dominated MEDEA experiment gives a bacteria-dominated response as observed in PAME-I, in which the diatom community comprised almost exclusively small-sized cells. Introducing a high initial mesozooplankton stock as observed in PAME-II, the model gives a flagellate-dominated response in accordance with the observations from this experiment as well. The ability of the model, originally developed for temperate waters, to reproduce population dynamics in a 10°C colder Arctic fjord does not support the existence of important shifts in population balances over this temperature range. Rather, it suggests a quite resilient microbial food web when adapted to in situ temperature. The sensitivity of the model response to its mesozooplankton component suggests, however, that the seasonal vertical migration of Arctic copepods may be a strong forcing factor on Arctic microbial food webs.

  9. Validation of EURO-CORDEX regional climate models in reproducing the variability of precipitation extremes in Romania

    Science.gov (United States)

    Dumitrescu, Alexandru; Busuioc, Aristita

    2016-04-01

    EURO-CORDEX is the European branch of the international CORDEX initiative that aims to provide improved regional climate change projections for Europe. The main objective of this paper is to document the performance of individual models in reproducing the variability of precipitation extremes in Romania. Here an ensemble of three EURO-CORDEX regional climate models (RCMs) (scenario RCP4.5) is analysed and inter-compared: DMI-HIRHAM5, KNMI-RACMO2.2 and MPI-REMO. Compared to previous studies, in which RCM validation for the Romanian climate has mainly been performed on the mean state and at station scale, a more quantitative approach to precipitation extremes is proposed. In this respect, to allow a more reliable comparison with observations, a high-resolution daily precipitation gridded data set was used as the observational reference (CLIMHYDEX project). The comparison between the RCM outputs and observed grid point values was made by calculating three extreme precipitation indices, recommended by the Expert Team on Climate Change Detection Indices (ETCCDI), for the 1976-2005 period: R10MM, annual count of days when precipitation ≥10 mm; RX5DAY, annual maximum 5-day precipitation; and R95P, fraction of annual total precipitation due to daily precipitation > the 95th percentile. The RCMs' capability to reproduce the mean state of these variables, as well as the main modes of their spatial variability (given by the first three EOF patterns), is analysed. The investigation confirms the ability of the RCMs to simulate the main features of precipitation extreme variability over Romania, but some deficiencies in reproducing their regional characteristics were found (for example, overestimation of the mean state, especially over the extra-Carpathian regions).
This work has been realised within the research project "Changes in climate extremes and associated impact in hydrological events in Romania" (CLIMHYDEX), code PN II-ID-2011-2-0073, financed by the Romanian
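
    The three ETCCDI indices named in this record can be sketched from one year of daily precipitation totals (mm). For simplicity the 95th percentile here is taken over the wet days (≥ 1 mm) of the same series, whereas ETCCDI prescribes a base-period climatology; the synthetic year below is purely illustrative.

```python
def etccdi_indices(daily):
    """daily: one year of daily precipitation totals in mm."""
    r10mm = sum(1 for p in daily if p >= 10.0)            # heavy-rain days
    rx5day = max(sum(daily[i:i + 5]) for i in range(len(daily) - 4))
    wet = sorted(p for p in daily if p >= 1.0)
    p95 = wet[int(0.95 * (len(wet) - 1))] if wet else 0.0
    total = sum(daily)
    # Fraction (%) of the annual total falling on very wet days.
    r95p = 100.0 * sum(p for p in daily if p > p95) / total if total else 0.0
    return r10mm, rx5day, r95p

# One synthetic year: 360 dry days followed by a five-day wet spell.
r10mm, rx5day, r95p = etccdi_indices([0.0] * 360 + [5.0, 12.0, 20.0, 3.0, 15.0])
```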

  10. A novel, comprehensive, and reproducible porcine model for determining the timing of bruises in forensic pathology

    DEFF Research Database (Denmark)

    Barington, Kristiane; Jensen, Henrik Elvang

    2016-01-01

    Purpose Calculating the timing of bruises is crucial in forensic pathology but is a challenging discipline in both human and veterinary medicine. A mechanical device for inflicting bruises in pigs was developed and validated, and the pathological reactions in the bruises were studied over time......-dependent response. Combining these parameters, bruises could be grouped as being either less than 4 h old or between 4 and 10 h of age. Gross lesions and changes in the epidermis and dermis were inconclusive with respect to time determination. Conclusions The model was reproducible and resembled forensic cases...

  11. A novel highly reproducible and lethal nonhuman primate model for orthopox virus infection.

    Directory of Open Access Journals (Sweden)

    Marit Kramski

    Full Text Available The intentional re-introduction of Variola virus (VARV), the agent of smallpox, into the human population is of great concern due to its bioterroristic potential. Moreover, zoonotic infections with Cowpox (CPXV) and Monkeypox virus (MPXV) cause severe diseases in humans. Smallpox vaccines presently available can have severe adverse effects that are no longer acceptable. The efficacy and safety of new vaccines and antiviral drugs for use in humans can only be demonstrated in animal models. The existing nonhuman primate models, using VARV and MPXV, need very high viral doses that have to be applied intravenously or intratracheally to induce a lethal infection in macaques. To overcome these drawbacks, the infectivity and pathogenicity of a particular CPXV was evaluated in the common marmoset (Callithrix jacchus). A CPXV named calpox virus was isolated from a lethal orthopox virus (OPV) outbreak in New World monkeys. We demonstrated that marmosets infected with calpox virus, not only via the intravenous but also the intranasal route, reproducibly develop symptoms resembling smallpox in humans. Infected animals died within 1-3 days after onset of symptoms, even when very low infectious viral doses of 5 × 10^2 pfu were applied intranasally. Infectious virus was demonstrated in blood, saliva and all organs analyzed. We present the first characterization of a new OPV infection model inducing a disease in common marmosets comparable to smallpox in humans. Intranasal virus inoculation, mimicking the natural route of smallpox infection, led to reproducible infection. In vivo titration resulted in an MID50 (minimal monkey infectious dose, 50%) of 8.3 × 10^2 pfu of calpox virus, which is approximately 10,000-fold lower than the MPXV and VARV doses applied in the macaque models. Therefore, the calpox virus/marmoset model is a suitable nonhuman primate model for the validation of vaccines and antiviral drugs.
Furthermore, this model can help study mechanisms of OPV pathogenesis.

  12. Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2014-01-01

    Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data...... events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads...... to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...

  13. Stratospheric dryness: model simulations and satellite observations

    Directory of Open Access Journals (Sweden)

    J. Lelieveld

    2007-01-01

    Full Text Available The mechanisms responsible for the extreme dryness of the stratosphere have been debated for decades. A key difficulty has been the lack of comprehensive models which are able to reproduce the observations. Here we examine results from the coupled lower-middle atmosphere chemistry general circulation model ECHAM5/MESSy1 together with satellite observations. Our model results match observed temperatures in the tropical lower stratosphere and realistically represent the seasonal and inter-annual variability of water vapor. The model reproduces the very low water vapor mixing ratios (below 2 ppmv) periodically observed at the tropical tropopause near 100 hPa, as well as the characteristic tape recorder signal up to about 10 hPa, providing evidence that the dehydration mechanism is well-captured. Our results confirm that the entry of tropospheric air into the tropical stratosphere is forced by large-scale wave dynamics, whereas radiative cooling regionally decelerates upwelling and can even cause downwelling. Thin cirrus forms in the cold air above cumulonimbus clouds, and the associated sedimentation of ice particles between 100 and 200 hPa reduces water mass fluxes by nearly two orders of magnitude compared to air mass fluxes. Transport into the stratosphere is supported by regional net radiative heating, to a large extent in the outer tropics. During summer very deep monsoon convection over Southeast Asia, centered over Tibet, moistens the stratosphere.

  14. Intra- and inter-observer reproducibility of global and regional magnetic resonance feature tracking derived strain parameters of the left and right ventricle

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Björn, E-mail: bjoernschmidt1989@gmx.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Dick, Anastasia, E-mail: anastasia-dick@web.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Treutlein, Melanie, E-mail: melanie-treutlein@web.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Schiller, Petra, E-mail: petra.schiller@uni-koeln.de [Institute of Medical Statistics, Informatics and Epidemiology, University of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Bunck, Alexander C., E-mail: alexander.bunck@uk-koeln.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Maintz, David, E-mail: david.maintz@uk-koeln.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Baeßler, Bettina, E-mail: bettina.baessler@uk-koeln.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany)

    2017-04-15

    Highlights: • Left and right ventricular CMR feature tracking is highly reproducible. • The only exception is radial strain and strain rate. • Sample size estimations are presented as a practical reference for future studies. - Abstract: Objectives: To investigate the reproducibility of regional and global strain and strain rate (SR) parameters of both ventricles and to determine sample sizes for all investigated strain and SR parameters in order to generate a practical reference for future studies. Materials and methods: The study population consisted of 20 healthy individuals and 20 patients with acute myocarditis. Cine sequences in three horizontal long axis views and a stack of short axis views covering the entire left and right ventricle (LV, RV) were retrospectively analysed using a dedicated feature tracking (FT) software algorithm (TOMTEC). For intra-observer analysis, one observer analysed CMR images of all patients and volunteers twice. For inter-observer analysis, three additional blinded observers analysed the same datasets once. Intra- and inter-observer reproducibility were tested in all patients and controls using Bland-Altman analyses, intra-class correlation coefficients (ICCs) and coefficients of variation. Results: Intra-observer reproducibility of global LV strain and SR parameters was excellent (range of ICCs: 0.81–1.00), the only exception being global radial SR with a poor reproducibility (ICC 0.23). On a regional level, basal and midventricular strain and SR parameters were more reproducible when compared to apical parameters. Inter-observer reproducibility of all LV parameters was slightly lower than intra-observer reproducibility, yet still good to excellent for all global and regional longitudinal and circumferential strain and SR parameters (range of ICCs: 0.66–0.93). Similar to the LV, all global RV longitudinal and circumferential strain and SR parameters showed an excellent reproducibility (range of ICCs: 0.75–0

  15. Intra- and inter-observer reproducibility of global and regional magnetic resonance feature tracking derived strain parameters of the left and right ventricle

    International Nuclear Information System (INIS)

    Schmidt, Björn; Dick, Anastasia; Treutlein, Melanie; Schiller, Petra; Bunck, Alexander C.; Maintz, David; Baeßler, Bettina

    2017-01-01

    Highlights: • Left and right ventricular CMR feature tracking is highly reproducible. • The only exception is radial strain and strain rate. • Sample size estimations are presented as a practical reference for future studies. - Abstract: Objectives: To investigate the reproducibility of regional and global strain and strain rate (SR) parameters of both ventricles and to determine sample sizes for all investigated strain and SR parameters in order to generate a practical reference for future studies. Materials and methods: The study population consisted of 20 healthy individuals and 20 patients with acute myocarditis. Cine sequences in three horizontal long axis views and a stack of short axis views covering the entire left and right ventricle (LV, RV) were retrospectively analysed using a dedicated feature tracking (FT) software algorithm (TOMTEC). For intra-observer analysis, one observer analysed CMR images of all patients and volunteers twice. For inter-observer analysis, three additional blinded observers analysed the same datasets once. Intra- and inter-observer reproducibility were tested in all patients and controls using Bland-Altman analyses, intra-class correlation coefficients (ICCs) and coefficients of variation. Results: Intra-observer reproducibility of global LV strain and SR parameters was excellent (range of ICCs: 0.81–1.00), the only exception being global radial SR with a poor reproducibility (ICC 0.23). On a regional level, basal and midventricular strain and SR parameters were more reproducible when compared to apical parameters. Inter-observer reproducibility of all LV parameters was slightly lower than intra-observer reproducibility, yet still good to excellent for all global and regional longitudinal and circumferential strain and SR parameters (range of ICCs: 0.66–0.93). Similar to the LV, all global RV longitudinal and circumferential strain and SR parameters showed an excellent reproducibility (range of ICCs: 0.75–0

  16. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    Science.gov (United States)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a class of one-dimensional structured solids by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.

  17. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase in cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
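
    The paper's central idea, randomly occurring oncogenic mutations accumulating in a neural stem cell pool, can be caricatured in a few lines. The following is a toy Monte Carlo with entirely hypothetical parameter values, not the authors' calibrated model:

```python
import random

def first_glioma_age(n_cells=1000, divisions_per_year=10, mut_prob=4e-4,
                     mutations_needed=4, max_age=100, rng=random):
    """Toy Monte Carlo: each year every stem cell undergoes a fixed number of
    divisions, each of which adds an oncogenic mutation with probability
    `mut_prob`; return the age at which any cell first carries the required
    number of mutations, or None if that never happens."""
    mutations = [0] * n_cells
    for age in range(1, max_age + 1):
        for i in range(n_cells):
            for _ in range(divisions_per_year):
                if rng.random() < mut_prob:
                    mutations[i] += 1
            if mutations[i] >= mutations_needed:
                return age
    return None
```

    Repeating this over many runs and histogramming the returned ages yields an incidence-by-age curve; varying cell number and division rate with age, as the paper does, changes where that curve peaks.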

  18. Shelter models and observations

    DEFF Research Database (Denmark)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide

    This report documents part of the work performed by work package (WP) 3 of the ‘Online WAsP’ project funded by the Danish Energy Technology and Demonstration Program (EUDP). WP3 initially identified the shortcomings of the current WAsP engine for small and medium wind turbines (Peña et al., 2014b......), adapted the WAsP engine to OnlineWAsP (www.wasponline.dk), and made an effort to quantify the error and the uncertainty, first of the obstacle model in WAsP and later of the WAsP model chain. This report documents the work done for the obstacle model. In addition, EUDP supports the IEA task 27 on ‘small...... in the wake of a fence. The experiment is the basis of the study of the error and uncertainty of the obstacle models....

  19. Mouse Models of Diet-Induced Nonalcoholic Steatohepatitis Reproduce the Heterogeneity of the Human Disease

    Science.gov (United States)

    Machado, Mariana Verdelho; Michelotti, Gregory Alexander; Xie, Guanhua; de Almeida, Thiago Pereira; Boursier, Jerome; Bohnic, Brittany; Guy, Cynthia D.; Diehl, Anna Mae

    2015-01-01

    Background and aims Non-alcoholic steatohepatitis (NASH), the potentially progressive form of nonalcoholic fatty liver disease (NAFLD), is the pandemic liver disease of our time. Although there are several animal models of NASH, consensus regarding the optimal model is lacking. We aimed to compare features of NASH in the two most widely-used mouse models: methionine-choline deficient (MCD) diet and Western diet. Methods Mice were fed standard chow, MCD diet for 8 weeks, or Western diet (45% energy from fat, predominantly saturated fat, with 0.2% cholesterol, plus drinking water supplemented with fructose and glucose) for 16 weeks. Liver pathology and metabolic profile were compared. Results The metabolic profile associated with human NASH was better mimicked by Western diet. Although hepatic steatosis (i.e., triglyceride accumulation) was also more severe, liver non-esterified fatty acid content was lower than in the MCD diet group. NASH was also less severe and less reproducible in the Western diet model, as evidenced by less liver cell death/apoptosis, inflammation, ductular reaction, and fibrosis. Various mechanisms implicated in human NASH pathogenesis/progression were also less robust in the Western diet model, including oxidative stress, ER stress, autophagy deregulation, and hedgehog pathway activation. Conclusion Feeding mice a Western diet models metabolic perturbations that are common in humans with mild NASH, whereas administration of a MCD diet better models the pathobiological mechanisms that cause human NAFLD to progress to advanced NASH. PMID:26017539

  20. Mouse models of diet-induced nonalcoholic steatohepatitis reproduce the heterogeneity of the human disease.

    Directory of Open Access Journals (Sweden)

    Mariana Verdelho Machado

    Full Text Available Non-alcoholic steatohepatitis (NASH), the potentially progressive form of nonalcoholic fatty liver disease (NAFLD), is the pandemic liver disease of our time. Although there are several animal models of NASH, consensus regarding the optimal model is lacking. We aimed to compare features of NASH in the two most widely-used mouse models: methionine-choline deficient (MCD) diet and Western diet. Mice were fed standard chow, MCD diet for 8 weeks, or Western diet (45% energy from fat, predominantly saturated fat, with 0.2% cholesterol, plus drinking water supplemented with fructose and glucose) for 16 weeks. Liver pathology and metabolic profile were compared. The metabolic profile associated with human NASH was better mimicked by Western diet. Although hepatic steatosis (i.e., triglyceride accumulation) was also more severe, liver non-esterified fatty acid content was lower than in the MCD diet group. NASH was also less severe and less reproducible in the Western diet model, as evidenced by less liver cell death/apoptosis, inflammation, ductular reaction, and fibrosis. Various mechanisms implicated in human NASH pathogenesis/progression were also less robust in the Western diet model, including oxidative stress, ER stress, autophagy deregulation, and hedgehog pathway activation. Feeding mice a Western diet models metabolic perturbations that are common in humans with mild NASH, whereas administration of a MCD diet better models the pathobiological mechanisms that cause human NAFLD to progress to advanced NASH.

  1. Reproducibility analysis of measurements with a mechanical semiautomatic eye model for evaluation of intraocular lenses

    Science.gov (United States)

    Rank, Elisabet; Traxler, Lukas; Bayer, Natascha; Reutterer, Bernd; Lux, Kirsten; Drauschke, Andreas

    2014-03-01

    Mechanical eye models are used to validate ex vivo the optical quality of intraocular lenses (IOLs). The quality measurement and test instructions for IOLs are defined in ISO 11979-2. However, it has been mentioned in the literature that these test instructions can lead to inaccurate measurements for some modern IOL designs. Reproducibility of alignment and measurement processes is presented, performed with a semiautomatic mechanical ex vivo eye model based on optical properties published by Liou and Brennan, at a scale of 1:1. The cornea, the iris aperture and the IOL itself are separately changeable within the eye model. The adjustment of the IOL can be manipulated by automatic decentration and tilt of the IOL in reference to the optical axis of the whole system, which is defined by the line connecting the central point of the artificial cornea and the iris aperture. With the presented measurement setup, two quality criteria are measurable: the modulation transfer function (MTF) and the Strehl ratio. First, the reproducibility of the alignment process for definition of initial conditions of the lateral position and tilt in reference to the optical axis of the system is investigated. Furthermore, different IOL holders are tested with respect to stable holding of the IOL. The measurement is performed by a before-after comparison of the lens position using a typical decentration and tilt tolerance analysis path. The modulation transfer function MTF and Strehl ratio S before and after this tolerance analysis are compared, and requirements for lens holder construction are deduced from the presented results.
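
    Both quality criteria named here are standard optics quantities: the Strehl ratio is the measured peak intensity of the point spread function divided by the diffraction-limited peak, and the MTF is the normalized magnitude of the Fourier transform of the (line) spread function. A minimal sketch of the latter, assuming a sampled line spread function rather than the study's actual instrument software:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF as the normalized magnitude of the Fourier transform of a line
    spread function sampled at spacing `dx` (e.g. mm per sample)."""
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)   # spatial frequencies
    return freqs, spectrum / spectrum[0]      # normalize so MTF(0) = 1
```

    For an ideal (delta-like) line spread function this returns a flat MTF of 1 at all frequencies; any blur from IOL decentration or tilt lowers the curve at higher spatial frequencies.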

  2. Observational Constraints for Modeling Diffuse Molecular Clouds

    Science.gov (United States)

    Federman, S. R.

    2014-02-01

    Ground-based and space-borne observations of diffuse molecular clouds suggest a number of areas where further improvements to modeling efforts are warranted. I will highlight those that have the widest applicability. The range in CO fractionation caused by selective isotope photodissociation, in particular the large 12C16O/13C16O ratios observed toward stars in Ophiuchus, is not reproduced well by current models. Our ongoing laboratory measurements of oscillator strengths and predissociation rates for Rydberg transitions in CO isotopologues may help clarify the situation. The CH+ abundance continues to draw attention. Small-scale structure seen toward ζ Per may provide additional constraints on the possible synthesis routes. The connection between results from optical transitions and those from radio and sub-millimeter wave transitions requires further effort. A study of OH+ and OH toward background stars reveals that these species favor different environments. This brings into focus the need to model each cloud along the line of sight separately, and to allow the physical conditions to vary within an individual cloud, in order to gain further insight into the chemistry. Now that an extensive set of data on molecular excitation is available, the models should seek to reproduce these data to place further constraints on the modeling results.

  3. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Science.gov (United States)

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities. Therefore, study of the application of reproducing kernels would be advantageous. We apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with existing methods. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.
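
    For orientation, the FitzHugh-Nagumo class of cardiac tissue models mentioned above couples a voltage-like variable v with a recovery variable w. A minimal explicit finite-difference step with no-flux boundaries (a plain difference method in both time and space, not the paper's reproducing kernel scheme, and with textbook parameter values) looks like:

```python
import numpy as np

def fhn_step(v, w, dt=0.01, dx=1.0, D=1.0, eps=0.08, a=0.7, b=0.8, I=0.5):
    """One explicit Euler step of the 2D FitzHugh-Nagumo model.
    No-flux (Neumann) boundaries are imposed by replicating edge values."""
    vp = np.pad(v, 1, mode="edge")
    lap = (vp[:-2, 1:-1] + vp[2:, 1:-1]
           + vp[1:-1, :-2] + vp[1:-1, 2:] - 4.0 * v) / dx**2
    v_new = v + dt * (D * lap + v - v**3 / 3.0 - w + I)
    w_new = w + dt * eps * (v + a - b * w)
    return v_new, w_new
```

    The reproducing kernel method replaces the spatial difference stencil with a meshfree kernel expansion, which is what allows the scattered, non-meshed node systems the abstract describes.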

  4. Hippocampal Astrocyte Cultures from Adult and Aged Rats Reproduce Changes in Glial Functionality Observed in the Aging Brain.

    Science.gov (United States)

    Bellaver, Bruna; Souza, Débora Guerini; Souza, Diogo Onofre; Quincozes-Santos, André

    2017-05-01

    Astrocytes are dynamic cells that maintain brain homeostasis, regulate neurotransmitter systems, and process synaptic information, energy metabolism, antioxidant defenses, and inflammatory response. Aging is a biological process that is closely associated with hippocampal astrocyte dysfunction. In this sense, we demonstrated that hippocampal astrocytes from adult and aged Wistar rats reproduce the glial functionality alterations observed in aging by evaluating several senescence, glutamatergic, oxidative and inflammatory parameters commonly associated with the aging process. Here, we show that the p21 senescence-associated gene and classical astrocyte markers, such as glial fibrillary acidic protein (GFAP), vimentin, and actin, changed their expressions in adult and aged astrocytes. Age-dependent changes were also observed in glutamate transporters (glutamate aspartate transporter (GLAST) and glutamate transporter-1 (GLT-1)) and glutamine synthetase immunolabeling and activity. Additionally, according to in vivo aging, astrocytes from adult and aged rats showed an increase in oxidative/nitrosative stress with mitochondrial dysfunction, an increase in RNA oxidation, NADPH oxidase (NOX) activity, superoxide levels, and inducible nitric oxide synthase (iNOS) expression levels. Changes in antioxidant defenses were also observed. Hippocampal astrocytes also displayed age-dependent inflammatory response with augmentation of proinflammatory cytokine levels, such as TNF-α, IL-1β, IL-6, IL-18, and messenger RNA (mRNA) levels of cyclo-oxygenase 2 (COX-2). Furthermore, these cells secrete neurotrophic factors, including glia-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), S100 calcium-binding protein B (S100B) protein, and transforming growth factor-β (TGF-β), which changed in an age-dependent manner. Classical signaling pathways associated with aging, such as nuclear factor erythroid-derived 2-like 2 (Nrf2), nuclear factor kappa B (NFκ

  5. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups

    Science.gov (United States)

    Capitán, José A.; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.

  6. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model.

    Directory of Open Access Journals (Sweden)

    Tilman Jobst-Schwan

    Full Text Available Until recently, morpholino oligonucleotides have been widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs targeting simultaneously different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve a similar effect as morpholinos but with the reproducibility of stable lines. This multi single guide RNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA specific insertion/deletion profiles as revealed by deep sequencing. Immunoblot showed a significant reduction for Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance for a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest.
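
    The quoted likelihood of at least one mutation on each allele follows from simple independence assumptions. A hedged sketch (an illustrative function, not the authors' published model; it assumes independent sgRNAs and alleles, and that roughly two thirds of indels shift the reading frame):

```python
def biallelic_lof_prob(n_sgrnas, cut_eff, frameshift_frac=2.0 / 3.0):
    """Probability that both alleles carry at least one loss-of-function
    mutation, assuming each sgRNA cuts each allele independently with
    probability `cut_eff` and a fraction `frameshift_frac` of the
    resulting indels disrupt the reading frame."""
    per_allele = 1.0 - (1.0 - cut_eff * frameshift_frac) ** n_sgrnas
    return per_allele ** 2
```

    Under these assumptions, stacking more sgRNAs drives the per-allele probability toward 1, which is the rationale for the multi-sgRNA design.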

  7. Stochastic model of financial markets reproducing scaling and memory in volatility return intervals

    Science.gov (United States)

    Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.

    2016-11-01

    We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We also find that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.
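
    The basic empirical object of the paper, the volatility return interval, is the waiting time between successive exceedances of a volatility threshold, and is straightforward to extract from data. A minimal sketch (illustrative, using a quantile-based threshold):

```python
import numpy as np

def return_intervals(volatility, q):
    """Waiting times (in samples) between successive events where the
    volatility series exceeds its q-th quantile."""
    threshold = np.quantile(volatility, q)
    exceedances = np.flatnonzero(volatility > threshold)
    return np.diff(exceedances)
```

    The scaling and memory properties the paper studies are the statistics of this interval sequence across thresholds q and time scales.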

  8. Reproducibility and consistency of proteomic experiments on natural populations of a non-model aquatic insect.

    Science.gov (United States)

    Hidalgo-Galiana, Amparo; Monge, Marta; Biron, David G; Canals, Francesc; Ribera, Ignacio; Cieslak, Alexandra

    2014-01-01

    Population proteomics has a great potential to address evolutionary and ecological questions, but its use in wild populations of non-model organisms is hampered by uncontrolled sources of variation. Here we compare the response to temperature extremes of two geographically distant populations of a diving beetle species (Agabus ramblae) using 2-D DIGE. After one week of acclimation in the laboratory under standard conditions, a third of the specimens of each population were placed at either 4 or 27°C for 12 h, with another third left as a control. We then compared the protein expression level of three replicated samples of 2-3 specimens for each treatment. Within each population, variation between replicated samples of the same treatment was always lower than variation between treatments, except for some control samples that retained a wider range of expression levels. The two populations had a similar response, without significant differences in the number of protein spots over- or under-expressed in the pairwise comparisons between treatments. We identified exemplary proteins among those differently expressed between treatments, which proved to be proteins known to be related to thermal response or stress. Overall, our results indicate that specimens collected in the wild are suitable for proteomic analyses, as the additional sources of variation were not enough to mask the consistency and reproducibility of the response to the temperature treatments.

  9. An improved cost-effective, reproducible method for evaluation of bone loss in a rodent model.

    Science.gov (United States)

    Fine, Daniel H; Schreiner, Helen; Nasri-Heir, Cibele; Greenberg, Barbara; Jiang, Shuying; Markowitz, Kenneth; Furgang, David

    2009-02-01

    This study was designed to investigate the utility of two "new" definitions for assessment of bone loss in a rodent model of periodontitis. Eighteen rats were divided into three groups. Group 1 was infected by Aggregatibacter actinomycetemcomitans (Aa), group 2 was infected with an Aa leukotoxin knock-out, and group 3 received no Aa (controls). Microbial sampling and antibody titres were determined. Initially, two examiners measured the distance from the cemento-enamel junction to the alveolar bone crest using the following three methods: (1) total area of bone loss by radiograph, (2) linear bone loss by radiograph, (3) a direct visual measurement (DVM) of horizontal bone loss. Two "new" definitions were adopted: (1) any site in infected animals showing bone loss >2 standard deviations above the mean seen at that site in control animals was recorded as bone loss, (2) any animal with two or more sites in any quadrant affected by bone loss was considered diseased. Using the "new" definitions, both evaluators independently found that infected animals had significantly more disease than controls (DVM system; p<0.05). The DVM method provides a simple, cost-effective, and reproducible method for studying periodontal disease in rodents.
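
    The two definitions can be stated operationally. A hedged sketch (the data layout, site names and quadrant labels are invented for illustration): a site shows bone loss if its measurement exceeds the control mean at that site by more than 2 standard deviations, and an animal is diseased if any quadrant contains two or more such sites:

```python
import statistics

def diseased(animal_sites, control_values, quadrant_of):
    """Definition 1: a site shows bone loss if its measurement exceeds the
    control mean at that site by more than 2 standard deviations.
    Definition 2: an animal is diseased if any quadrant has >= 2 such sites."""
    flagged = {}
    for site, value in animal_sites.items():
        controls = control_values[site]
        threshold = statistics.mean(controls) + 2 * statistics.stdev(controls)
        if value > threshold:
            quadrant = quadrant_of[site]
            flagged[quadrant] = flagged.get(quadrant, 0) + 1
    return any(count >= 2 for count in flagged.values())
```

    Anchoring the threshold to per-site control variation is what makes the definition robust to normal anatomical differences between sites.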

  10. Observation models in radiocarbon calibration

    International Nuclear Information System (INIS)

    Jones, M.D.; Nicholls, G.K.

    2001-01-01

    The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets where the standard calibration approach is based on a different model to that which the practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig

  11. Evaluation of NASA's MERRA Precipitation Product in Reproducing the Observed Trend and Distribution of Extreme Precipitation Events in the United States

    Science.gov (United States)

    Ashouri, Hamed; Sorooshian, Soroosh; Hsu, Kuo-Lin; Bosilovich, Michael G.; Lee, Jaechoul; Wehner, Michael F.; Collow, Allison

    2016-01-01

    This study evaluates the performance of NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) precipitation product in reproducing the trend and distribution of extreme precipitation events. Utilizing the extreme value theory, time-invariant and time-variant extreme value distributions are developed to model the trends and changes in the patterns of extreme precipitation events over the contiguous United States during 1979-2010. The Climate Prediction Center (CPC) U.S. Unified gridded observation data are used as the observational dataset. The CPC analysis shows that the eastern and western parts of the United States are experiencing positive and negative trends in annual maxima, respectively. The continental-scale patterns of change found in MERRA seem to reasonably mirror the observed patterns of change found in CPC. This was not expected a priori, given the difficulty in constraining precipitation in reanalysis products. MERRA tends to overestimate the frequency at which the 99th percentile of precipitation is exceeded because this threshold tends to be lower in MERRA, making it easier to be exceeded. This feature is dominant during the summer months. MERRA tends to reproduce spatial patterns of the scale and location parameters of the generalized extreme value and generalized Pareto distributions. However, MERRA underestimates these parameters, particularly over the Gulf Coast states, leading to lower magnitudes in extreme precipitation events. Two issues in MERRA are identified: 1) MERRA shows a spurious negative trend in Nebraska and Kansas, which is most likely related to the changes in the satellite observing system over time that has apparently affected the water cycle in the central United States, and 2) the patterns of positive trend over the Gulf Coast states and along the East Coast seem to be correlated with the tropical cyclones in these regions. The analysis of the trends in the seasonal precipitation extremes indicates that
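
    The generalized extreme value fitting described above can be approximated simply. As a sketch under a strong simplifying assumption (zero GEV shape parameter, i.e. a Gumbel distribution, with synthetic stand-in data rather than actual CPC or MERRA fields), method-of-moments estimates of the location and scale parameters are:

```python
import numpy as np

def gumbel_fit_moments(maxima):
    """Method-of-moments fit of a Gumbel distribution (a GEV with zero shape),
    a common first approximation for annual precipitation maxima."""
    scale = np.std(maxima, ddof=1) * np.sqrt(6.0) / np.pi
    loc = np.mean(maxima) - 0.5772 * scale   # 0.5772: Euler-Mascheroni constant
    return loc, scale

# Synthetic stand-ins for 32 years (1979-2010) of annual maxima at one grid cell
rng = np.random.default_rng(0)
obs_maxima = rng.gumbel(loc=50.0, scale=10.0, size=32)   # stand-in for CPC
mod_maxima = rng.gumbel(loc=45.0, scale=8.0, size=32)    # stand-in for MERRA
loc_obs, scale_obs = gumbel_fit_moments(obs_maxima)
loc_mod, scale_mod = gumbel_fit_moments(mod_maxima)
```

    Comparing the fitted location and scale fields between datasets, cell by cell, is the kind of diagnostic behind the underestimation reported over the Gulf Coast states.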

  12. Intra-observer reproducibility and interobserver reliability of the radiographic parameters in the Spinal Deformity Study Group's AIS Radiographic Measurement Manual.

    Science.gov (United States)

    Dang, Natasha Radhika; Moreau, Marc J; Hill, Douglas L; Mahood, James K; Raso, James

    2005-05-01

    Retrospective cross-sectional assessment of the reproducibility and reliability of radiographic parameters. To measure the intra-examiner and interexaminer reproducibility and reliability of salient radiographic features. The management and treatment of adolescent idiopathic scoliosis (AIS) depends on accurate and reproducible radiographic measurements of the deformity. Ten sets of radiographs were randomly selected from a sample of patients with AIS, with initial curves between 20 degrees and 45 degrees. Fourteen measures of the deformity were measured from posteroanterior and lateral radiographs by 2 examiners, and were repeated 5 times at intervals of 3-5 days. Intra-examiner and interexaminer differences were examined. The parameters include measures of curve size, spinal imbalance, sagittal kyphosis and alignment, maximum apical vertebral rotation, T1 tilt, spondylolysis/spondylolisthesis, and skeletal age. Intra-examiner reproducibility was generally excellent for parameters measured from the posteroanterior radiographs but only fair to good for parameters from the lateral radiographs, in which some landmarks were not clearly visible. Of the 13 parameters observed, 7 had excellent interobserver reliability. The measurements from the lateral radiograph were less reproducible and reliable and, thus, may not add value to the assessment of AIS. Taking additional measures encourages a systematic and comprehensive assessment of spinal radiographs.

  13. Validation of the 3D Skin Comet assay using full thickness skin models: Transferability and reproducibility.

    Science.gov (United States)

    Reisinger, Kerstin; Blatz, Veronika; Brinkmann, Joep; Downs, Thomas R; Fischer, Anja; Henkler, Frank; Hoffmann, Sebastian; Krul, Cyrille; Liebsch, Manfred; Luch, Andreas; Pirow, Ralph; Reus, Astrid A; Schulz, Markus; Pfuhler, Stefan

    2018-03-01

    Recently revised OECD Testing Guidelines highlight the importance of considering the first site-of-contact when investigating the genotoxic hazard. Thus far, only in vivo approaches are available to address the dermal route of exposure. The 3D Skin Comet and Reconstructed Skin Micronucleus (RSMN) assays intend to close this gap in the in vitro genotoxicity toolbox by investigating DNA damage after topical application. This represents the most relevant route of exposure for a variety of compounds found in household products, cosmetics, and industrial chemicals. The comet assay methodology is able to detect both chromosomal damage and DNA lesions that may give rise to gene mutations, thereby complementing the RSMN which detects only chromosomal damage. Here, the comet assay was adapted to two reconstructed full thickness human skin models: the EpiDerm™- and Phenion ® Full-Thickness Skin Models. First, tissue-specific protocols for the isolation of single cells and the general comet assay were transferred to European and US-American laboratories. After establishment of the assay, the protocol was then further optimized with appropriate cytotoxicity measurements and the use of aphidicolin, a DNA repair inhibitor, to improve the assay's sensitivity. In the first phase of an ongoing validation study eight chemicals were tested in three laboratories each using the Phenion ® Full-Thickness Skin Model, informing several validation modules. Ultimately, the 3D Skin Comet assay demonstrated a high predictive capacity and good intra- and inter-laboratory reproducibility with four laboratories reaching a 100% predictivity and the fifth yielding 70%. The data are intended to demonstrate the use of the 3D Skin Comet assay as a new in vitro tool for following up on positive findings from the standard in vitro genotoxicity test battery for dermally applied chemicals, ultimately helping to drive the regulatory acceptance of the assay. To expand the database, the validation will

  14. Observable cosmology and cosmological models

    International Nuclear Information System (INIS)

    Kardashev, N.S.; Lukash, V.N.; Novikov, I.D.

    1987-01-01

    The modern state of observational cosmology is briefly discussed. Among other things, the problem of determining the Hubble constant and the deceleration parameter is considered. Within the ''pancake'' theory, the hot (neutrino) cosmological model explains the large-scale structure of the Universe well, but does not explain galaxy formation. A cold cosmological model explains the formation of light objects well, but contradicts data on the large-scale structure.

  15. Reproducing the Wechsler Intelligence Scale for Children-Fifth Edition: Factor Model Results

    Science.gov (United States)

    Beaujean, A. Alexander

    2016-01-01

    One of the ways to increase the reproducibility of research is for authors to provide a sufficient description of the data analytic procedures so that others can replicate the results. The publishers of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) do not follow these guidelines when reporting their confirmatory factor…

  16. Can a coupled meteorology–chemistry model reproduce the historical trend in aerosol direct radiative effects over the Northern Hemisphere?

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere h...

  17. Failure of Standard Optical Models to Reproduce Neutron Total Cross Section Differences in the W Isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, J D; Bauer, R W; Dietrich, F S; Grimes, S M; Finlay, R W; Abfalterer, W P; Bateman, F B; Haight, R C; Morgan, G L; Bauge, E; Delaroche, J P; Romain, P

    2001-11-01

    Recently, cross section differences among the isotopes {sup 182,184,186}W have been measured as part of a study of total cross sections in the 5-560 MeV energy range. These measurements show oscillations up to 150 mb between 5 and 100 MeV. Spherical and deformed phenomenological optical potentials with typical radial and isospin dependences show very small oscillations, in disagreement with the data. In a simple Ramsauer model, this discrepancy can be traced to a cancellation between radial and isospin effects. Understanding this problem requires a more detailed model that incorporates a realistic description of the neutron and proton density distributions. This has been done with results of Hartree-Fock-Bogolyubov calculations using the Gogny force, together with a microscopic folding model employing a modification of the JLM potential as an effective interaction. This treatment yields a satisfactory interpretation of the observed total cross section differences.

  18. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J S [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  19. Observations involving broadband impedance modelling

    International Nuclear Information System (INIS)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modeling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances

  20. Observational modeling of topological spaces

    International Nuclear Information System (INIS)

    Molaei, M.R.

    2009-01-01

    In this paper, a model for a multi-dimensional observer is presented using fuzzy theory. A relative form of the Tychonoff theorem is proved. The notion of topological entropy is extended. The persistence of relative topological entropy under the relative conjugate relation is proved.

  1. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome

    Directory of Open Access Journals (Sweden)

    Sonal eGoswami

    2012-06-01

    Full Text Available Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situation known to precipitate PTSD in humans. This study aimed to assess whether the hippocampus-associated deficits observed in the human syndrome are reproduced in this rodent model. Prior to predatory threat, different groups of rats were each tested on one of three object recognition memory tasks that varied in the types of contextual clues (i.e., that require the hippocampus or not) the rats could use to identify novel items. After task completion, the rats were subjected to predatory threat and, one week later, tested on the elevated plus maze. Based on their exploratory behavior in the plus maze, rats were then classified as resilient or PTSD-like and their performance on the pre-threat object recognition tasks compared. The performance of PTSD-like rats was inferior to that of resilient rats but only when subjects relied on an allocentric frame of reference to identify novel items, a process thought to be critically dependent on the hippocampus. Therefore, these results suggest that even prior to trauma, PTSD-like rats show a deficit in hippocampal-dependent functions, as reported in twin studies of human PTSD.

  2. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome.

    Science.gov (United States)

    Goswami, Sonal; Samuel, Sherin; Sierra, Olga R; Cascardi, Michele; Paré, Denis

    2012-01-01

    Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD) remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situation known to precipitate PTSD in humans. This study aimed to assess whether the hippocampus-associated deficits observed in the human syndrome are reproduced in this rodent model. Prior to predatory threat, different groups of rats were each tested on one of three object recognition memory tasks that varied in the types of contextual clues (i.e., that require the hippocampus or not) the rats could use to identify novel items. After task completion, the rats were subjected to predatory threat and, one week later, tested on the elevated plus maze (EPM). Based on their exploratory behavior in the plus maze, rats were then classified as resilient or PTSD-like and their performance on the pre-threat object recognition tasks compared. The performance of PTSD-like rats was inferior to that of resilient rats but only when subjects relied on an allocentric frame of reference to identify novel items, a process thought to be critically dependent on the hippocampus. Therefore, these results suggest that even prior to trauma PTSD-like rats show a deficit in hippocampal-dependent functions, as reported in twin studies of human PTSD.

  3. Two-Finger Tightness: What Is It? Measuring Torque and Reproducibility in a Simulated Model.

    Science.gov (United States)

    Acker, William B; Tai, Bruce L; Belmont, Barry; Shih, Albert J; Irwin, Todd A; Holmes, James R

    2016-05-01

    Residents in training are often directed to insert screws using "two-finger tightness" to impart adequate torque but minimize the chance of a screw stripping in bone. This study seeks to quantify and describe two-finger tightness and to assess the variability of its application by residents in training. Cortical bone was simulated using a polyurethane foam block (30-pcf density) that was prepared with predrilled holes for tightening 3.5 × 14-mm long cortical screws and mounted to a custom-built apparatus on a load cell to capture torque data. Thirty-three residents in training, ranging from the first through fifth years of residency, along with 8 staff members, were directed to tighten 6 screws to two-finger tightness in the test block, and peak torque values were recorded. The participants were blinded to their torque values. Stripping torque (2.73 ± 0.56 N·m) was determined from 36 trials and served as a threshold for failed screw placement. The average torques varied substantially with regard to absolute torque values, thus poorly defining two-finger tightness. Junior residents less consistently reproduced torque compared with other groups (0.29 and 0.32, respectively). These data quantify absolute values of two-finger tightness but demonstrate considerable variability in absolute torque values, percentage of stripping torque, and ability to consistently reproduce given torque levels. Increased years in training are weakly correlated with reproducibility, but experience does not seem to affect absolute torque levels. These results question the usefulness of two-finger tightness as a teaching tool and highlight the need for improvement in resident motor skill training and development within a teaching curriculum. Torque measuring devices may be useful simulation tools for this purpose.

  4. Reproducibility of the heat/capsaicin skin sensitization model in healthy volunteers

    Directory of Open Access Journals (Sweden)

    Cavallone LF

    2013-11-01

    Full Text Available Laura F Cavallone,1 Karen Frey,1 Michael C Montana,1 Jeremy Joyal,1 Karen J Regina,1 Karin L Petersen,2 Robert W Gereau IV1 1Department of Anesthesiology, Washington University in St Louis, School of Medicine, St Louis, MO, USA; 2California Pacific Medical Center Research Institute, San Francisco, CA, USA. Introduction: Heat/capsaicin skin sensitization is a well-characterized human experimental model to induce hyperalgesia and allodynia. Using this model, gabapentin, among other drugs, was shown to significantly reduce cutaneous hyperalgesia compared to placebo. Since the larger thermal probes used in the original studies to produce heat sensitization are now commercially unavailable, we decided to assess whether previous findings could be replicated with a currently available smaller probe (heated area 9 cm2 versus 12.5–15.7 cm2). Study design and methods: After Institutional Review Board approval, 15 adult healthy volunteers participated in two study sessions, scheduled 1 week apart (Part A). In both sessions, subjects were exposed to the heat/capsaicin cutaneous sensitization model. Areas of hypersensitivity to brush stroke and von Frey (VF) filament stimulation were measured at baseline and after rekindling of skin sensitization. Another group of 15 volunteers was exposed to an identical schedule and set of sensitization procedures, but, in each session, received either gabapentin or placebo (Part B). Results: Unlike previous reports, a similar reduction of areas of hyperalgesia was observed in all groups/sessions. Fading of areas of hyperalgesia over time was observed in Part A. In Part B, there was no difference in area reduction after gabapentin compared to placebo. Conclusion: When using smaller thermal probes than originally proposed, modifications of other parameters of sensitization and/or rekindling process may be needed to allow the heat/capsaicin sensitization protocol to be used as initially intended. Standardization and validation of

  5. Intestinal microdialysis--applicability, reproducibility and local tissue response in a pig model

    DEFF Research Database (Denmark)

    Emmertsen, K J; Wara, P; Sørensen, Flemming Brandt

    2005-01-01

    BACKGROUND AND AIMS: Microdialysis has been applied to the intestinal wall for the purpose of monitoring local ischemia. The aim of this study was to investigate the applicability, reproducibility and local response to microdialysis in the intestinal wall. MATERIALS AND METHODS: In 12 pigs two...... the probes were processed for histological examination. RESULTS: Large intra- and inter-group differences in the relative recovery were found between all locations. Absolute values of metabolites showed no significant changes during the study period. The lactate in blood was 25-30% of the intra-tissue values...

  6. A computational model for histone mark propagation reproduces the distribution of heterochromatin in different human cell types.

    Science.gov (United States)

    Schwämmle, Veit; Jensen, Ole Nørregaard

    2013-01-01

    Chromatin is a highly compact and dynamic nuclear structure that consists of DNA and associated proteins. The main organizational unit is the nucleosome, which consists of a histone octamer with DNA wrapped around it. Histone proteins are implicated in the regulation of eukaryote genes and they carry numerous reversible post-translational modifications that control DNA-protein interactions and the recruitment of chromatin binding proteins. Heterochromatin, the transcriptionally inactive part of the genome, is densely packed and contains histone H3 that is methylated at Lys 9 (H3K9me). The propagation of H3K9me in nucleosomes along the DNA in chromatin is antagonized by methylation of H3 Lysine 4 (H3K4me) and acetylation of several lysines, which are associated with euchromatin and active genes. We show that the related histone modifications form antagonistic domains on a coarse scale. These histone marks are assumed to be initiated within distinct nucleation sites in the DNA and to propagate bi-directionally. We propose a simple computer model that simulates the distribution of heterochromatin in human chromosomes. The simulations are in agreement with previously reported experimental observations from two different human cell lines. We reproduced different types of barriers between heterochromatin and euchromatin, providing a unified model for their function. The effects of changes in the nucleation site distribution and of propagation rates were studied. The former occurs mainly with the aim of (de-)activation of single genes or gene groups and the latter has the power of controlling the transcriptional programs of entire chromosomes. Generally, the regulatory program of gene transcription is controlled by the distribution of nucleation sites along the DNA string.
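    The propagation mechanism described in this abstract (marks seeded at nucleation sites, spreading bi-directionally, with opposing marks overwriting each other) can be sketched as a toy 1-D nucleosome lattice. This is a minimal illustrative simulation, not the authors' actual model; the nucleation positions, spreading probability, and mark encoding below are assumptions for demonstration only.

```python
import random

def simulate(n_sites=2000, steps=20000, nucleation=None, p_spread=0.9, seed=1):
    """Toy 1-D nucleosome lattice: 0 = unmodified, +1 = activating mark
    (H3K4me/acetylation), -1 = silencing mark (H3K9me). Marks are seeded at
    fixed nucleation sites and spread to random neighbours; writing one mark
    over the other implements the antagonism between the two domains."""
    rng = random.Random(seed)
    if nucleation is None:
        # Hypothetical nucleation sites: two silencing, one activating.
        nucleation = {200: -1, 1000: +1, 1700: -1}
    lattice = [0] * n_sites
    for pos, mark in nucleation.items():
        lattice[pos] = mark
    for _ in range(steps):
        i = rng.randrange(n_sites)
        if lattice[i] == 0:
            continue
        j = i + rng.choice((-1, 1))          # bi-directional propagation
        if 0 <= j < n_sites and rng.random() < p_spread:
            lattice[j] = lattice[i]           # overwrite = antagonism
    # Nucleation sites keep their identity, anchoring the domains.
    for pos, mark in nucleation.items():
        lattice[pos] = mark
    return lattice
```

    Running the sketch yields coarse antagonistic domains around each nucleation site, qualitatively echoing the heterochromatin/euchromatin partition the paper reproduces.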

  7. Trampoline Effect: Observations and Modeling

    Science.gov (United States)

    Guyer, R.; Larmat, C. S.; Ulrich, T. J.

    2009-12-01

    The Iwate-Miyagi earthquake at site IWTH25 (14 June 2008) had large, asymmetric vertical accelerations at the surface, prompting the sobriquet trampoline effect (Aoi et al. 2008). In addition, the surface acceleration record showed long-short waiting time correlations and vertical-horizontal acceleration correlations. A lumped element model, deduced from the equations of continuum elasticity, is employed to describe the behavior at this site in terms of a surface layer and substrate. Important ingredients in the model are the nonlinear vertical coupling between the surface layer and the substrate and the nonlinear horizontal frictional coupling between the surface layer and the substrate. The model produces results in qualitative accord with observations: acceleration asymmetry, Fourier spectrum, waiting time correlations and vertical acceleration-horizontal acceleration correlations. [We gratefully acknowledge the support of the U. S. Department of Energy through the LANL/LDRD Program for this work].

  8. A discrete particle model reproducing collective dynamics of a bee swarm.

    Science.gov (United States)

    Bernardi, Sara; Colombi, Annachiara; Scianna, Marco

    2018-02-01

    In this article, we present a microscopic discrete mathematical model describing collective dynamics of a bee swarm. More specifically, each bee is set to move according to individual strategies and social interactions, the former involving the desire to reach a target destination, the latter accounting for repulsive/attractive stimuli and for alignment processes. The insects tend in fact to remain sufficiently close to the rest of the population, while avoiding collisions, and they are able to track and synchronize their movement to the flight of a given set of neighbors within their visual field. The resulting collective behavior of the bee cloud therefore emerges from non-local short/long-range interactions. Differently from similar approaches present in the literature, we here test different alignment mechanisms (i.e., based either on an Euclidean or on a topological neighborhood metric), which have an impact also on the other social components characterizing insect behavior. A series of numerical realizations then shows the phenomenology of the swarm (in terms of pattern configuration, collective productive movement, and flight synchronization) in different regions of the space of free model parameters (i.e., strength of attractive/repulsive forces, extension of the interaction regions). In this respect, constraints in the possible variations of such coefficients are here given both by reasonable empirical observations and by analytical results on some stability characteristics of the defined pairwise interaction kernels, which have to assure a realistic crystalline configuration of the swarm. An analysis of the effect of unconscious random fluctuations of bee dynamics is also provided. Copyright © 2018 Elsevier Ltd. All rights reserved.
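    The interaction rules summarized in this abstract (target-seeking, short-range repulsion, mid-range attraction, and alignment with a topological set of neighbours) can be sketched as a single update step of a 2-D particle model. All weights, radii, and the neighbour count below are hypothetical placeholders, not the authors' calibrated parameters.

```python
import math

def step(pos, vel, dt=0.1, r_rep=1.0, r_att=8.0,
         w_tgt=0.3, w_rep=1.5, w_att=0.2, w_ali=0.5,
         k_topo=3, target=(50.0, 0.0)):
    """One update of a toy 2-D bee swarm: each bee combines a drive toward a
    target destination with repulsion from too-close neighbours, attraction to
    the group, and velocity alignment with its k_topo nearest neighbours
    (a topological, rather than metric, neighbourhood)."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        ax = w_tgt * (target[0] - pos[i][0])   # individual strategy: reach target
        ay = w_tgt * (target[1] - pos[i][1])
        # Rank the other bees by distance (topological neighbourhood).
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: (pos[j][0] - pos[i][0]) ** 2
                                   + (pos[j][1] - pos[i][1]) ** 2)
        for rank, j in enumerate(order):
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            d = math.hypot(dx, dy) or 1e-9
            if d < r_rep:                      # collision avoidance
                ax -= w_rep * dx / d
                ay -= w_rep * dy / d
            elif d < r_att:                    # cohesion with the cloud
                ax += w_att * dx / d
                ay += w_att * dy / d
            if rank < k_topo:                  # flight synchronization
                ax += w_ali * (vel[j][0] - vel[i][0])
                ay += w_ali * (vel[j][1] - vel[i][1])
        new_vel.append((vel[i][0] + dt * ax, vel[i][1] + dt * ay))
    new_pos = [(p[0] + dt * v[0], p[1] + dt * v[1]) for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

    Swapping the `rank < k_topo` test for a distance threshold would give the Euclidean (metric) alignment variant the paper contrasts with the topological one.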

  9. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  10. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  11. The diverse broad-band light-curves of Swift GRBs reproduced with the cannonball model

    CERN Document Server

    Dado, Shlomo; De Rújula, A

    2009-01-01

    Two radiation mechanisms, inverse Compton scattering (ICS) and synchrotron radiation (SR), suffice within the cannonball (CB) model of long gamma ray bursts (LGRBs) and X-ray flashes (XRFs) to provide a very simple and accurate description of their observed prompt emission and afterglows. Simple as they are, the two mechanisms and the burst environment generate the rich structure of the light curves at all frequencies and times. This is demonstrated for 33 selected Swift LGRBs and XRFs, which are well sampled from early time until late time and well represent the entire diversity of the broad band light curves of Swift LGRBs and XRFs. Their prompt gamma-ray and X-ray emission is dominated by ICS of glory light. During their fast decline phase, ICS is taken over by SR which dominates their broad band afterglow. The pulse shape and spectral evolution of the gamma-ray peaks and the early-time X-ray flares, and even the delayed optical `humps' in XRFs, are correctly predicted. The canonical and non-canonical X-ra...

  12. Intra- and inter-observer reproducibility study of gestational age estimation using three common foetal biometric parameters: Experienced versus inexperienced sonographer

    International Nuclear Information System (INIS)

    Ohagwu, C.C.; Onoduagu, H.I.; Eze, C.U.; Ochie, K.; Ohagwu, C.I.

    2015-01-01

    Aim: To assess reproducibility of estimating gestational age (GA) of foetus using femur length (FL), biparietal diameter (BPD) and abdominal circumference (AC) within experienced and inexperienced sonographers and between the two. Patients and methods: Two sets of GA estimates each were obtained for FL, BPD and AC by the two observers in 20 normal singleton foetuses. The first estimates for the three biometric parameters were made by the experienced sonographer. Subsequently, the inexperienced sonographer, blind to the estimates of the first observer, obtained his own estimates for the same biometric parameters. After a time interval of ten minutes, the process was repeated for the second set of GA estimates. All the gestational age estimates were made following standard protocol. Statistical analysis was performed by Pearson's and intraclass correlations, coefficient of variation and Bland–Altman plots. Statistical inferences were drawn at p < 0.05. Results: The Pearson's and intraclass correlations between GA estimates within and between both observers from measurement of FL, BPD and AC were very high and statistically significant (p < 0.05). Coefficients of variation for duplicate measurements for GA estimates within observers and between observers were quite negligible. Between observers, the first and second GA estimates from FL measurements showed the least variation. Estimates from BPD and AC measurements showed a greater degree of variation between the observers. Conclusion: Reproducibility of GA estimation using FL, BPD and AC within experienced and inexperienced sonographers and between the two was excellent. Therefore, a fresh Nigerian radiography graduate with adequate exposure in obstetric ultrasound can correctly determine the gestational age of foetus in routine obstetric ultrasound without supervision
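    Two of the agreement statistics named in this abstract, the Bland–Altman limits of agreement and the coefficient of variation for duplicate measurements, can be sketched in a few lines (Pearson's and intraclass correlation are omitted here). This is a generic illustration of the standard formulas, not the study's analysis code.

```python
import math
import statistics

def bland_altman(x, y):
    """Bland-Altman summary for paired estimates: mean difference (bias) and
    95% limits of agreement (bias +/- 1.96 x SD of the differences)."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def duplicate_cv(x, y):
    """Within-subject coefficient of variation for duplicate measurements:
    RMS of the per-subject SDs (|a - b| / sqrt(2) for a pair of estimates)
    divided by the grand mean of all estimates."""
    pair_sds = [abs(a - b) / math.sqrt(2) for a, b in zip(x, y)]
    rms_sd = math.sqrt(sum(s * s for s in pair_sds) / len(pair_sds))
    grand_mean = statistics.mean([(a + b) / 2 for a, b in zip(x, y)])
    return rms_sd / grand_mean
```

    For inter-observer agreement, `x` and `y` would hold the two sonographers' GA estimates for the same foetuses; for intra-observer agreement, one observer's first and second estimates.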

  13. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1mm was observed. Almost no deformations were observed; the precision of the free flap modelling was between 0.1mm and 0.4mm, and the average precision of the complete reconstructed mandible was less than 1mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. The Computable Catchment: An executable document for model-data software sharing, reproducibility and interactive visualization

    Science.gov (United States)

    Gil, Y.; Duffy, C.

    2015-12-01

    This paper proposes the concept of a "Computable Catchment" which is used to develop a collaborative platform for watershed modeling and data analysis. The object of the research is a sharable, executable document similar to a pdf, but one that includes documentation of the underlying theoretical concepts, interactive computational/numerical resources, linkage to essential data repositories and the ability for interactive model-data visualization and analysis. The executable document for each catchment is stored in the cloud with automatic provisioning and a unique identifier allowing collaborative model and data enhancements for historical hydroclimatic reconstruction and/or future landuse or climate change scenarios to be easily reconstructed or extended. The Computable Catchment adopts metadata standards for naming all variables in the model and the data. The a-priori or initial data is derived from national data sources for soils, hydrogeology, climate, and land cover available from the www.hydroterre.psu.edu data service (Leonard and Duffy, 2015). The executable document is based on Wolfram CDF or Computable Document Format with an interactive open-source reader accessible by any modern computing platform. The CDF file and contents can be uploaded to a website or simply shared as a normal document maintaining all interactive features of the model and data. The Computable Catchment concept represents one application for Geoscience Papers of the Future representing an extensible document that combines theory, models, data and analysis that are digitally shared, documented and reused among research collaborators, students, educators and decision makers.

  15. Martian Ionospheric Observation and Modeling

    Science.gov (United States)

    González-Galindo, Francisco

    2018-02-01

    measurements by different space missions. Numerical simulations by computational models able to simulate the processes that shape the ionosphere have also been commonly employed to obtain information about this region, to provide an interpretation of the observations and to fill their gaps. As a result, the Martian ionosphere is today the best known one after that of the Earth. However, there are still areas for which our knowledge is far from being complete. Examples are the details and balance of the mechanisms populating the nightside ionosphere, or a good understanding of the meteoric ionospheric layer and its variability.

  16. A simple branching model that reproduces language family and language population distributions

    Science.gov (United States)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data of real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used in order to define the distance between two languages, similarly to linguistics tradition since Swadesh.
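    The abstract's two ingredients, population-size-independent feature changes and branching into daughter languages, with distance defined over a finite feature set, can be sketched as a toy stochastic simulation. The feature count, value range, and rates below are illustrative assumptions, not the parameters fitted in the paper.

```python
import random

def branching_languages(steps=200, n_feat=8, n_vals=5, p_split=0.05,
                        p_change=0.1, max_langs=500, seed=3):
    """Toy branching model: a language is a vector of discrete features.
    Each step, every language may change one feature (independently of any
    population size, per the model's key assumption) and may split into a
    daughter that starts identical and then diverges."""
    rng = random.Random(seed)
    langs = [[0] * n_feat]
    for _ in range(steps):
        new = []
        for lang in langs:
            if rng.random() < p_change:
                lang = lang[:]                   # copy before mutating
                lang[rng.randrange(n_feat)] = rng.randrange(n_vals)
            new.append(lang)
            if len(new) < max_langs and rng.random() < p_split:
                new.append(lang[:])              # branching event
        langs = new
    return langs

def distance(a, b):
    """Number of differing features, in the spirit of Swadesh-list distances."""
    return sum(x != y for x, y in zip(a, b))
```

    Tallying family sizes across repeated runs is how one would compare such a sketch against the empirical language-family and language-population distributions the paper targets.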

  17. Interpretative intra- and inter-observer reproducibility of stress/rest 99mTc-sestamibi myocardial perfusion SPECT using a semi-quantitative 20-segment model

    International Nuclear Information System (INIS)

    Fazeli, M.; Firoozi, F.

    2002-01-01

    It is well established that myocardial perfusion SPECT with 201Tl or 99mTc-sestamibi plays an important role in diagnosis and risk assessment in patients with known or suspected coronary artery disease. Both quantitative and qualitative methods are available for interpretation of images. The use of a semi-quantitative scoring system in which each of 20 segments is scored according to a five-point scheme provides an approach to interpretation that is more systematic and reproducible than simple qualitative evaluation. Only a limited number of studies have dealt with the interpretive observer reproducibility of 99mTc-sestamibi myocardial perfusion imaging. The aim of this study was to assess the intra- and inter-observer variability of semi-quantitative SPECT performed with this technique. Among 789 patients who underwent myocardial perfusion SPECT during the last year, 80 patients finally needed coronary angiography as the gold standard. In this group of patients a semi-quantitative visual interpretation was carried out using short-axis and vertical long-axis myocardial tomograms and a 20-segment model. These segments were assigned on six evenly spaced regions in the apical, mid-ventricular, and basal short-axis views and two apical segments on the mid-ventricular long-axis slice. Uptake in each segment was graded on a 5-point scale (0 = normal, 1 = equivocal, 2 = moderate, 3 = severe, 4 = absence of uptake). The sestamibi images were interpreted separately twice by two observers without knowledge of each other's findings or the results of angiography. A SPECT study was judged abnormal if there were two or more segments with a stress score equal to or greater than 2. We concluded that semi-quantitative visual analysis is a simple and reproducible method of interpretation.
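    The abnormality rule stated in this abstract (a study is abnormal when at least two of the 20 segments carry a stress score of 2 or more on the 0-4 scale) is simple enough to express directly. This is a sketch of that stated decision rule only, not of the full interpretation workflow.

```python
def spect_abnormal(scores, threshold=2, min_segments=2):
    """20-segment semi-quantitative read: each segment is scored 0-4
    (0 = normal ... 4 = absence of uptake). The study is judged abnormal if
    at least `min_segments` segments score >= `threshold`."""
    if len(scores) != 20 or not all(0 <= s <= 4 for s in scores):
        raise ValueError("expected 20 segment scores in the range 0-4")
    return sum(s >= threshold for s in scores) >= min_segments
```

    Applying the same rule to both observers' duplicate reads gives the binary classifications whose agreement the study quantifies.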

  18. The ability of a GCM-forced hydrological model to reproduce global discharge variability

    NARCIS (Netherlands)

    Sperna Weiland, F.C.; Beek, L.P.H. van; Kwadijk, J.C.J.; Bierkens, M.F.P.

    2010-01-01

    Data from General Circulation Models (GCMs) are often used to investigate hydrological impacts of climate change. However GCM data are known to have large biases, especially for precipitation. In this study the usefulness of GCM data for hydrological studies, with focus on discharge variability

  19. Establishing a Reproducible Hypertrophic Scar following Thermal Injury: A Porcine Model

    Directory of Open Access Journals (Sweden)

    Scott J. Rapp, MD

    2015-02-01

    Conclusions: Deep partial-thickness thermal injury to the back of domestic swine produces an immature hypertrophic scar by 10 weeks following burn with thickness appearing to coincide with the location along the dorsal axis. With minimal pig to pig variation, we describe our technique to provide a testable immature scar model.

  20. Reproducibility of a novel model of murine asthma-like pulmonary inflammation.

    Science.gov (United States)

    McKinley, L; Kim, J; Bolgos, G L; Siddiqui, J; Remick, D G

    2004-05-01

    Sensitization to cockroach allergens (CRA) has been implicated as a major cause of asthma, especially among inner-city populations. Endotoxin from Gram-negative bacteria has also been investigated for its role in attenuating or exacerbating the asthmatic response. We have created a novel model utilizing house dust extract (HDE) containing high levels of both CRA and endotoxin to induce pulmonary inflammation (PI) and airway hyperresponsiveness (AHR). A potential drawback of this model is that the HDE is in limited supply and preparation of new HDE will not contain the exact components of the HDE used to define our model system. The present study involved testing HDEs collected from various homes for their ability to cause PI and AHR. Dust collected from five homes was extracted in phosphate buffered saline overnight. The levels of CRA and endotoxin in the supernatants varied from 7.1 to 49.5 mg/ml of CRA and 1.7-6 micro g/ml of endotoxin in the HDEs. Following immunization and two pulmonary exposures to HDE, all five HDEs induced AHR, PI and plasma IgE levels substantially higher than those of normal mice. This study shows that HDE containing high levels of cockroach allergens and endotoxin collected from different sources can induce an asthma-like response in our murine model.

  1. Energy and nutrient deposition and excretion in the reproducing sow: model development and evaluation

    DEFF Research Database (Denmark)

    Hansen, A V; Strathe, A B; Theil, Peter Kappel

    2014-01-01

... requirements for maintenance, and fetal and maternal growth were described. In the lactating module, a factorial approach was used to estimate requirements for maintenance, milk production, and maternal growth. The priority for nutrient partitioning was assumed to be in the order of maintenance, milk production, and maternal growth, with body tissue losses constrained within biological limits. Global sensitivity analysis showed that nonlinearity in the parameters was small. The model outputs considered were the total protein and fat deposition, average urinary and fecal N excretion, average methane emission, manure carbon excretion, and manure production. The model was evaluated against independent data sets from the literature using root mean square prediction error (RMSPE) and concordance correlation coefficients. The gestation module predicted body fat gain better than body protein gain, which ...

  2. Evaluation of Nitinol staples for the Lapidus arthrodesis in a reproducible biomechanical model

    Directory of Open Access Journals (Sweden)

    Nicholas Alexander Russell

    2015-12-01

While the Lapidus procedure is a widely accepted technique for treatment of hallux valgus, the optimal fixation method to maintain joint stability remains controversial. The purpose of this study was to evaluate the biomechanical properties of new shape memory alloy staples arranged in different configurations in a repeatable 1st tarsometatarsal arthrodesis model. Ten sawbones models of the whole foot (n = 5 per group) were reconstructed using a single dorsal staple or two staples in a delta configuration. Each construct was mechanically tested in dorsal four-point bending, medial four-point bending, dorsal three-point bending, and plantar cantilever bending with the staples activated at 37°C. The peak load, stiffness, and plantar gapping were determined for each test. Pressure sensors were used to measure the contact force and area of the joint footprint in each group. There was a significant (p < 0.05) increase in peak load in the two-staple constructs compared to the single-staple constructs for all testing modalities. Stiffness also increased significantly in all tests except dorsal four-point bending. Pressure sensor readings showed a significantly higher contact force at time zero and contact area following loading in the two-staple constructs (p < 0.05). Both groups completely recovered any plantar gapping following unloading and restored their initial contact footprint. The biomechanical integrity and repeatability of the models were demonstrated, with no construct failures due to hardware or model breakdown. Shape memory alloy staples provide fixation with the ability to dynamically apply and maintain compression across a simulated arthrodesis following a range of loading conditions.

  3. Evaluation of Nitinol Staples for the Lapidus Arthrodesis in a Reproducible Biomechanical Model.

    Science.gov (United States)

    Russell, Nicholas A; Regazzola, Gianmarco; Aiyer, Amiethab; Nomura, Tomohiro; Pelletier, Matthew H; Myerson, Mark; Walsh, William R

    2015-01-01

While the Lapidus procedure is a widely accepted technique for treatment of hallux valgus, the optimal fixation method to maintain joint stability remains controversial. The purpose of this study is to evaluate the biomechanical properties of new shape memory alloy (SMA) staples arranged in different configurations in a repeatable first tarsometatarsal arthrodesis model. Ten sawbones models of the whole foot (n = 5 per group) were reconstructed using a single dorsal staple or two staples in a delta configuration. Each construct was mechanically tested non-destructively in dorsal four-point bending, medial four-point bending, dorsal three-point bending, and plantar cantilever bending with the staples activated at 37°C. The peak load (newtons), stiffness (newtons per millimeter), and plantar gapping (millimeters) were determined for each test. Pressure sensors were used to measure the contact force and area of the joint footprint in each group. There was a statistically significant increase in peak load in the two-staple constructs compared to the single-staple constructs for all testing modalities, with P values ranging from 0.016 to 0.000. Stiffness also increased significantly in all tests except dorsal four-point bending. Pressure sensor readings showed a significantly higher contact force at time zero (P = 0.037) and contact area following loading in the two-staple constructs (P = 0.045). Both groups completely recovered any plantar gapping following unloading and restored their initial contact footprint. The biomechanical integrity and repeatability of the models were demonstrated, with no construct failures due to hardware or model breakdown. SMA staples provide fixation with the ability to dynamically apply and maintain compression across a simulated arthrodesis following a range of loading conditions.
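Construct stiffness in bench tests like these is typically reported as the slope of the linear region of the load-displacement curve. A minimal sketch of that calculation, with hypothetical data and an illustrative function name (not the authors' analysis code):

```python
import numpy as np

def construct_stiffness(displacement_mm, load_n):
    """Estimate stiffness (N/mm) as the least-squares slope of the
    linear region of a load-displacement curve."""
    slope, _intercept = np.polyfit(displacement_mm, load_n, 1)
    return slope

# Hypothetical four-point-bending data for one construct
disp = np.array([0.0, 0.1, 0.2, 0.3, 0.4])     # displacement, mm
load = np.array([0.0, 5.1, 10.2, 15.0, 20.1])  # load, N

print(f"stiffness ~ {construct_stiffness(disp, load):.1f} N/mm")
```

In practice the linear region would first be identified (e.g., between fixed load fractions) before fitting; the sketch fits all points for brevity.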

  4. Can lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    Science.gov (United States)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at the larval stage, is of primary importance for understanding eel ecology and, hence, its survival. This duration is still a controversial matter, since estimates range from 7 months to more than 4 years depending on the method used. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved sensitive to conditions such as spatial and temporal resolution, release depth, release area, and initial distribution. In general, migration was faster when release depth decreased and model resolution increased. On average, the fastest migration was obtained when only advective horizontal movement was considered, although in some cases even faster migration was obtained when locally oriented random migration was taken into account.
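A Lagrangian migration estimate of this kind amounts to advecting virtual larvae through a velocity field and timing their arrival. A toy forward-Euler sketch with an assumed uniform current (illustrative only; the study used ocean-model velocity fields, not a constant drift):

```python
def advect(pos, velocity_fn, dt_days, n_steps):
    """Forward-Euler advection of one particle in a 2D velocity field.

    pos: (x, y) in km; velocity_fn returns (u, v) in km/day.
    Returns the full track as a list of positions."""
    x, y = pos
    track = [(x, y)]
    for _ in range(n_steps):
        u, v = velocity_fn(x, y)
        x += u * dt_days
        y += v * dt_days
        track.append((x, y))
    return track

# Idealized uniform 20 km/day eastward drift (hypothetical stand-in
# for the trans-Atlantic transport sampled by the real model)
current = lambda x, y: (20.0, 0.0)

track = advect((0.0, 0.0), current, dt_days=1.0, n_steps=240)
print(f"distance after 240 days: {track[-1][0]:.0f} km")  # 4800 km
```

Sensitivity to resolution, release depth, and random-walk terms would enter through `velocity_fn` and added stochastic displacements per step.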

  5. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing.
geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  6. Realizing the Living Paper using the ProvONE Model for Reproducible Research

    Science.gov (United States)

    Jones, M. B.; Jones, C. S.; Ludäscher, B.; Missier, P.; Walker, L.; Slaughter, P.; Schildhauer, M.; Cuevas-Vicenttín, V.

    2015-12-01

Science has advanced through traditional publications that codify research results as a permanent part of the scientific record. But because publications are static and atomic, researchers can only cite and reference a whole work when building on the prior work of colleagues. The open-source software model has demonstrated a new approach in which strong version control in an open environment can nurture an open ecosystem of software. Developers now commonly fork and extend software, giving proper credit, with less repetition, and with confidence in the relationship to the original software. Through initiatives like 'Beyond the PDF', an analogous model has been imagined for open science, in which software, data, analyses, and derived products become first-class objects within a publishing ecosystem that has evolved to be finer-grained and is realized through a web of linked open data. We have prototyped a Living Paper concept by developing the ProvONE provenance model for scientific workflows, with prototype deployments in DataONE. ProvONE promotes transparency and openness by describing the authenticity, origin, structure, and processing history of research artifacts and by detailing the steps in computational workflows that produce derived products. To realize the Living Paper, we decompose scientific papers into their constituent products and publish these as compound objects in the DataONE federation of archival repositories. Each individual finding and sub-product of a research project (such as a derived data table, a workflow or script, a figure, an image, or a finding) can be independently stored, versioned, and cited. ProvONE provenance traces link these fine-grained products within and across versions of a paper, and across related papers that extend an original analysis. This allows for open scientific publishing in which researchers extend and modify findings, creating a dynamic, evolving web of results that collectively represent the scientific enterprise.

  7. Reproducibility of the pink esthetic score--rating soft tissue esthetics around single-implant restorations with regard to dental observer specialization.

    Science.gov (United States)

    Gehrke, Peter; Lobert, Markus; Dhom, Günter

    2008-01-01

The pink esthetic score (PES) evaluates the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone by awarding seven points for the mesial and distal papilla, soft-tissue level, soft-tissue contour, soft-tissue color, soft-tissue texture, and alveolar process deficiency. The aim of this study was to measure the reproducibility of the PES and assess the influence exerted by the examiner's degree of dental specialization. Fifteen examiners (three general dentists, three oral maxillofacial surgeons, three orthodontists, three postgraduate students in implant dentistry, and three lay people) applied the PES to 30 implant-supported single restorations twice at an interval of 4 weeks. Using a 0-1-2 scoring system, 0 being the lowest and 2 the highest value, the maximum achievable PES was 14. At the second assessment, the photographs were scored in reverse order. Differences between the two assessments were evaluated with Spearman's rank correlation coefficient (R). The Wilcoxon signed-rank test was used for comparisons of differences between the ratings. A significance level of p < 0.05 was applied. ... esthetic restorations showed the smallest deviations. Orthodontists were found to have assigned significantly poorer ratings than any other group. The assessments of postgraduate students and laypersons were the most favorable. The PES allows for a more objective appraisal of the esthetic short- and long-term results of various surgical and prosthetic implant procedures. It reproducibly evaluates the peri-implant soft tissue around single-implant restorations and results in good intra-examiner agreement. However, an effect of observer specialization on rating soft-tissue esthetics can be shown.
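The intra-examiner statistics described here (Spearman rank correlation between the two rating rounds, Wilcoxon signed-rank test for systematic shifts) can be sketched with scipy; the PES totals below are hypothetical, not the study's data:

```python
from scipy.stats import spearmanr, wilcoxon

# Hypothetical PES totals (0-14) for 10 restorations, rated twice by one examiner
round1 = [12, 9, 7, 11, 5, 13, 8, 10, 6, 12]
round2 = [11, 9, 8, 11, 5, 12, 8, 10, 7, 13]

rho, p_rho = spearmanr(round1, round2)   # rank correlation between the rounds
stat, p_wx = wilcoxon(round1, round2)    # paired test for a systematic shift

print(f"Spearman R = {rho:.2f}, Wilcoxon p = {p_wx:.2f}")
```

A high R with a non-significant Wilcoxon p indicates consistent ranking with no systematic drift between sessions, which is the pattern the study reports as good intra-examiner agreement.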

  8. Efficient and reproducible myogenic differentiation from human iPS cells: prospects for modeling Miyoshi Myopathy in vitro.

    Directory of Open Access Journals (Sweden)

    Akihito Tanaka

The establishment of human induced pluripotent stem cells (hiPSCs) has enabled the production of in vitro, patient-specific cell models of human disease. In vitro recreation of disease pathology from patient-derived hiPSCs depends on efficient differentiation protocols producing relevant adult cell types. However, myogenic differentiation of hiPSCs has faced obstacles, namely, low efficiency and/or poor reproducibility. Here, we report the rapid, efficient, and reproducible differentiation of hiPSCs into mature myocytes. We demonstrated that inducible expression of myogenic differentiation1 (MYOD1) in immature hiPSCs for at least 5 days drives cells along the myogenic lineage, with efficiencies reaching 70-90%. Myogenic differentiation driven by MYOD1 occurred even in immature, almost completely undifferentiated hiPSCs, without mesodermal transition. Myocytes induced in this manner reach maturity within 2 weeks of differentiation as assessed by marker gene expression and functional properties, including in vitro and in vivo cell fusion and twitching in response to electrical stimulation. Miyoshi Myopathy (MM) is a congenital distal myopathy caused by defective muscle membrane repair due to mutations in DYSFERLIN. Using our induced differentiation technique, we successfully recreated the pathological condition of MM in vitro, demonstrating defective membrane repair in hiPSC-derived myotubes from an MM patient and phenotypic rescue by expression of full-length DYSFERLIN (DYSF). These findings not only facilitate the pathological investigation of MM, but could potentially be applied in modeling of other human muscular diseases by using patient-derived hiPSCs.

  9. Efficient and Reproducible Myogenic Differentiation from Human iPS Cells: Prospects for Modeling Miyoshi Myopathy In Vitro

    Science.gov (United States)

    Tanaka, Akihito; Woltjen, Knut; Miyake, Katsuya; Hotta, Akitsu; Ikeya, Makoto; Yamamoto, Takuya; Nishino, Tokiko; Shoji, Emi; Sehara-Fujisawa, Atsuko; Manabe, Yasuko; Fujii, Nobuharu; Hanaoka, Kazunori; Era, Takumi; Yamashita, Satoshi; Isobe, Ken-ichi; Kimura, En; Sakurai, Hidetoshi

    2013-01-01

    The establishment of human induced pluripotent stem cells (hiPSCs) has enabled the production of in vitro, patient-specific cell models of human disease. In vitro recreation of disease pathology from patient-derived hiPSCs depends on efficient differentiation protocols producing relevant adult cell types. However, myogenic differentiation of hiPSCs has faced obstacles, namely, low efficiency and/or poor reproducibility. Here, we report the rapid, efficient, and reproducible differentiation of hiPSCs into mature myocytes. We demonstrated that inducible expression of myogenic differentiation1 (MYOD1) in immature hiPSCs for at least 5 days drives cells along the myogenic lineage, with efficiencies reaching 70–90%. Myogenic differentiation driven by MYOD1 occurred even in immature, almost completely undifferentiated hiPSCs, without mesodermal transition. Myocytes induced in this manner reach maturity within 2 weeks of differentiation as assessed by marker gene expression and functional properties, including in vitro and in vivo cell fusion and twitching in response to electrical stimulation. Miyoshi Myopathy (MM) is a congenital distal myopathy caused by defective muscle membrane repair due to mutations in DYSFERLIN. Using our induced differentiation technique, we successfully recreated the pathological condition of MM in vitro, demonstrating defective membrane repair in hiPSC-derived myotubes from an MM patient and phenotypic rescue by expression of full-length DYSFERLIN (DYSF). These findings not only facilitate the pathological investigation of MM, but could potentially be applied in modeling of other human muscular diseases by using patient-derived hiPSCs. PMID:23626698

  10. TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data

    International Nuclear Information System (INIS)

    O’Grady, K; Davis, S; Seuntjens, J

    2016-01-01

Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model’s source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF has been created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial-and-error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model’s phase space matched Varian’s counterpart using the default source parameters, it was validated to match 10 × 10 cm² Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm² PTW microDiamond measurements. All dose-to-water simulations included detector models to account for the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model’s PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements. KO acknowledges partial support by the CREATE Medical Physics Research
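The agreement metric quoted above, local percent difference, compares calculated and measured dose point by point, normalizing by the measured value at each point (unlike a global difference normalized to the maximum dose). A minimal sketch with hypothetical PDD values:

```python
import numpy as np

def local_percent_diff(calc, meas):
    """Point-wise local percent difference: 100 * (calc - meas) / meas."""
    calc = np.asarray(calc, dtype=float)
    meas = np.asarray(meas, dtype=float)
    return 100.0 * (calc - meas) / meas

# Hypothetical PDD values (% of dmax dose) at a few depths
measured  = [100.0, 86.5, 63.2, 46.8]
simulated = [100.2, 86.1, 63.6, 46.5]

diffs = local_percent_diff(simulated, measured)
print(f"worst-case local difference: {np.max(np.abs(diffs)):.2f}%")
```

A "within 0.9% local difference" criterion, as in the abstract, means the largest absolute value of such point-wise differences stays below 0.9%.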

  11. Inter-observer reproducibility before and after web-based education in the Gleason grading of the prostate adenocarcinoma among the Iranian pathologists.

    Directory of Open Access Journals (Sweden)

    Alireza Abdollahi

    2014-05-01

This study aimed to determine intra- and inter-observer concordance rates in the Gleason scoring of prostatic adenocarcinoma before and after a web-based educational course. In this self-controlled study, 150 tissue samples of prostatic adenocarcinoma were re-examined and scored according to the Gleason scoring system. All pathologists then attended a free web-based course. Afterwards, the same 150 samples [with different codes compared to the previous ones] were redistributed among the pathologists to be assigned Gleason scores. After gathering the data, the concordance between the pathologists' first and second reports was determined. Before web-based education, the mean kappa value for inter-observer agreement was 0.25 [fair agreement]; after web-based education, it improved significantly, to a mean kappa value of 0.52 [moderate agreement]. Using weighted kappa values, significant improvement in inter-observer agreement was observed for higher Gleason scores; for score 10, the mean kappa value after web-based education was 0.68 [substantial agreement], compared to 0.25 [fair agreement] before. Web-based training courses are attractive to pathologists, as they do not require much time or money. Therefore, such training courses are strongly recommended for significant pathological issues, including the grading of prostate adenocarcinoma. Through web-based education, pathologists can exchange views and contribute to improved reproducibility. Such programs should be included in post-graduation curricula.
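The agreement statistics described are Cohen's kappa and its weighted variant, in which disagreements are penalized in proportion to how far apart the two assigned scores are. A self-contained, linearly weighted kappa sketch (hypothetical Gleason scores; function name is illustrative, not the study's software):

```python
import numpy as np

def linear_weighted_kappa(r1, r2, categories):
    """Cohen's kappa with linear disagreement weights on an ordinal scale."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[idx[a], idx[b]] += 1
    obs /= obs.sum()                                 # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) # chance-expected proportions
    # weight 0 on the diagonal, growing linearly with score distance
    w = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical Gleason scores assigned to 10 samples in two rating rounds
before = [6, 7, 7, 8, 6, 9, 7, 6, 8, 10]
after  = [6, 7, 8, 8, 7, 9, 7, 6, 9, 10]
print(round(linear_weighted_kappa(before, after, [6, 7, 8, 9, 10]), 2))
```

With linear weights, an off-by-one disagreement (e.g., 7 vs. 8) costs far less than a large one, which is why weighted kappa is the conventional choice for ordinal grades like Gleason scores.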

  12. Skill of ship-following large-eddy simulations in reproducing MAGIC observations across the northeast Pacific stratocumulus to cumulus transition region

    Science.gov (United States)

    McGibbon, J.; Bretherton, C. S.

    2017-06-01

During the Marine ARM GPCI Investigation of Clouds (MAGIC) in October 2011 to September 2012, a container ship making periodic cruises between Los Angeles, CA, and Honolulu, HI, was instrumented with surface meteorological, aerosol and radiation instruments, a cloud radar and ceilometer, and radiosondes. Here large-eddy simulation (LES) is performed in a ship-following frame of reference for 13 four-day transects from the MAGIC field campaign. The goal is to assess whether LES can skillfully simulate the broad range of observed cloud characteristics and boundary layer structure across the subtropical stratocumulus to cumulus transition region sampled during different seasons and meteorological conditions. Results from Leg 15A, which sampled a particularly well-defined stratocumulus to cumulus transition, demonstrate the approach. The LES reproduces the observed timing of decoupling and transition from stratocumulus to cumulus and matches the observed evolution of boundary layer structure, cloud fraction, liquid water path, and precipitation statistics remarkably well. Considering the simulations of all 13 cruises, the LES skillfully simulates the mean diurnal variation of key measured quantities, including liquid water path (LWP), cloud fraction, measures of decoupling, and cloud radar-derived precipitation. The daily mean quantities are well represented, and daily mean LWP and cloud fraction show the expected correlation with estimated inversion strength. There is a -0.6 K low bias in LES near-surface air temperature, which results in a high bias of 5.6 W m-2 in sensible heat flux (SHF). Overall, these results build confidence in the ability of LES to represent the northeast Pacific stratocumulus to trade cumulus transition region.

  13. Eccentric Contraction-Induced Muscle Injury: Reproducible, Quantitative, Physiological Models to Impair Skeletal Muscle's Capacity to Generate Force.

    Science.gov (United States)

    Call, Jarrod A; Lowe, Dawn A

    2016-01-01

In order to investigate the molecular and cellular mechanisms of muscle regeneration, an experimental injury model is required. Advantages of eccentric contraction-induced injury are that it is a controllable, reproducible, and physiologically relevant model for causing muscle injury, with injury being defined as a loss of force-generating capacity. While eccentric contractions can be incorporated into conscious animal study designs such as downhill treadmill running, electrophysiological approaches that elicit eccentric contractions and examine muscle contractility, for example before and after the injurious eccentric contractions, allow researchers to circumvent common issues in determining muscle function in a conscious animal (e.g., unwillingness to participate). Herein, we describe in vitro and in vivo methods that are reliable, repeatable, and truly maximal because the muscle contractions are evoked in a controlled, quantifiable manner independent of subject motivation. Both methods can be used to initiate eccentric contraction-induced injury and are suitable for monitoring functional muscle regeneration hours to days to weeks post-injury.

  14. Reproducing the organic matter model of anthropogenic dark earth of Amazonia and testing the ecotoxicity of functionalized charcoal compounds

    Directory of Open Access Journals (Sweden)

    Carolina Rodrigues Linhares

    2012-05-01

The objective of this work was to obtain organic compounds similar to the ones found in the organic matter of anthropogenic dark earth of Amazonia (ADE) using a chemical functionalization procedure on activated charcoal, as well as to determine their ecotoxicity. Based on the study of the organic matter from ADE, an organic model was proposed and an attempt to reproduce it was described. Activated charcoal was oxidized with the use of sodium hypochlorite at different concentrations. Nuclear magnetic resonance was performed to verify if the spectra of the obtained products were similar to the ones of humic acids from ADE. The similarity between spectra indicated that the obtained products were polycondensed aromatic structures with carboxyl groups: a soil amendment that can contribute to soil fertility and to its sustainable use. An ecotoxicological test with Daphnia similis was performed on the more soluble fraction (fulvic acids) of the produced soil amendment. Aryl chloride was formed during the synthesis of the organic compounds from activated charcoal functionalization and partially removed through a purification process. However, it is probable that some aryl chloride remained in the final product, since the ecotoxicological test indicated that the chemically functionalized soil amendment is moderately toxic.

  15. Coupled RipCAS-DFLOW (CoRD) Software and Data Management System for Reproducible Floodplain Vegetation Succession Modeling

    Science.gov (United States)

    Turner, M. A.; Miller, S.; Gregory, A.; Cadol, D. D.; Stone, M. C.; Sheneman, L.

    2016-12-01

    We present the Coupled RipCAS-DFLOW (CoRD) modeling system created to encapsulate the workflow to analyze the effects of stream flooding on vegetation succession. CoRD provides an intuitive command-line and web interface to run DFLOW and RipCAS in succession over many years automatically, which is a challenge because, for our application, DFLOW must be run on a supercomputing cluster via the PBS job scheduler. RipCAS is a vegetation succession model, and DFLOW is a 2D open channel flow model. Data adaptors have been developed to seamlessly connect DFLOW output data to be RipCAS inputs, and vice-versa. CoRD provides automated statistical analysis and visualization, plus automatic syncing of input and output files and model run metadata to the hydrological data management system HydroShare using its excellent Python REST client. This combination of technologies and data management techniques allows the results to be shared with collaborators and eventually published. Perhaps most importantly, it allows results to be easily reproduced via either the command-line or web user interface. This system is a result of collaboration between software developers and hydrologists participating in the Western Consortium for Watershed Analysis, Visualization, and Exploration (WC-WAVE). Because of the computing-intensive nature of this particular workflow, including automating job submission/monitoring and data adaptors, software engineering expertise is required. However, the hydrologists provide the software developers with a purpose and ensure a useful, intuitive tool is developed. Our hydrologists contribute software, too: RipCAS was developed from scratch by hydrologists on the team as a specialized, open-source version of the Computer Aided Simulation Model for Instream Flow and Riparia (CASiMiR) vegetation model; our hydrologists running DFLOW provided numerous examples and help with the supercomputing system. This project is written in Python, a popular language in the
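The alternating DFLOW-to-RipCAS loop that CoRD automates can be sketched as a simple annual cycle with adaptor steps between the two models. The functions below are simplified, hypothetical stand-ins, not the actual CoRD, DFLOW, or RipCAS APIs:

```python
# Sketch of an annual flow -> vegetation coupling loop in the spirit of CoRD.
# run_dflow and run_ripcas are toy stand-ins for the real models.

def run_dflow(vegetation_roughness, year):
    """Stand-in for a DFLOW run: returns a peak shear stress per cell,
    here a toy function of roughness and an increasing flood magnitude."""
    return [0.5 * r + 0.1 * year for r in vegetation_roughness]

def run_ripcas(shear_stress):
    """Stand-in for RipCAS: succession resets roughness where shear is high
    (vegetation scoured), and leaves mature vegetation elsewhere."""
    return [0.03 if s > 0.2 else 0.1 for s in shear_stress]

def couple(initial_roughness, n_years):
    """Alternate the two models, passing each one's output to the other."""
    roughness = initial_roughness
    history = []
    for year in range(n_years):
        shear = run_dflow(roughness, year)   # DFLOW output ...
        roughness = run_ripcas(shear)        # ... adapted into RipCAS input
        history.append(list(roughness))
    return history

history = couple([0.1, 0.1, 0.1], n_years=3)
print(history[-1])
```

In the real system each `run_*` call is a supercomputer job submitted through PBS, and the adaptor steps translate between the models' file formats and sync results to HydroShare; the loop structure is the part sketched here.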

  16. Validity, reliability, and reproducibility of linear measurements on digital models obtained from intraoral and cone-beam computed tomography scans of alginate impressions

    NARCIS (Netherlands)

    Wiranto, Matthew G.; Engelbrecht, W. Petrie; Nolthenius, Heleen E. Tutein; van der Meer, W. Joerd; Ren, Yijin

    INTRODUCTION: Digital 3-dimensional models are widely used for orthodontic diagnosis. The aim of this study was to assess the validity, reliability, and reproducibility of digital models obtained from the Lava Chairside Oral scanner (3M ESPE, Seefeld, Germany) and cone-beam computed tomography scans

  17. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models

  18. Assessing trends in observed and modelled climate extremes over Australia in relation to future projections

    International Nuclear Information System (INIS)

    Alexander, Lisa

    2007-01-01

Nine global coupled climate models were assessed for their ability to reproduce observed trends in a set of indices representing temperature and precipitation extremes over Australia. Observed trends for 1957-1999 were compared with individual and multi-model trends calculated over the same period. When averaged across Australia, the magnitude of trends and the interannual variability of temperature extremes were well simulated by most models, particularly for the warm-nights index. Except for consecutive dry days, the majority of models also reproduced the correct sign of trend for precipitation extremes. A bootstrapping technique was used to show that most models produce plausible trends when averaged over Australia, although only heavy precipitation days simulated from the multi-model ensemble showed significant skill at reproducing the observed spatial pattern of trends. Two of the models, with output from different forcings, showed that only with anthropogenic forcing included could the models capture the observed areally averaged trend for some of the temperature indices, but the forcing made little difference to the models' ability to reproduce the spatial pattern of trends over Australia. Future projected changes in extremes under three emissions scenarios were also analysed. Australia shows a shift towards significant warming of temperature extremes, with much longer dry spells interspersed with periods of increased extreme precipitation, irrespective of the scenario used. More work is required to determine whether regional projected changes over Australia are robust.
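A bootstrapping check of the kind described, asking whether a trend is plausible given sampling variability, can be sketched by resampling residuals around a fitted linear trend and comparing the trend with the resulting distribution. The index series below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def trend(y):
    """Least-squares linear trend (index units per year)."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

def bootstrap_trends(y, n_boot=1000):
    """Trend distribution from resampling (with replacement) the residuals
    around the fitted trend line."""
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return np.array([
        trend(slope * x + intercept + rng.choice(resid, size=len(y)))
        for _ in range(n_boot)
    ])

# Synthetic 43-year extremes index (e.g., warm nights) with an imposed trend
years = np.arange(43)
series = 0.05 * years + rng.normal(0, 0.5, size=43)

boots = bootstrap_trends(series)
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"trend = {trend(series):.3f}/yr, 95% range = [{lo:.3f}, {hi:.3f}]")
```

A modelled trend would then be judged plausible if it falls inside the bootstrap range derived from the observations (or vice versa), which is the spirit of the test applied in the study.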

  19. Magnet stability and reproducibility

    CERN Document Server

    Marks, N

    2010-01-01

Magnet stability and reproducibility have become increasingly important as greater precision and beams of smaller dimensions are required for research, medical, and other purposes. The observed causes of mechanical and electrical instability are introduced, and the engineering arrangements needed to minimize these problems are discussed; the resulting performance of a state-of-the-art synchrotron source (Diamond) is then presented. The need for orbit feedback to obtain the best possible beam stability is briefly introduced, omitting details of the necessary technical equipment, which are outside the scope of the presentation.

  20. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions under the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
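
The core of the ERF method, constructing one IBC set by grid-point averaging across GCMs, can be sketched as follows (the toy fields and variable names are hypothetical; a real implementation would operate on full 3-D GCM output at every boundary time step):

```python
import numpy as np

def ensemble_reconstructed_forcings(gcm_fields):
    """Average the initial/boundary-condition fields of several GCMs into
    a single forcing set: one field per variable, averaged grid point by
    grid point across models."""
    variables = gcm_fields[0].keys()
    return {var: np.mean([gcm[var] for gcm in gcm_fields], axis=0)
            for var in variables}

# Three hypothetical GCMs providing toy 2 x 2 boundary fields
gcms = [{"temperature": np.full((2, 2), 288.0 + i),
         "humidity": np.full((2, 2), 0.01 * (i + 1))}
        for i in range(3)]

erf = ensemble_reconstructed_forcings(gcms)
print(erf["temperature"][0, 0])  # 289.0 (the three-model mean)
```

The single averaged IBC set then drives one RCM run, rather than one run per GCM as in the conventional MME approach.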

  1. Assessment of a numerical model to reproduce event‐scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Measures, R.; Hicks, D. M.; Brasington, J.

    2016-01-01

    Abstract Numerical morphological modeling of braided rivers, using a physics‐based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth‐averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high‐flow event. Evaluation of model performance primarily focused upon using high‐resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach‐scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers. PMID:27708477

  2. Assessment of a numerical model to reproduce event-scale erosion and deposition distributions in a braided river.

    Science.gov (United States)

    Williams, R D; Measures, R; Hicks, D M; Brasington, J

    2016-08-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
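
The DEM-of-Difference comparison at the heart of this evaluation, differencing pre- and post-event surfaces and summing erosion and deposition volumes, can be sketched as follows (the grids, cell size, and detection threshold are illustrative, not the paper's values):

```python
import numpy as np

def sediment_budget(dem_before, dem_after, cell_area=1.0, threshold=0.05):
    """DEM-of-Difference budget: per-cell elevation change, with |change|
    below a detection threshold (m) treated as survey noise. Returns
    deposition and erosion volumes as positive numbers."""
    dod = dem_after - dem_before
    dod[np.abs(dod) < threshold] = 0.0
    deposition = dod[dod > 0].sum() * cell_area
    erosion = -dod[dod < 0].sum() * cell_area
    return deposition, erosion

# Toy 2 x 2 surfaces (m); real DEMs come from laser scanning + bathymetry
before = np.array([[1.0, 1.2], [0.8, 1.1]])
after = np.array([[1.3, 1.2], [0.6, 1.12]])  # +0.3, 0.0, -0.2, +0.02 m

dep, ero = sediment_budget(before, after, cell_area=4.0)  # 2 m grid cells
print(round(dep, 6), round(ero, 6))  # 1.2 0.8 (the +0.02 m cell is below threshold)
```

Observed and predicted budgets computed this way can then be compared against the factor-of-two accuracy that characterizes the bed load transport formula.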

  3. Reproducibility of image quality for moving objects using respiratory-gated computed tomography. A study using a phantom model

    International Nuclear Information System (INIS)

    Fukumitsu, Nobuyoshi; Ishida, Masaya; Terunuma, Toshiyuki

    2012-01-01

    Investigating the reproducibility of computed tomography (CT) image quality is essential in respiratory-gated radiation treatment planning for movable tumors. Seven series of regular and six series of irregular respiratory motions were performed using a thorax dynamic phantom. For the regular respiratory motions, the respiratory cycle was varied from 2.5 to 4 s and the amplitude from 4 to 10 mm. For the irregular respiratory motions, a cycle of 2.5 to 4 s or an amplitude of 4 to 10 mm was added to the base pattern (i.e., 3.5-s cycle, 6-mm amplitude) every three cycles. Images of the object were acquired six times using respiratory-gated data acquisition. The volume of the object was calculated, and the reproducibility of the volume was assessed from its variability. The registered images of the object were added, and the reproducibility of the shape was assessed from the degree of overlap of the objects. The variability of the volumes and shapes differed significantly as the respiratory cycle changed for regular respiratory motions. For irregular respiratory motion, shape reproducibility was further degraded: the percentage of overlap among the six images was 35.26% in the group mixing 2.5- and 3.5-s cycles. Amplitude changes did not produce significant differences in the variability of the volumes and shapes. Respiratory cycle changes thus reduced the reproducibility of image quality in respiratory-gated CT. (author)
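
The shape-reproducibility measure, the percentage of overlap among repeated images of the object, can be sketched with binary masks (a 1-D toy object is used here for brevity; the study's 35.26% figure came from 3-D CT volumes):

```python
import numpy as np

def overlap_percentage(masks):
    """Shared fraction of the segmented object across repeated images:
    intersection volume / union volume, as a percentage."""
    stack = np.stack(masks).astype(bool)
    intersection = np.logical_and.reduce(stack).sum()
    union = np.logical_or.reduce(stack).sum()
    return 100.0 * intersection / union

# Two toy 1-D "objects", the second shifted by 2 cells (invented data)
base = np.zeros(20, dtype=bool)
base[5:15] = True
shifted = np.zeros(20, dtype=bool)
shifted[7:17] = True

print(round(overlap_percentage([base, shifted]), 1))  # 66.7
```

Identical masks give 100%; any positional or shape variation between gated acquisitions pulls the overlap down.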

  4. Reproducibility of current perception threshold with the Neurometer(®) vs the Stimpod NMS450 peripheral nerve stimulator in healthy volunteers: an observational study.

    Science.gov (United States)

    Tsui, Ban C H; Shakespeare, Timothy J; Leung, Danika H; Tsui, Jeremy H; Corry, Gareth N

    2013-08-01

    Current methods of assessing nerve blocks, such as loss of perception to cold sensation, are subjective at best. Transcutaneous nerve stimulation is an alternative method that has previously been used to measure the current perception threshold (CPT) in individuals with neuropathic conditions, and various devices to measure CPT are commercially available. Nevertheless, the device must provide reproducible results to be used as an objective tool for assessing nerve blocks. We recruited ten healthy volunteers to examine CPT reproducibility using the Neurometer(®) and the Stimpod NMS450 peripheral nerve stimulator. Each subject's CPT was determined for the median (second digit) and ulnar (fifth digit) nerve sensory distributions on both hands - with the Neurometer at 5 Hz, 250 Hz, and 2000 Hz and with the Stimpod at pulse widths of 0.1 msec, 0.3 msec, 0.5 msec, and 1.0 msec, both at 5 Hz and 2 Hz. Intraclass correlation coefficients (ICC) were also calculated to assess reproducibility; acceptable ICCs were defined as ≥ 0.4. The ICC values for the Stimpod ranged from 0.425-0.79, depending on pulse width, digit, and stimulation; ICCs for the Neurometer were 0.615 and 0.735 at 250 and 2,000 Hz, respectively. These values were considered acceptable; however, the Neurometer performed less efficiently at 5 Hz (ICCs for the second and fifth digits were 0.292 and 0.318, respectively). Overall, the Stimpod device displayed good to excellent reproducibility in measuring CPT in healthy volunteers. The Neurometer displayed poor reproducibility at low frequency (5 Hz). These results suggest that peripheral nerve stimulators may be potential devices for measuring CPT to assess nerve blocks.
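
As a sketch of the reliability statistic involved, here is a one-way random-effects ICC(1,1) for test-retest data; this is one simple ICC variant, not necessarily the exact form used in the study, and the input values are invented:

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k-1) MSW)
    for an (n_subjects, k_sessions) array of repeated measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    ms_between = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfectly reproduced CPT readings (hypothetical thresholds) give ICC = 1
perfect = [[100.0, 100.0], [120.0, 120.0], [90.0, 90.0]]
print(icc_1_1(perfect))  # 1.0

# Small session-to-session differences pull the ICC below 1
print(round(icc_1_1([[100.0, 101.0], [120.0, 118.0], [90.0, 92.0]]), 3))
```

Against the study's criterion, any ICC ≥ 0.4 would be considered acceptable reproducibility.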

  5. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    Science.gov (United States)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we tried to apply a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to eco-physiological parameters. Following the selection, optimization of parameters was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon in each of eleven years. In an alternative run, errors at the last year (at the
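
The objective function described, a sum of relative errors between simulated and measured carbon pools over the years, might look like this (the arrays are invented placeholders for Biome-BGC output and field measurements):

```python
import numpy as np

def objective(simulated, measured):
    """Sum over carbon pools and years of the relative error
    |sim - obs| / obs: a scalar misfit a black-box optimizer such as
    Dakota can minimize by adjusting eco-physiological parameters."""
    simulated = np.asarray(simulated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sum(np.abs(simulated - measured) / measured))

# Invented carbon pools (rows) over three years (columns), kgC/m2
measured = np.array([[1.0, 2.0, 3.0],    # leaf C
                     [5.0, 6.0, 7.0]])   # woody C
simulated = measured * 1.1               # a run overestimating by 10%

print(round(objective(simulated, measured), 6))  # 0.6
```

An external optimizer repeatedly evaluates this scalar, rerunning the ecosystem model with trial parameter sets until the misfit stops improving.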

  6. Observability and synchronization of neuron models

    Science.gov (United States)

    Aguirre, Luis A.; Portes, Leonardo L.; Letellier, Christophe

    2017-10-01

    Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficients in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node and (ii) without having to estimate the phase.
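
For linear(ized) dynamics, observability can be tested with the rank of the Kalman observability matrix; this is a simplified sketch of the idea behind observability coefficients (the Jacobian below is a toy matrix, not one of the four neuron models studied):

```python
import numpy as np

def observability_matrix(A, C):
    """Kalman observability matrix [C; CA; ...; CA^(n-1)] for the
    linearization dx/dt = A x with measurement y = C x."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Toy Jacobian with two decoupled state variables
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])

C_partial = np.array([[1.0, 0.0]])  # record only x1: x2 stays invisible
C_both = np.array([[1.0, 1.0]])     # a mixture of both states

print(np.linalg.matrix_rank(observability_matrix(A, C_partial)))  # 1 (unobservable)
print(np.linalg.matrix_rank(observability_matrix(A, C_both)))     # 2 (observable)
```

Full rank means the recorded variable conveys the whole state; rank deficiency signals that some dynamics cannot be recovered from that measurement, which is why the choice of recorded variable matters.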

  7. Ability of an ensemble of regional climate models to reproduce weather regimes over Europe-Atlantic during the period 1961-2000

    Science.gov (United States)

    Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.

    2009-10-01

    One of the main concerns in regional climate modeling is to what extent limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models which is stronger in summer than in winter. This spread has two causes: (1) we are dealing with different models and (2) each RCM produces its own internal variability. As far as the day-to-day weather regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for large-scale circulation.
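
Counting the frequency of occurrence of weather regimes starts by assigning each day's large-scale field to the nearest regime composite; a minimal sketch with toy patterns (real studies typically derive the composites by cluster analysis of geopotential height anomalies):

```python
import numpy as np

def assign_regime(field, composites):
    """Assign a daily anomaly field to the nearest regime composite
    (minimum Euclidean distance); counting assignments per regime gives
    the frequency of occurrence."""
    distances = [np.linalg.norm(field - c) for c in composites]
    return int(np.argmin(distances))

# Two toy "regimes" on a 4-point grid (invented patterns)
composites = [np.array([1.0, 1.0, -1.0, -1.0]),
              np.array([-1.0, 1.0, 1.0, -1.0])]

day = np.array([0.9, 1.1, -0.8, -1.2])
print(assign_regime(day, composites))  # 0
```

Comparing the resulting day-by-day regime sequences between an RCM and its driving field is exactly where the ensemble above shows its largest discrepancies.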

  8. Ability of an ensemble of regional climate models to reproduce weather regimes over Europe-Atlantic during the period 1961-2000

    Energy Technology Data Exchange (ETDEWEB)

    Somot, S.; Deque, M. [Meteo-France CNRM/GMGEC CNRS/GAME, Toulouse (France); Sanchez-Gomez, Emilia

    2009-10-15

    One of the main concerns in regional climate modeling is to what extent limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models which is stronger in summer than in winter. This spread has two causes: (1) we are dealing with different models and (2) each RCM produces its own internal variability. As far as the day-to-day weather regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for large-scale circulation. (orig.)

  9. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination, or reproducibility of results is very poor. On the other hand, a defect can be detected on each of several subsequent examinations, giving high reliability, while the reproducibility of the results remains poor.

  10. The Need for Reproducibility

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-06-27

    The purpose of this presentation is to consider issues of reproducibility; specifically, whether bitwise reproducible computation is possible, whether computational research in DOE can improve its publication process, and whether reproducible results can be achieved apart from the peer review process.
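
A concrete illustration of why bitwise reproducibility is hard: floating-point addition is not associative, so changing the summation order, as parallel reductions routinely do, changes the result at the bit level:

```python
# Floating-point addition is not associative, so the order in which a
# parallel reduction combines partial sums changes the bit-level result.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]
reordered = (vals[0] + vals[2]) + (vals[1] + vals[3])

print(left_to_right)  # 1.0  (1e16 + 1.0 rounds back to 1e16, losing the 1.0)
print(reordered)      # 2.0
```

Identical source code run on a different number of processors can therefore legitimately produce different bit patterns, which is the crux of the bitwise-reproducibility question.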

  11. Dwarf novae in outburst: modelling the observations

    International Nuclear Information System (INIS)

    Pringle, J.E.; Verbunt, F.

    1986-01-01

    Time-dependent accretion-disc models are constructed and used to calculate theoretical spectra in order to try to fit the ultraviolet and optical observations of outbursts of the two dwarf novae VW Hydri and CN Orionis. It is found that the behaviour on the rise to outburst is the strongest discriminator between theoretical models. The mass-transfer burst model is able to fit the spectral behaviour for both objects. The disc-instability model is unable to fit the rise to outburst in VW Hydri, and gives a poor fit to the observations of CN Orionis. (author)

  12. Observations and Modeling of Atmospheric Radiance Structure

    National Research Council Canada - National Science Library

    Wintersteiner, Peter

    2001-01-01

    The overall purpose of the work that we have undertaken is to provide new capabilities for observing and modeling structured radiance in the atmosphere, particularly the non-LTE regions of the atmosphere...

  13. Model for behavior observation training programs

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.

    1987-01-01

    Continued behavior observation is mandated by ANSI/ANS 3.3. This paper presents a model for behavior observation training that is in accordance with this standard and the recommendations contained in US NRC publications. The model includes seventeen major topics or activities. Ten of these are discussed: Pretesting of supervisor's knowledge of behavior observation requirements, explanation of the goals of behavior observation programs, why behavior observation training programs are needed (legal and psychological issues), early indicators of emotional instability, use of videotaped interviews to demonstrate significant psychopathology, practice recording behaviors, what to do when unusual behaviors are observed, supervisor rationalizations for noncompliance, when to be especially vigilant, and prevention of emotional instability

  14. Measurement of cerebral blood flow by intravenous xenon-133 technique and a mobile system. Reproducibility using the Obrist model compared to total curve analysis

    DEFF Research Database (Denmark)

    Schroeder, T; Holstein, P; Lassen, N A

    1986-01-01

    and side-to-side asymmetry. Data were analysed according to the Obrist model and the results compared with those obtained using a model correcting for the air passage artifact. Reproducibility was of the same order of magnitude as reported using stationary equipment. The side-to-side CBF asymmetry...... was considerably more reproducible than CBF level. Using a single detector instead of five regional values averaged as the hemispheric flow increased standard deviation of CBF level by 10-20%, while the variation in asymmetry was doubled. In optimal measuring conditions the two models revealed no significant...... differences, but in low flow situations the artifact model yielded significantly more stable results. The present apparatus, equipped with 3-5 detectors covering each hemisphere, offers the opportunity of performing serial CBF measurements in situations not otherwise feasible....

  15. Development of a Three-Dimensional Hand Model Using Three-Dimensional Stereophotogrammetry: Assessment of Image Reproducibility.

    Directory of Open Access Journals (Sweden)

    Inge A Hoevenaren

    Full Text Available Using three-dimensional (3D) stereophotogrammetry, precise images and reconstructions of the human body can be produced. Over the last few years, this technique has mainly been developed in the field of maxillofacial reconstructive surgery, creating fusion images with computed tomography (CT) data for precise planning and prediction of treatment outcome. However, in hand surgery 3D stereophotogrammetry is not yet being used in clinical settings. A total of 34 three-dimensional hand photographs were analyzed to investigate reproducibility. For every individual, 3D photographs were captured at two different time points (baseline T0 and one week later T1). Using two different registration methods, the reproducibility of the methods was analyzed. Furthermore, the differences between 3D photos of men and women were compared in a distance map as a first clinical pilot testing our registration method. The absolute mean registration error for the complete hand was 1.46 mm. This reduced to an error of 0.56 mm when isolating the region to the palm of the hand. When comparing hands of both sexes, the male hand was larger (broader base and longer fingers) than the female hand. This study shows that 3D stereophotogrammetry can produce reproducible images of the hand without harmful side effects for the patient, proving to be a reliable method for soft tissue analysis. Its potential use in everyday practice of hand surgery needs to be further explored.
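
The registration error reported can be thought of as a mean distance between corresponding surface points of two registered 3D photographs; a simplified sketch (real surface comparisons use nearest-point distances on dense meshes, and the coordinates below are invented):

```python
import numpy as np

def mean_registration_error(points_a, points_b):
    """Mean Euclidean distance (mm) between corresponding surface points
    of two registered 3D photographs. This assumes known point
    correspondences for simplicity."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Three toy landmarks (mm); the second cloud is offset by 1 mm along x
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
b = a + np.array([1.0, 0.0, 0.0])

print(mean_registration_error(a, b))  # 1.0
```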

  16. REPRODUCING THE OBSERVED ABUNDANCES IN RCB AND HdC STARS WITH POST-DOUBLE-DEGENERATE MERGER MODELS—CONSTRAINTS ON MERGER AND POST-MERGER SIMULATIONS AND PHYSICS PROCESSES

    International Nuclear Information System (INIS)

    Menon, Athira; Herwig, Falk; Denissenkov, Pavel A.; Clayton, Geoffrey C.; Staff, Jan; Pignatari, Marco; Paxton, Bill

    2013-01-01

    The R Coronae Borealis (RCB) stars are hydrogen-deficient, variable stars that are most likely the result of He-CO WD mergers. They display extremely low oxygen isotopic ratios, 16O/18O ≅ 1-10, 12C/13C ≥ 100, and enhancements up to 2.6 dex in F and in s-process elements from Zn to La, compared to solar. These abundances provide stringent constraints on the physical processes during and after the double-degenerate merger. As shown previously, O-isotopic ratios observed in RCB stars cannot result from the dynamic double-degenerate merger phase, and we now investigate the role of the long-term one-dimensional spherical post-merger evolution and nucleosynthesis based on realistic hydrodynamic merger progenitor models. We adopt a model for extra envelope mixing to represent processes driven by rotation originating in the dynamical merger. Comprehensive nucleosynthesis post-processing simulations for these stellar evolution models reproduce, for the first time, the full range of the observed abundances for almost all the elements measured in RCB stars: 16O/18O ratios between 9 and 15, C-isotopic ratios above 100, and ~1.4-2.35 dex F enhancements, along with enrichments in s-process elements. The nucleosynthesis processes in our models constrain the length and temperature in the dynamic merger shell-of-fire feature as well as the envelope mixing in the post-merger phase. s-process elements originate either in the shell-of-fire merger feature or during the post-merger evolution, but the contribution from the asymptotic giant branch progenitors is negligible. The post-merger envelope mixing must eventually cease ~10^6 yr after the dynamic merger phase before the star enters the RCB phase.

  17. Skills of General Circulation and Earth System Models in reproducing streamflow to the ocean: the case of Congo river

    Science.gov (United States)

    Santini, M.; Caporaso, L.

    2017-12-01

    Despite the importance of water resources in the context of climate change, it is still difficult to correctly simulate the freshwater cycle over land via General Circulation and Earth System Models (GCMs and ESMs). Existing efforts from the Climate Model Intercomparison Project 5 (CMIP5) were mainly devoted to the validation of atmospheric variables like temperature and precipitation, with little attention to discharge. Here we investigate the present-day performance of GCMs and ESMs participating in CMIP5 in simulating the discharge of the river Congo to the sea, thanks to: i) the long-term availability of discharge data for the Kinshasa hydrological station, representative of more than 95% of the water flowing in the whole catchment; and ii) the river's still limited alteration by human intervention, which enables comparison with the (mostly) natural streamflow simulated within CMIP5. Our findings suggest that most models overestimate the streamflow in terms of seasonal cycle, especially in late winter and spring, while overestimation and variability across models are lower in late summer. Weighted ensemble means were also calculated, with weights based on simulation performance as measured by several metrics, showing some improvement of the results. Although simulated inter-monthly and inter-annual percent anomalies do not appear significantly different from those in observed data, when translated into well-consolidated indicators of drought attributes (frequency, magnitude, timing, duration), usually adopted for more immediate communication to stakeholders and decision makers, such anomalies can be misleading. These inconsistencies produce incorrect assessments for water management planning and infrastructure (e.g. dams or irrigated areas), especially if models are used instead of measurements, as in the case of ungauged basins or basins with insufficient data, as well as when relying on models for future estimates without a preliminary quantification of model biases.
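
A weighted ensemble mean of the kind mentioned, with weights derived from per-model skill metrics, can be sketched as follows (the discharge values and skill scores are invented):

```python
import numpy as np

def weighted_ensemble_mean(simulations, skill_scores):
    """Combine model series using weights proportional to (hypothetical)
    per-model skill scores, normalized to sum to one."""
    w = np.asarray(skill_scores, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(simulations, dtype=float), axes=1)

# Two toy models' monthly Congo discharge (m3/s); model A is more skilful
sims = [np.array([40000.0, 42000.0]),
        np.array([50000.0, 52000.0])]

print(weighted_ensemble_mean(sims, [3.0, 1.0]))  # [42500. 44500.]
```

Equal weights recover the plain multi-model mean; skill-based weights pull the ensemble toward the better-performing members.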

  18. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation measures, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane but critical step in application workflows. Translation steps can introduce errors and misrepresentations of data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the ParFlow integrated hydrologic model to demonstrate the impact of translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community-approved collection of data translation components. The components should be self-contained, composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement.
Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
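
The composable-unit idea can be sketched by chaining small, self-contained translation functions into a single streaming pass (the field names and unit conversion are hypothetical examples, not the workflow's actual variables):

```python
from functools import reduce

def compose(*steps):
    """Chain small, self-contained translation units into one streaming
    pass over each record, avoiding intermediate files."""
    def pipeline(record):
        return reduce(lambda rec, step: step(rec), steps, record)
    return pipeline

# Hypothetical units: subset to needed fields, then convert units
def subset(record):
    return {key: record[key] for key in ("time", "temperature_K")}

def kelvin_to_celsius(record):
    record = dict(record)
    record["temperature_C"] = record.pop("temperature_K") - 273.15
    return record

translate = compose(subset, kelvin_to_celsius)
result = translate({"time": 0, "temperature_K": 300.0, "windspeed": 3.2})
print(result)
```

Because each unit is tiny and pure, it can be unit-tested in isolation, and the composed pipeline touches each record exactly once.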

  19. A Community Data Model for Hydrologic Observations

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used and provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services, hosted by the San Diego Supercomputer Center, make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Simple Object Access Protocol (SOAP) capability. 
This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; and (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis.

  20. Reproducibility study of [{sup 18}F]FPP(RGD){sub 2} uptake in murine models of human tumor xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Edwin; Liu, Shuangdong; Chin, Frederick; Cheng, Zhen [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Gowrishankar, Gayatri; Yaghoubi, Shahriar [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Wedgeworth, James Patrick [Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Berndorff, Dietmar; Gekeler, Volker [Bayer Schering Pharma AG, Global Drug Discovery, Berlin (Germany); Gambhir, Sanjiv S. [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Canary Center at Stanford for Cancer Early Detection, Nuclear Medicine, Departments of Radiology and Bioengineering, Molecular Imaging Program at Stanford, Stanford, CA (United States)

    2011-04-15

An ¹⁸F-labeled PEGylated arginine-glycine-aspartic acid (RGD) dimer, [¹⁸F]FPP(RGD)₂, has been used to image tumor αᵥβ₃ integrin levels in preclinical and clinical studies. Serial positron emission tomography (PET) studies may be useful for monitoring antiangiogenic therapy response or for drug screening; however, the reproducibility of serial scans has not been determined for this PET probe. The purpose of this study was to determine the reproducibility of the integrin αᵥβ₃-targeted PET probe [¹⁸F]FPP(RGD)₂ using small animal PET. Human HCT116 colon cancer xenografts were implanted into nude mice (n = 12) in the breast and scapular region and grown to mean diameters of 5-15 mm for approximately 2.5 weeks. A 3-min acquisition was performed on a small animal PET scanner approximately 1 h after administration of [¹⁸F]FPP(RGD)₂ (1.9-3.8 MBq, 50-100 μCi) via the tail vein. A second small animal PET scan was performed approximately 6 h later after reinjection of the probe to assess for reproducibility. Images were analyzed by drawing an ellipsoidal region of interest (ROI) around the tumor xenograft activity. Percentage injected dose per gram (%ID/g) values were calculated from the mean or maximum activity in the ROIs. Coefficients of variation and differences in %ID/g values between studies from the same day were calculated to determine the reproducibility. The coefficient of variation (mean ± SD) for %ID_mean/g and %ID_max/g values between [¹⁸F]FPP(RGD)₂ small animal PET scans performed 6 h apart on the same day were 11.1 ± 7.6% and 10.4 ± 9.3%, respectively. The corresponding differences in %ID_mean/g and %ID_max/g values between scans were -0.025 ± 0.067 and -0.039 ± 0.426. Immunofluorescence studies revealed a direct relationship between extent of αᵥβ₃ integrin expression in tumors and tumor vasculature
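The test/retest statistics reported above (per-tumor coefficient of variation and paired differences in %ID/g) take only a few lines of Python; all values below are hypothetical, not data from the study:

```python
from statistics import mean, stdev

def coefficient_of_variation(scan1, scan2):
    """Per-tumor CV (%) of paired %ID/g values, summarized as mean and SD,
    mirroring the same-day test/retest design described above."""
    cvs = []
    for a, b in zip(scan1, scan2):
        m = mean([a, b])
        s = stdev([a, b])          # sample SD of a pair = |a - b| / sqrt(2)
        cvs.append(100.0 * s / m)
    return mean(cvs), (stdev(cvs) if len(cvs) > 1 else 0.0)

# Hypothetical %ID/g values for the same tumors imaged 6 h apart.
first  = [2.1, 3.4, 1.8, 2.9]
second = [2.3, 3.1, 1.9, 2.8]
cv_mean, cv_sd = coefficient_of_variation(first, second)
diffs = [b - a for a, b in zip(first, second)]   # paired scan differences
```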

  1. 3D-modeling of the spine using EOS imaging system: Inter-reader reproducibility and reliability.

    Directory of Open Access Journals (Sweden)

    Johannes Rehm

Full Text Available To retrospectively assess the interreader reproducibility and reliability of EOS 3D full spine reconstructions in patients with adolescent idiopathic scoliosis (AIS). 73 patients with a mean age of 17 years and a moderate AIS (median Cobb angle 18.2°) obtained low-dose standing biplanar radiographs with EOS. Two independent readers performed 3D reconstructions of the spine with the "full-spine" method, adjusting the bone contour of every thoracic and lumbar vertebra (Th1-L5). Interreader reproducibility was assessed regarding rotation of every single vertebra in the coronal (i.e. frontal), sagittal (i.e. lateral), and axial planes, T1/T12 kyphosis, T4/T12 kyphosis, L1/L5 lordosis, L1/S1 lordosis and pelvic parameters. Radiation exposure, scan time and 3D reconstruction time were recorded. Intraclass correlation (ICC) ranged between 0.83 and 0.98 for frontal vertebral rotation, between 0.94 and 0.99 for lateral vertebral rotation and between 0.51 and 0.88 for axial vertebral rotation. ICC was 0.92 for T1/T12 kyphosis, 0.95 for T4/T12 kyphosis, 0.90 for L1/L5 lordosis, 0.85 for L1/S1 lordosis, 0.97 for pelvic incidence, 0.96 for sacral slope, 0.98 for sagittal pelvic tilt and 0.94 for lateral pelvic tilt. The mean time for reconstruction was 14.9 minutes (reader 1: 14.6 minutes, reader 2: 15.2 minutes, p<0.0001). The mean total absorbed dose was 593.4 ± 212.3 μGy per patient. EOS "full spine" 3D angle measurement of vertebral rotation proved to be reliable and was performed in an acceptable reconstruction time. Interreader reproducibility of axial rotation was limited to some degree in the upper and middle thoracic spine due to the obtuse angulation of the pedicles and the processi spinosi in the frontal view, somewhat complicating their delineation.
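As a sketch of how such agreement statistics are computed, here is a one-way random, single-measure ICC in plain Python; the study may well have used a different ICC variant, and the ratings below are invented:

```python
from statistics import mean

def icc_oneway(ratings):
    """One-way random, single-measure ICC(1,1):
        (MSB - MSW) / (MSB + (k-1)*MSW)
    `ratings` is a list of per-subject lists, one value per reader.
    A generic agreement index, not necessarily the variant of the paper."""
    n = len(ratings)       # subjects (e.g. vertebrae or angles)
    k = len(ratings[0])    # readers
    grand = mean(v for row in ratings for v in row)
    subj_means = [mean(row) for row in ratings]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(ratings, subj_means) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement gives ICC = 1; disagreement between readers inflates the within-subject mean square and pulls the value toward 0.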

  2. Respiratory-Gated Helical Computed Tomography of Lung: Reproducibility of Small Volumes in an Ex Vivo Model

    International Nuclear Information System (INIS)

    Biederer, Juergen; Dinkel, Julien; Bolte, Hendrik; Welzel, Thomas; Hoffmann, Beata M.Sc.; Thierfelder, Carsten; Mende, Ulrich; Debus, Juergen; Heller, Martin; Kauczor, Hans-Ulrich

    2007-01-01

Purpose: Motion-adapted radiotherapy with gated irradiation or tracking of tumor positions requires dedicated imaging techniques such as four-dimensional (4D) helical computed tomography (CT) for patient selection and treatment planning. The objective was to evaluate the reproducibility of spatial information for small objects on respiratory-gated 4D helical CT using computer-assisted volumetry of lung nodules in a ventilated ex vivo system. Methods and Materials: Five porcine lungs were inflated inside a chest phantom and prepared with 55 artificial nodules (mean diameter, 8.4 mm ± 1.8). The lungs were respirated by a flexible diaphragm and scanned with 40-row detector CT (collimation, 24 × 1.2 mm; pitch, 0.1; rotation time, 1 s; slice thickness, 1.5 mm; increment, 0.8 mm). The 4D-CT scans acquired during respiration (eight breaths per minute) and reconstructed at 0-100% inspiration, as well as equivalent static scans, were scored for motion-related artifacts (0 = absent to 3 = relevant). The reproducibility of nodule volumetry (three readers) was assessed using the variation coefficient (VC). Results: The mean volumes from the static and dynamic inspiratory scans were equal (364.9 and 360.8 mm³, respectively; p = 0.24). The static and dynamic end-expiratory volumes were slightly greater (371.9 and 369.7 mm³, respectively; p = 0.019). The VC for volumetry (static) was 3.1%, with no significant difference between 20 apical and 20 caudal nodules (2.6% and 3.5%; p = 0.25). In dynamic scans, the VC was greater (3.9%; p = 0.004; apical and caudal, 2.6% and 4.9%; p = 0.004), with a significant difference between static and dynamic in the 20 caudal nodules (3.5% and 4.9%; p = 0.015). This was consistent with greater motion-related artifacts and image noise at the diaphragm (p < 0.05). The VC for interobserver variability was 0.6%. Conclusion: Residual motion-related artifacts had only minimal influence on volumetry of small solid lesions. This indicates a high reproducibility of

  3. Magni Reproducibility Example

    DEFF Research Database (Denmark)

    2016-01-01

An example of how to use the magni.reproducibility package for storing metadata along with results from a computational experiment. The example is based on simulating the Mandelbrot set.

  4. Spatio-temporal reproducibility of the microbial food web structure associated with the change in temperature: Long-term observations in the Adriatic Sea

    Science.gov (United States)

    Šolić, Mladen; Grbec, Branka; Matić, Frano; Šantić, Danijela; Šestanović, Stefanija; Ninčević Gladan, Živana; Bojanić, Natalia; Ordulj, Marin; Jozić, Slaven; Vrdoljak, Ana

    2018-02-01

Global and atmospheric climate change is altering the thermal conditions in the Adriatic Sea and, consequently, the marine ecosystem. Along the eastern Adriatic coast, sea surface temperature (SST) increased by an average of 1.03 °C during the period from 1979 to 2015, while in the recent period, starting from 2008, a strong, almost linear upward trend of 0.013 °C/month was noted. Being mainly oligotrophic, the middle Adriatic Sea is characterized by the important role played by the microbial food web in the production and transfer of biomass and energy towards higher trophic levels. It is very important to understand the effect of warming on microbial communities, since small temperature increases in surface seawater can greatly modify the microbial role in the global carbon cycle. In this study, the Self-Organizing Map (SOM) procedure was used to analyse the time series of a number of microbial parameters at two stations with different trophic status in the central Adriatic Sea. The results show that responses of the microbial food web (MFW) structure to temperature changes are reproducible in time. Furthermore, qualitatively similar changes in the structure of the MFW occurred regardless of the trophic status. The rise in temperature was associated with: (1) the increasing importance of microbial heterotrophic activities (increased bacterial growth and increased abundance of bacterial predators, particularly heterotrophic nanoflagellates) and (2) the increasing importance of autotrophic picoplankton (APP) in the MFW.
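For readers unfamiliar with the SOM procedure mentioned above, here is a deliberately minimal 1-D Self-Organizing Map in pure Python; it is a toy illustration of the algorithm, not the authors' configuration:

```python
import math, random

def train_som(data, n_nodes=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Self-Organizing Map for equal-length vectors.
    Nodes compete for each sample; the winner and its grid neighbors
    are pulled toward the sample, with decaying rate and neighborhood."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # learning rate decays
        sigma = max(0.5, sigma0 * (1 - t / epochs))  # neighborhood shrinks
        for x in data:
            # best-matching unit = node closest to the sample
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(dim)))
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for d in range(dim):
                    nodes[i][d] += lr * h * (x[d] - nodes[i][d])
    return nodes

# Two synthetic "regimes" (e.g. cold vs warm microbial-parameter vectors).
cold = [[0.1 + 0.01 * i, 0.2] for i in range(10)]
warm = [[0.9 - 0.01 * i, 0.8] for i in range(10)]
som = train_som(cold + warm)
```

After training, samples from the two regimes map to different best-matching units, which is how SOM time-series analysis groups observations into qualitatively similar states.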

  5. Observations and NLTE modeling of Ellerman bombs

    Science.gov (United States)

    Berlicki, A.; Heinzel, P.

    2014-07-01

Context. Ellerman bombs (EBs) are short-lived, compact, and spatially well localized emission structures that are observed well in the wings of the hydrogen Hα line. EBs are also observed in the chromospheric CaII lines and in UV continua as bright points located within active regions. Hα line profiles of EBs show a deep absorption at the line center and enhanced emission in the line wings, with maxima around ±1 Å from the line center. Similar shapes of the line profiles are observed for the CaII IR line at 8542 Å. In the CaII H and K lines the emission peaks are much stronger, and EB emission is also enhanced in the line center. Aims: It is generally accepted that EBs may be considered as compact microflares located in the lower solar atmosphere that contribute to the heating of these low-lying regions, close to the temperature minimum of the atmosphere. However, it is still not clear where exactly the emission of EBs is formed in the solar atmosphere. High-resolution spectrophotometric observations of EBs were used to determine their physical parameters and to construct semi-empirical models. The obtained models allow us to determine the position of EBs in the solar atmosphere, as well as the vertical structure of the activated EB atmosphere. Methods: In our analysis we used observations of EBs obtained in the Hα and CaII H lines with the Dutch Open Telescope (DOT). These one-hour-long simultaneous sequences, obtained with high temporal and spatial resolution, were used to determine the line emissions. To analyze them, we used NLTE numerical codes to construct a grid of 243 semi-empirical models simulating EB structures. In this way, the observed emission could be compared with the synthetic line spectra calculated for all such models. Results: For a specific model we found reasonable agreement between the observed and theoretical emission, and thus we consider this model a good approximation to EB atmospheres. This model is characterized by an

  6. Verification of land-atmosphere coupling in forecast models, reanalyses and land surface models using flux site observations.

    Science.gov (United States)

    Dirmeyer, Paul A; Chen, Liang; Wu, Jiexia; Shin, Chul-Su; Huang, Bohua; Cash, Benjamin A; Bosilovich, Michael G; Mahanama, Sarith; Koster, Randal D; Santanello, Joseph A; Ek, Michael B; Balsamo, Gianpaolo; Dutra, Emanuel; Lawrence, D M

    2018-02-01

We confront four model systems in three configurations (LSM, LSM+GCM, and reanalysis) with global flux tower observations to validate states, surface fluxes, and coupling indices between land and atmosphere. Models clearly under-represent the feedback of surface fluxes on boundary layer properties (the atmospheric leg of land-atmosphere coupling), and may over-represent the connection between soil moisture and surface fluxes (the terrestrial leg). Models generally under-represent spatial and temporal variability relative to observations, which is at least partially an artifact of the differences in spatial scale between model grid boxes and flux tower footprints. All models are biased high in near-surface humidity and downward shortwave radiation, struggle to represent precipitation accurately, and show serious problems in reproducing surface albedos. These errors create challenges for models to partition surface energy properly, and the errors are traceable through the surface energy and water cycles. The spatial distributions of the amplitude and phase of the annual cycle (first harmonic) are generally well reproduced, but the biases in the means tend to be reflected in these amplitudes. Interannual variability is also a challenge for models to reproduce. Our analysis illuminates targets for coupled land-atmosphere model development, as well as the value of long-term, globally distributed observational monitoring.
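The "terrestrial leg" is commonly quantified with a coupling index in the spirit of Dirmeyer (2011): the regression slope of surface flux on soil moisture scaled by the soil moisture variability, which equals r·σ(flux). A minimal sketch with made-up anomaly values, not necessarily the paper's exact metric:

```python
from statistics import mean, pstdev

def terrestrial_coupling_index(soil_moisture, surface_flux):
    """Terrestrial-leg coupling index: slope of flux vs soil moisture
    times sigma(soil moisture), i.e. cov(SM, flux) / sigma(SM),
    which equals corr(SM, flux) * sigma(flux). Illustrative only."""
    n = len(soil_moisture)
    sm_bar, fl_bar = mean(soil_moisture), mean(surface_flux)
    cov = sum((s - sm_bar) * (f - fl_bar)
              for s, f in zip(soil_moisture, surface_flux)) / n
    sd_sm = pstdev(soil_moisture)
    return cov / sd_sm if sd_sm > 0 else 0.0

# Hypothetical seasonal anomalies: soil moisture and latent heat flux.
sm = [0.1, 0.2, 0.3, 0.4]
lh = [10.0, 20.0, 30.0, 40.0]
tci = terrestrial_coupling_index(sm, lh)
```

With perfectly correlated series, the index reduces to the flux standard deviation; weak correlation or low soil moisture variability drives it toward zero, matching the intuition that both a sensitive flux and a variable soil state are needed for coupling.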

  7. Interpretation of TOMS Observations of Tropical Tropospheric Ozone with a Global Model and In Situ Observations

    Science.gov (United States)

    Martin, Randall V.; Jacob, Daniel J.; Logan, Jennifer A.; Bey, Isabelle; Yantosca, Robert M.; Staudt, Amanda C.; Fiore, Arlene M.; Duncan, Bryan N.; Liu, Hongyu; Ginoux, Paul

    2004-01-01

We interpret the distribution of tropical tropospheric ozone columns (TTOCs) from the Total Ozone Mapping Spectrometer (TOMS) by using a global three-dimensional model of tropospheric chemistry (GEOS-CHEM) and additional information from in situ observations. The GEOS-CHEM TTOCs capture 44% of the variance of monthly mean TOMS TTOCs from the convective cloud differential (CCD) method with no global bias. Major discrepancies are found over northern Africa and south Asia, where the TOMS TTOCs do not capture the seasonal enhancements from biomass burning found in the model and in aircraft observations. A characteristic feature of these northern tropical enhancements, in contrast to southern tropical enhancements, is that they are driven by the lower troposphere, where the sensitivity of TOMS is poor due to Rayleigh scattering. We develop an efficiency correction to the TOMS retrieval algorithm that accounts for the variability of ozone in the lower troposphere. This efficiency correction increases TTOCs over biomass burning regions by 3-5 Dobson units (DU) and decreases them by 2-5 DU over oceanic regions, improving the agreement between CCD TTOCs and in situ observations. Applying the correction to CCD TTOCs reduces by approximately DU the magnitude of the "tropical Atlantic paradox" [Thompson et al., 2000], i.e., the presence of a TTOC enhancement over the southern tropical Atlantic during the northern African biomass burning season in December-February. We reproduce the remainder of the paradox in the model and explain it by the combination of upper tropospheric ozone production from lightning NOx, persistent subsidence over the southern tropical Atlantic as part of the Walker circulation, and cross-equatorial transport of upper tropospheric ozone from northern midlatitudes in the African "westerly duct." These processes in the model can also account for the observed 13-17 DU persistent wave-1 pattern in TTOCs, with a maximum above the tropical Atlantic and a minimum

  8. Comparing soil moisture memory in satellite observations and models

    Science.gov (United States)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

A major obstacle to a correct parametrization of soil processes in large-scale global land surface models is the lack of long-term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and of the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from the ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel zone and the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to what extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL increases for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
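One common definition of the ACL memory proxy is the first lag at which the autocorrelation function drops below 1/e; the study may define it differently, so treat the threshold here as an assumption. A self-contained sketch on a synthetic AR(1) "soil moisture" series:

```python
import math, random

def autocorr(x, lag):
    """Lag-k autocorrelation of a series (population formula)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
    return cov / var

def autocorrelation_length(x, threshold=1 / math.e):
    """First lag at which the autocorrelation drops below 1/e,
    one common definition of the soil moisture memory proxy."""
    for lag in range(1, len(x) // 2):
        if autocorr(x, lag) < threshold:
            return lag
    return len(x) // 2

# Synthetic AR(1) 'soil moisture anomaly' with known persistence (phi = 0.9).
rng = random.Random(42)
series = [0.0]
for _ in range(4999):
    series.append(0.9 * series[-1] + rng.gauss(0, 1))
acl = autocorrelation_length(series)
```

For an AR(1) process with persistence φ = 0.9 the theoretical e-folding lag is 1/ln(1/0.9) ≈ 9.5 steps, so the estimated ACL should land near 10; subtracting a seasonal cycle before this computation shortens the ACL, as the abstract notes.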

  9. Observational tests of FRW world models

    International Nuclear Information System (INIS)

    Lahav, Ofer

    2002-01-01

Observational tests for the cosmological principle are reviewed. Assuming the FRW metric, we then summarize estimates of cosmological parameters from various datasets, in particular the cosmic microwave background and the 2dF galaxy redshift survey. These and other analyses suggest a best-fit Λ-cold dark matter model with Ω_m = 1 − Ω_Λ ∼ 0.3 and H₀ ∼ 70 km s⁻¹ Mpc⁻¹. It is remarkable that different measurements converge to this 'concordance model', although it remains to be seen if the two main components of this model, the dark matter and the dark energy, are real entities or just 'epicycles'. We point out some open questions related to this fashionable model

  10. Reproducibility of Carbon and Water Cycle by an Ecosystem Process Based Model Using a Weather Generator and Effect of Temporal Concentration of Precipitation on Model Outputs

    Science.gov (United States)

    Miyauchi, T.; Machimura, T.

    2014-12-01

GCMs are generally used to produce input weather data for the simulation of carbon and water cycles by ecosystem process based models under climate change; however, their temporal resolution is sometimes incompatible with model requirements. A weather generator (WG) is then used for temporal downscaling of input weather data, in which case the effect of the WG algorithm on the reproducibility of ecosystem model outputs must be assessed. In this study, carbon and water cycles simulated by the Biome-BGC model using weather data measured and generated by the CLIMGEN weather generator were compared. The measured weather data (daily precipitation, maximum and minimum air temperature) at a few sites over 30 years were collected from NNDC Online weather data. The generated weather data were produced by CLIMGEN parameterized using the measured weather data. NPP, heterotrophic respiration (HR), NEE and water outflow were simulated by Biome-BGC using measured and generated weather data. In the case of a deciduous broadleaf forest in Lushi, Henan Province, China, the 30-year average monthly NPP by WG was 10% larger than that by measured weather in the growing season. HR by WG was larger than that by measured weather in all months, by 15% on average. NEE by WG was more negative in winter and was close to that by measured weather in summer. These differences in the carbon cycle arose because the soil water content by WG was larger than that by measured weather. The difference between monthly water outflow by WG and by measured weather was large and variable, and annual outflow by WG was 50% of that by measured weather. The inconsistency in carbon and water cycles between WG and measured weather was suggested to be affected by the difference in temporal concentration of precipitation, which was assessed.
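The occurrence part of a Richardson-type weather generator (the family CLIMGEN belongs to) is a two-state Markov chain fitted to the wet/dry day sequence; the sketch below is a simplified stand-in, not CLIMGEN's actual implementation:

```python
import random

def fit_markov(precip, wet_threshold=0.1):
    """Estimate P(wet|wet) and P(wet|dry) from a daily precipitation record,
    the occurrence component of a Richardson-type weather generator."""
    wet = [p > wet_threshold for p in precip]
    n_ww = n_w = n_dw = n_d = 0
    for prev, cur in zip(wet, wet[1:]):
        if prev:
            n_w += 1
            n_ww += cur
        else:
            n_d += 1
            n_dw += cur
    return n_ww / n_w, n_dw / n_d

def generate_occurrence(p_ww, p_dw, n_days, seed=0):
    """Simulate a wet/dry day sequence from the fitted transition probabilities."""
    rng = random.Random(seed)
    state, out = False, []
    for _ in range(n_days):
        p = p_ww if state else p_dw
        state = rng.random() < p
        out.append(state)
    return out

# Synthetic record: persistent 4-day wet spells followed by 6 dry days.
record = ([5.0] * 4 + [0.0] * 6) * 30
p_ww, p_dw = fit_markov(record)
sim = generate_occurrence(p_ww, p_dw, 5000)
```

The temporal concentration of precipitation discussed above is exactly what the pair (P(wet|wet), P(wet|dry)) controls: a higher P(wet|wet) at the same wet-day fraction clusters rain into longer spells, changing soil water content even when totals match.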

  11. Evaluation of CNN as anthropomorphic model observer

    Science.gov (United States)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We compare the CNN MO to alternative MOs currently being proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of CNNs, and other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.
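For context, the CHO baseline reduces each image to a few channel responses and applies a Hotelling template in that reduced space. The sketch below works directly with 2-D "channel outputs" drawn from two Gaussian classes; it is a toy illustration of the Hotelling step, not the paper's observer:

```python
import random

def hotelling_template(present, absent):
    """Hotelling observer in a 2-D channel space: w = S^-1 (m1 - m0),
    with S the average of the two intra-class covariance matrices.
    Real CHOs first project images through channels; here the channel
    outputs are simulated directly."""
    def mean_vec(data):
        return [sum(v[d] for v in data) / len(data) for d in (0, 1)]
    def cov(data, m):
        c = [[0.0, 0.0], [0.0, 0.0]]
        for v in data:
            for i in (0, 1):
                for j in (0, 1):
                    c[i][j] += (v[i] - m[i]) * (v[j] - m[j]) / len(data)
        return c
    m1, m0 = mean_vec(present), mean_vec(absent)
    c1, c0 = cov(present, m1), cov(absent, m0)
    s = [[(c1[i][j] + c0[i][j]) / 2 for j in (0, 1)] for i in (0, 1)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    sinv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [sinv[0][0] * dm[0] + sinv[0][1] * dm[1],
            sinv[1][0] * dm[0] + sinv[1][1] * dm[1]]

# Simulated channel outputs: signal-absent vs signal-present classes.
rng = random.Random(1)
absent  = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)]
present = [[rng.gauss(2, 1), rng.gauss(1, 1)] for _ in range(500)]
w = hotelling_template(present, absent)
score = lambda v: w[0] * v[0] + w[1] * v[1]   # linear decision statistic
```

The scalar score separates the two classes; detectability metrics such as SNR or AUC are then computed from the two score distributions.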

  12. Modeling and interpretation of line observations*

    Directory of Open Access Journals (Sweden)

    Kamp Inga

    2015-01-01

Full Text Available Models for the interpretation of line observations from protoplanetary disks are summarized. The spectrum ranges from 1D LTE slab models to 2D thermo-chemical radiative transfer models, and their use depends largely on the type/nature of observational data that is analyzed. I discuss the various types of observational data and their interpretation in the context of disk physical and chemical properties. The simplest spatially and spectrally unresolved data are line fluxes, which can be interpreted using so-called Boltzmann diagrams. The interpretation is often tricky due to optical depth and non-LTE effects and requires care. Line profiles contain kinematic information and thus indirectly the spatial origin of the emission. Using series of line profiles, we can for example deduce radial temperature gradients in disks (the CO pure rotational ladder). Spectro-astrometry of e.g. CO ro-vibrational line profiles probes the disk structure in the 1–30 AU region, where planet formation through core accretion should be most efficient. Spatially and spectrally resolved line images from (sub)mm interferometers are the richest datasets we have to date, and they enable us to unravel exciting details of the radial and vertical disk structure such as winds and asymmetries.
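A Boltzmann (rotation) diagram interpretation of line fluxes reduces to fitting a straight line: with upper-level energies expressed in kelvin, ln(N_u/g_u) = const − E_u/T_ex, so the excitation temperature is −1/slope. A sketch with synthetic, noise-free level populations (all values illustrative):

```python
def excitation_temperature(e_upper_K, log_column):
    """Least-squares line through a Boltzmann (rotation) diagram:
    ln(N_u/g_u) = const - E_u / T_ex, with E_u in kelvin,
    so T_ex = -1/slope."""
    n = len(e_upper_K)
    ex = sum(e_upper_K) / n
    ly = sum(log_column) / n
    sxx = sum((e - ex) ** 2 for e in e_upper_K)
    sxy = sum((e - ex) * (y - ly) for e, y in zip(e_upper_K, log_column))
    slope = sxy / sxx
    return -1.0 / slope

# Synthetic ladder at a known T_ex = 75 K (energies and offsets made up).
T_true = 75.0
e_u = [5.5, 16.6, 33.2, 55.3, 83.0]      # upper-level energies [K]
y = [30.0 - e / T_true for e in e_u]     # exact Boltzmann populations
```

With real data the caveats in the abstract apply: optical depth flattens the diagram and non-LTE effects curve it, so the recovered temperature must be interpreted with care.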

  13. The use of real-time cell analyzer technology in drug discovery: defining optimal cell culture conditions and assay reproducibility with different adherent cellular models.

    Science.gov (United States)

    Atienzar, Franck A; Tilmant, Karen; Gerets, Helga H; Toussaint, Gaelle; Speeckaert, Sebastien; Hanon, Etienne; Depelchin, Olympe; Dhalluin, Stephane

    2011-07-01

The use of impedance-based label-free technology applied to drug discovery is nowadays receiving more and more attention. Indeed, such a simple and noninvasive assay that interferes minimally with cell morphology and function allows one to perform kinetic measurements and to obtain information on proliferation, migration, cytotoxicity, and receptor-mediated signaling. The objective of the study was to further assess the usefulness of a real-time cell analyzer (RTCA) platform based on impedance in the context of quality control and data reproducibility. The data indicate that this technology is useful to determine the best coating and cellular density conditions for different adherent cellular models including hepatocytes, cardiomyocytes, fibroblasts, and hybrid neuroblastoma/neuronal cells. Based on 31 independent experiments, the reproducibility of cell index data generated from HepG2 cells exposed to DMSO and to Triton X-100 was satisfactory, with a coefficient of variation close to 10%. Cell index data were also well reproduced when cardiomyocytes and fibroblasts were exposed to 21 compounds three times (correlation >0.91). The RTCA technology appears to be a powerful and reliable tool in drug discovery because of the reasonable throughput, rapid and efficient performance, technical optimization, and cell quality control.

  14. The Proximal Medial Sural Nerve Biopsy Model: A Standardised and Reproducible Baseline Clinical Model for the Translational Evaluation of Bioengineered Nerve Guides

    Directory of Open Access Journals (Sweden)

    Ahmet Bozkurt

    2014-01-01

Full Text Available Autologous nerve transplantation (ANT) is the clinical gold standard for the reconstruction of peripheral nerve defects. A large number of bioengineered nerve guides have been tested under laboratory conditions as an alternative to the ANT. The step from experimental studies to the implementation of the device in the clinical setting is often substantial, and the outcome is unpredictable. This is mainly linked to the heterogeneity of clinical peripheral nerve injuries, which is very different from standardized animal studies. In search of a reproducible human model for the implantation of bioengineered nerve guides, we propose the reconstruction of sural nerve defects after routine nerve biopsy as a first or baseline study. Our concept uses the medial sural nerve of patients undergoing diagnostic nerve biopsy (≥2 cm). The biopsy-induced nerve gap was immediately reconstructed by implantation of the novel microstructured nerve guide, Neuromaix, as part of an ongoing first-in-human study. Here we present (i) a detailed list of inclusion and exclusion criteria, (ii) a detailed description of the surgical procedure, and (iii) a follow-up concept with multimodal sensory evaluation techniques. The proximal medial sural nerve biopsy model can serve as a preliminary or baseline nerve lesion model. In a subsequent step, newly developed nerve guides could be tested in more unpredictable and challenging clinical peripheral nerve lesions (e.g., following trauma), which have reduced comparability due to the different nature of the injuries (e.g., site of injury and length of nerve gap).

  15. Type I supernova models vs observations

    International Nuclear Information System (INIS)

    Weaver, T.A.; Axelrod, T.S.; Woosley, S.E.

    1980-01-01

This paper explores the observational consequences of models for Type I supernovae based on the detonation (or deflagration) of the degenerate cores of white dwarfs or intermediate-mass (≈9 M☉) stars. Such nuclear burning can be initiated either at the center of the core or near its edge. The model examined in most detail is that of a 0.5 M☉ C/O white dwarf which undergoes an edge-lit He/C/O detonation after accreting 0.62 M☉ of He at 10⁻⁸ M☉/yr. The light curve resulting from this model is found to be in excellent agreement with those observed for Type I supernovae, particularly those in the fast subclass. The physical processes involved in the detailed numerical calculations which lead to this conclusion are quantitatively elucidated by simple analytic models, and the effects of uncertainties in the input physics are explored

  16. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2014-05-01

    Full Text Available The method of an interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters consists in the following. If there is the uncertainty restraint for the state values of the system, limiting the initial conditions of the system and the set of admissible values for the vector of unknown parameters and inputs, the interval existence condition for the estimations of the system state variables, containing the actual state at a given time, needs to be held valid over the whole considered time segment as well. Conditions of the interval observers design for the considered class of systems are shown. They are: limitation of the input and state, the existence of a majorizing function defining the uncertainty vector for the system, Lipschitz continuity or finiteness of this function, the existence of an observer gain with the suitable Lyapunov matrix. The main condition for design of such a device is cooperativity of the interval estimation error dynamics. An individual observer gain matrix selection problem is considered. In order to ensure the property of cooperativity for interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of the biological reactor. Possible applications of these interval estimation systems are the spheres of robust control, where the presence of various types of uncertainties in the system dynamics is assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.
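The cooperativity condition described above can be demonstrated on the simplest possible case, a scalar discrete-time system; the following is a toy sketch, not the biological-reactor observer of the paper:

```python
import random

def interval_observer(y_seq, a=0.8, l=0.3, w_bound=0.1, v_bound=0.05,
                      x0_bounds=(-1.0, 1.0)):
    """Interval observer for the scalar system x+ = a*x + w, y = x + v,
    with |w| <= w_bound and |v| <= v_bound. The error dynamics have
    gain (a - l), which must be nonnegative (cooperativity) and < 1
    (stability) for the enclosure to hold and contract."""
    assert 0 <= a - l < 1
    lo, hi = x0_bounds
    bounds = [(lo, hi)]
    for y in y_seq:
        # Luenberger-style update on both bounds; disturbance and noise
        # bounds enter with the sign that keeps the enclosure valid.
        lo = (a - l) * lo + l * y - w_bound - l * v_bound
        hi = (a - l) * hi + l * y + w_bound + l * v_bound
        bounds.append((lo, hi))
    return bounds

# Simulate the true system and collect measurements.
rng = random.Random(7)
x, xs, ys = 0.5, [0.5], []
for _ in range(100):
    ys.append(x + rng.uniform(-0.05, 0.05))      # noisy measurement of x_k
    x = 0.8 * x + rng.uniform(-0.1, 0.1)         # state update with disturbance
    xs.append(x)
env = interval_observer(ys)
```

The enclosure holds because the estimation errors evolve with the nonnegative gain a − l and so never change sign, while the interval width contracts toward (2·w_bound + 2·l·v_bound)/(1 − a + l).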

  17. A universe model confronted to observations

    International Nuclear Information System (INIS)

    Souriau, J.M.

    1982-09-01

The present work is a detailed study of a Universe model elaborated in several steps, together with some of its consequences. An absence zone in the spatial distribution of quasars is first described, and it is shown to be sufficient to determine a cosmological model. Each subsequent paragraph is concerned with one type of observation, which is confronted with the model: the age and density of the Universe, the redshift-luminosity relation for galaxies and quasars, the diameter-redshift relation for radio sources, the isotropy of the 3 K radiation, and the physics of the matter-antimatter contact zone. A possible stratification of the Universe parallel to this zone is studied in particular; absorption lines in quasar spectra are interpreted in this way, as are the Local Supercluster and the Local Group of galaxies, the orientation of galaxy HI regions, and finally the kinematics of neighbouring galaxies.

  18. Dark energy observational evidence and theoretical models

    CERN Document Server

    Novosyadlyj, B; Shtanov, Yu; Zhuk, A

    2013-01-01

The book elucidates the current state of the dark energy problem and presents the results of the authors, who work in this area. It describes the observational evidence for the existence of dark energy, the methods and results of constraining its parameters, the modeling of dark energy by scalar fields, space-times with extra spatial dimensions (especially Kaluza-Klein models), the braneworld models with a single extra dimension, as well as the problems of positive definiteness of gravitational energy in General Relativity, energy conditions, and the consequences of their violation in the presence of dark energy. This monograph is intended for science professionals, educators and graduate students specializing in general relativity, cosmology, field theory and particle physics.

  19. WhatsApp Messenger is useful and reproducible in the assessment of tibial plateau fractures: inter- and intra-observer agreement study.

    Science.gov (United States)

    Giordano, Vincenzo; Koch, Hilton Augusto; Mendes, Carlos Henrique; Bergamin, André; de Souza, Felipe Serrão; do Amaral, Ney Pecegueiro

    2015-02-01

    The aim of this study was to evaluate the inter- and intra-observer agreement in the initial diagnosis and classification, by means of plain radiographs and CT scans, of tibial plateau fractures photographed and sent via WhatsApp Messenger. The increasing popularity of smartphones has driven the development of technology for data transmission and imaging and generated a growing interest in the use of these devices as diagnostic tools. The emergence of WhatsApp Messenger technology, which is available for the various platforms used by smartphones, has led to an improvement in the quality and resolution of images sent and received. The images (plain radiographs and CT scans) were obtained from 13 cases of tibial plateau fractures using the iPhone 5 (Apple Inc., Cupertino, CA, USA) and were sent to six observers via the WhatsApp Messenger application. The observers were asked to determine the side and type of injury, the classification according to the Schatzker and the Luo classification schemes, and whether the CT scan changed the classification. The six observers independently assessed the images on two separate occasions, 15 days apart. The inter- and intra-observer agreement for both periods of the study ranged from excellent to perfect (≥0.75) for images assessed via WhatsApp Messenger. The authors now propose the systematic use of the application to facilitate faster documentation and obtaining the opinion of an experienced consultant when not on call. Finally, we think the use of WhatsApp Messenger as an adjuvant tool could be broadened to other clinical centres to assess its viability in other skeletal and non-skeletal trauma situations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Observations and Modelling of the Zodiacal Light

    Science.gov (United States)

    Kelsall, T.

    1994-12-01

    The DIRBE instrument on the COBE satellite performed a full-sky survey in ten bands covering the spectral range from 1.25 to 240 microns, and made measurements of polarization from 1.25 to 3.5 microns. These observations provide a wealth of data on the radiation from the interplanetary dust cloud (IPD). The presentation covers the observations, the model-independent findings, and the results of the extensive efforts of the DIRBE team to model the IPD. Emphasis is placed on describing the importance of correctly accounting for the IPD contribution to the observed-sky signal for the purpose of detecting the cosmic infrared background. (*) The NASA/Goddard Space Flight Center (GSFC) is responsible for the design, development, and operation of the COBE mission. GSFC is also responsible for the development of the analysis software and for the production of the mission data sets. Scientific guidance is provided by the COBE Science Working Group. The COBE program is supported by the Astrophysics Division of NASA's Office of Space Science.

  1. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Science.gov (United States)

    Nada, Rania M; Maal, Thomas J J; Breuning, K Hero; Bergé, Stefaan J; Mostafa, Yehya A; Kuijpers-Jagtman, Anne Marie

    2011-02-09

    Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three-dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. Sixteen pairs of 3D CBCT models were constructed from pre- and post-treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the two models were calculated at four regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged from 0.12 to 0.19 mm at the four regions. Voxel based image registration on both zones could be considered an accurate and reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.
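    The mean absolute distance between two superimposed surface models, used above to score registration accuracy, can be sketched as follows (synthetic point clouds and a brute-force nearest-neighbour search; all sizes and noise levels are assumptions, not the study's data):

```python
import numpy as np

# Sketch: mean absolute distance between two superimposed 3D surface
# models. Synthetic point clouds stand in for the CBCT surfaces; the
# second model is the first plus ~0.2 mm of registration noise.
rng = np.random.default_rng(3)
model_a = rng.uniform(0, 50, size=(500, 3))              # mm, synthetic
model_b = model_a + rng.normal(0, 0.2, size=(500, 3))    # perturbed copy

# for each point of A, distance to the closest point of B
d = np.linalg.norm(model_a[:, None, :] - model_b[None, :, :], axis=2)
mean_abs_dist = d.min(axis=1).mean()
print(round(mean_abs_dist, 2))   # on the order of the added noise, in mm
```

    Real pipelines use spatial indexing (k-d trees) and dense surface meshes; the brute-force version above is only meant to make the metric concrete.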

  2. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome

    OpenAIRE

    Goswami, Sonal; Samuel, Sherin; Sierra, Olga R.; Cascardi, Michele; Paré, Denis

    2012-01-01

    Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD) remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situ...

  3. Building entity models through observation and learning

    Science.gov (United States)

    Garcia, Richard; Kania, Robert; Fields, MaryAnne; Barnes, Laura

    2011-05-01

    To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit both long- and short-term temporal data. This adaptation allows robotic systems to predict and anticipate, as well as influence, future behavior of both opponents and teammates, and affords the system the ability to adjust its own behavior in order to optimize its ability to achieve the mission goals. This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purpose of this work, extraction and exploitation are performed through the use of a discretized two-dimensional game. The game consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable locations of the opponent and thus optimize their guarding locations.

  4. A Holoinformational Model of the Physical Observer

    Science.gov (United States)

    di Biase, Francisco

    2013-09-01

    The author proposes a holoinformational view of the observer based on the holonomic theory of brain/mind function and quantum brain dynamics developed by Karl Pribram, Sir John Eccles, R.L. Amoroso, Hameroff, Jibu and Yasue, and on the quantum-holographic and holomovement theory of David Bohm. This conceptual framework is integrated with the nonlocal information properties of the Quantum Field Theory of Umezawa, with the concepts of negentropy, order, and organization developed by Shannon, Wiener, Szilard and Brillouin, and with the theories of self-organization and complexity of Prigogine, Atlan, Jantsch and Kauffman. Wheeler's "it from bit" concept of a participatory universe, and the developments of the physics of information made by Zurek and others with the concepts of statistical entropy and algorithmic entropy, related to the number of bits being processed in the mind of the observer, are also considered. This new synthesis gives a self-organizing quantum nonlocal informational basis for a new model of awareness in a participatory universe. In this synthesis, awareness is conceived as meaningful quantum nonlocal information interconnecting the brain and the cosmos through a holoinformational unified field integrating the nonlocal holistic (quantum) and local (Newtonian) levels. We propose that the cosmology of the physical observer is this unified nonlocal quantum-holographic cosmos manifesting itself through awareness, interconnecting the human mind-brain, in a participatory, holistic and indivisible way, with all levels of the self-organizing holographic anthropic multiverse.

  5. Apparent diffusion coefficient measurements in diffusion-weighted magnetic resonance imaging of the anterior mediastinum: inter-observer reproducibility of five different methods of region-of-interest positioning

    Energy Technology Data Exchange (ETDEWEB)

    Priola, Adriano Massimiliano; Priola, Sandro Massimo; Parlatano, Daniela; Gned, Dario; Veltri, Andrea [San Luigi Gonzaga University Hospital, Department of Diagnostic Imaging, Regione Gonzole 10, Orbassano, Torino (Italy); Giraudo, Maria Teresa [University of Torino, Department of Mathematics "Giuseppe Peano", Torino (Italy); Giardino, Roberto; Ardissone, Francesco [San Luigi Gonzaga University Hospital, Department of Thoracic Surgery, Regione Gonzole 10, Orbassano, Torino (Italy); Ferrero, Bruno [San Luigi Gonzaga University Hospital, Department of Neurology, Regione Gonzole 10, Orbassano, Torino (Italy)

    2017-04-15

    To investigate the inter-reader reproducibility of five different region-of-interest (ROI) protocols for apparent diffusion coefficient (ADC) measurements in the anterior mediastinum. In eighty-one subjects, on ADC mapping, two readers measured the ADC using five methods of ROI positioning that encompassed either the entire tissue (whole tissue volume [WTV], three slices observer-defined [TSOD], single-slice [SS]) or more restricted areas (one small round ROI [OSR], multiple small round ROIs [MSR]). Inter-observer variability was assessed with the intraclass correlation coefficient (ICC), coefficient of variation (CoV), and Bland-Altman analysis. Nonparametric tests were performed to compare the ADC between ROI methods. The measurement time was recorded and compared between ROI methods. All methods showed excellent inter-reader agreement, with the best and worst reproducibility for WTV and OSR, respectively (ICC, 0.937/0.874; CoV, 7.3 %/16.8 %; limits of agreement, ±0.44/±0.77 × 10⁻³ mm²/s). ADC values of OSR and MSR were significantly lower compared to the other methods for both readers (p < 0.001). The SS and OSR methods required less measurement time (14 ± 2 s) compared to the others (p < 0.0001), while the WTV method required the longest measurement time (90 ± 56 and 77 ± 49 s for the two readers) (p < 0.0001). All methods demonstrate excellent inter-observer reproducibility, with the best agreement for WTV, although it requires the longest measurement time. (orig.)
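    The agreement statistics named above (Bland-Altman limits of agreement and a coefficient of variation) can be sketched on synthetic paired readings (the data, noise level, and CoV convention below are assumptions, not the study's):

```python
import numpy as np

# Sketch of two-reader agreement statistics as used in ROI ADC studies:
# Bland-Altman bias and limits of agreement, plus a within-subject CoV.
# Synthetic measurements; units are 10^-3 mm^2/s by analogy only.
rng = np.random.default_rng(0)
true_adc = rng.uniform(1.0, 2.5, size=81)            # hypothetical ADC values
reader1 = true_adc + rng.normal(0, 0.05, size=81)
reader2 = true_adc + rng.normal(0, 0.05, size=81)

diff = reader1 - reader2
mean_pair = (reader1 + reader2) / 2.0

bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                        # Bland-Altman half-width
# within-subject CoV: SD of paired differences / sqrt(2), over the grand mean
cov = 100.0 * (diff.std(ddof=1) / np.sqrt(2)) / mean_pair.mean()

print(f"bias={bias:.3f}, limits of agreement=±{loa:.3f}, CoV={cov:.1f}%")
```

    Several CoV conventions exist; the sqrt(2) form above is one common choice for paired readings and may differ from the one used in the study.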

  6. Reproducibility of brain ADC histograms

    International Nuclear Information System (INIS)

    Steens, S.C.A.; Buchem, M.A. van; Admiraal-Behloul, F.; Schaap, J.A.; Hoogenraad, F.G.C.; Wheeler-Kingshott, C.A.M.; Tofts, P.S.; Cessie, S. le

    2004-01-01

    The aim of this study was to assess the effect of differences in acquisition technique on whole-brain apparent diffusion coefficient (ADC) histogram parameters, as well as to assess scan-rescan reproducibility. Diffusion-weighted imaging (DWI) was performed in 7 healthy subjects with b-values of 0-800, 0-1000, and 0-1500 s/mm², and fluid-attenuated inversion recovery (FLAIR) DWI with b-values of 0-1000 s/mm². All sequences were repeated with and without repositioning. The peak location, peak height, and mean ADC of the ADC histograms and the mean ADC of a region of interest (ROI) in the white matter were compared using paired-sample t tests. Scan-rescan reproducibility was assessed using paired-sample t tests, and repeatability coefficients were reported. With increasing maximum b-values, ADC histograms shifted to lower values, with an increase in peak height (p<0.01). With FLAIR DWI, the ADC histogram shifted to lower values with a significantly higher, narrower peak (p<0.01), although the ROI mean ADC showed no significant differences. For scan-rescan reproducibility, no significant differences were observed. Different DWI pulse sequences give rise to different ADC histograms. With a given pulse sequence, however, ADC histogram analysis is a robust and reproducible technique. Using FLAIR DWI, the partial-voluming effect of cerebrospinal fluid, and thus its confounding effect on histogram analyses, can be reduced
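    The histogram parameters compared above (peak location, peak height, mean ADC) can be sketched on a synthetic ADC map (the bin count, value range, and ADC distribution are illustrative assumptions, not the study's acquisition):

```python
import numpy as np

# Sketch: whole-brain ADC histogram metrics from a flattened ADC map.
# Values are synthetic, in units of 10^-3 mm^2/s by analogy only.
rng = np.random.default_rng(1)
adc = rng.normal(0.8, 0.12, size=100_000)       # hypothetical voxel ADCs
adc = adc[(adc > 0) & (adc < 2.0)]              # drop non-physical values

# normalized histogram, so peak height is comparable across scans
counts, edges = np.histogram(adc, bins=200, range=(0.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

peak_height = counts.max()
peak_location = centers[counts.argmax()]
mean_adc = adc.mean()
print(f"peak at {peak_location:.2f}, height {peak_height:.2f}, mean {mean_adc:.2f}")
```

    Normalizing the histogram (density=True) is one way to make peak height comparable between scans with different brain-voxel counts.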

  7. Lagrangian Observations and Modeling of Marine Larvae

    Science.gov (United States)

    Paris, Claire B.; Irisson, Jean-Olivier

    2017-04-01

    Just within the past two decades, studies on the early-life history stages of marine organisms have led to new paradigms in population dynamics. Unlike passive plant seeds that are transported by the wind or by animals, marine larvae have motor and sensory capabilities. As a result, marine larvae have a tremendous capacity to actively influence their dispersal. This is continuously revealed as we develop new techniques to observe larvae in their natural environment and begin to understand their ability to detect cues throughout ontogeny, process the information, and use it to ride ocean currents and navigate their way back home, or to a place like home. We present innovative in situ and numerical modeling approaches developed to understand the underlying mechanisms of larval transport in the ocean. We describe a novel concept of a Lagrangian platform, the Drifting In Situ Chamber (DISC), designed to observe and quantify complex larval behaviors and their interactions with the pelagic environment. We give a brief history of larval ecology research with the DISC, showing that swimming is directional in most species, guided by cues as diverse as the position of the sun or the underwater soundscape, and even that (unlike humans!) larvae orient better and swim faster when moving as a group. The observed Lagrangian behavior of individual larvae are directly implemented in the Connectivity Modeling System (CMS), an open source Lagrangian tracking application. Simulations help demonstrate the impact that larval behavior has compared to passive Lagrangian trajectories. These methodologies are already the base of exciting findings and are promising tools for documenting and simulating the behavior of other small pelagic organisms, forecasting their migration in a changing ocean.
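    The contrast between passive Lagrangian trajectories and behaviorally directed ones, which the CMS simulations quantify, can be sketched with a minimal random-walk tracker (a toy model; the current, swimming speed, and noise amplitude are assumed values, not CMS code or output):

```python
import numpy as np

# Sketch: passive vs. behaviorally directed Lagrangian trajectories in a
# uniform current, illustrating the kind of comparison made with the CMS.
rng = np.random.default_rng(2)
dt, steps = 3600.0, 240               # 1-hour steps over 10 days
u_current = np.array([0.10, 0.0])     # m/s, eastward mean current (assumed)
v_swim    = np.array([0.0, 0.05])     # m/s, oriented northward swimming

def track(swim):
    """Integrate a 2-D position under current + swimming + random noise."""
    pos = np.zeros(2)
    for _ in range(steps):
        noise = rng.normal(0, 0.02, size=2)   # unresolved turbulence, m/s
        pos = pos + dt * (u_current + swim + noise)
    return pos

passive = track(np.zeros(2))          # purely advected particle
behaving = track(v_swim)              # larva with directional swimming
print(behaving[1] - passive[1])       # northward displacement gained, m
```

    Even a modest 5 cm/s of oriented swimming accumulates tens of kilometres of displacement over ten days, which is why behavior matters for connectivity estimates.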

  8. Ancient Chinese Observations and Modern Cometary Models

    Science.gov (United States)

    Yeomans, D. K.

    1995-12-01

    Ancient astronomical observations by Chinese, Japanese, and Korean observers represent the only data source for discerning the long-term behavior of comets. The primary source material is derived from Chinese astrologers who kept a vigilant celestial watch in an effort to issue up-to-date astrological forecasts for the reigning emperors. Surprisingly accurate records were kept on cometary apparitions with careful notes being made of an object's position, motion, size, color, and tail length. For comets Halley, Swift-Tuttle, and Tempel-Tuttle, Chinese observations have been used to model their motions over two millennia and to infer their photometric histories. One general result is that active comets must achieve an apparent magnitude of 3.5 or brighter before they become obvious naked-eye objects. For both comets Halley and Swift-Tuttle, their absolute magnitudes and hence their outgassing rates, have remained relatively constant for two millennia. Comet Halley's rocket-like outgassing has consistently delayed the comet's return to perihelion by 4 days so that the comet's spin axis must have remained stable for at least two millennia. Although its outgassing is at nearly the same rate as Halley's, comet Swift-Tuttle's motion has been unaffected by outgassing forces; this comet is likely to be ten times more massive than Halley and hence far more difficult for rocket-like forces to push it around. Although the earliest definite observations of comet Tempel-Tuttle were in 1366, the associated Leonid meteor showers have been identified as early as A.D. 902. The circumstance for each historical meteor shower and storm have been used to guide predictions for the upcoming 1998-1999 Leonid meteor displays.

  9. Models and observations of Arctic melt ponds

    Science.gov (United States)

    Golden, K. M.

    2016-12-01

    During the Arctic melt season, the sea ice surface undergoes a striking transformation from vast expanses of snow-covered ice to complex mosaics of ice and melt ponds. Sea ice albedo, a key parameter in climate modeling, is largely determined by the complex evolution of melt pond configurations. In fact, ice-albedo feedback has played a significant role in the recent declines of the summer Arctic sea ice pack. However, understanding melt pond evolution remains a challenge to improving climate projections. It has been found that as the ponds grow and coalesce, the fractal dimension of their boundaries undergoes a transition from 1 to about 2, around a critical pond size of about 100 square meters in area. As the ponds evolve they take on complex, self-similar shapes with boundaries resembling space-filling curves. I will outline how mathematical models of composite materials and statistical physics, such as percolation and Ising models, are being used to describe this evolution and predict key geometrical parameters that agree very closely with observations.
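    The perimeter-area route to a boundary fractal dimension, which underlies the reported transition from D = 1 to D ≈ 2, can be sketched for smooth synthetic "ponds" (digitized disks, for which the fit should return D near 1; the grid sizes and radii are arbitrary assumptions):

```python
import numpy as np

# Sketch: pond boundary fractal dimension from the perimeter-area
# relation P ~ sqrt(A)^D, i.e. D = 2 * slope of log P against log A.
# Smooth (disk-like) ponds give D near 1; space-filling boundaries
# push D toward 2. Only synthetic disks are used here.
def disk(radius, size):
    """Binary mask of a filled disk on a size x size grid."""
    y, x = np.mgrid[:size, :size]
    return ((x - size / 2) ** 2 + (y - size / 2) ** 2) <= radius ** 2

def area_perimeter(mask):
    """Area = pixel count; perimeter = count of exposed 4-neighbour edges."""
    area = mask.sum()
    per = np.sum(mask[1:, :] != mask[:-1, :]) + np.sum(mask[:, 1:] != mask[:, :-1])
    per += mask[0, :].sum() + mask[-1, :].sum() + mask[:, 0].sum() + mask[:, -1].sum()
    return area, per

logA, logP = [], []
for r in [8, 16, 32, 64, 128]:
    a, p = area_perimeter(disk(r, 4 * r + 8))
    logA.append(np.log(a))
    logP.append(np.log(p))

slope = np.polyfit(logA, logP, 1)[0]   # d(log P)/d(log A)
D = 2 * slope
print(round(D, 2))   # close to 1 for smooth pond boundaries
```

    Applied to binarized pond imagery across many pond sizes, the same fit reveals the transition from D ≈ 1 (small, smooth ponds) to D ≈ 2 (large, convoluted ones).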

  10. Observational constraints on Visser's cosmological model

    International Nuclear Information System (INIS)

    Alves, M. E. S.; Araujo, J. C. N. de; Miranda, O. D.; Wuensche, C. A.; Carvalho, F. C.; Santos, E. M.

    2010-01-01

    Theories of gravity for which gravitons can be treated as massive particles have presently been studied as realistic modifications of general relativity, and can be tested with cosmological observations. In this work, we study the ability of a recently proposed theory with massive gravitons, the so-called Visser theory, to explain the measurements of luminosity distance from the Union2 compilation, the most recent Type-Ia Supernovae (SNe Ia) data set, adopting the current ratio of the total density of nonrelativistic matter to the critical density (Ω_m) as a free parameter. We also combine the SNe Ia data with constraints from baryon acoustic oscillations (BAO) and cosmic microwave background (CMB) measurements. We find that, for the allowed interval of values for Ω_m, a model based on Visser's theory can produce an accelerated expansion period without any dark energy component, but the combined analysis (SNe Ia+BAO+CMB) shows that the model is disfavored when compared with the ΛCDM model.

  11. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  12. A Reliable and Reproducible Model for Assessing the Effect of Different Concentrations of α-Solanine on Rat Bone Marrow Mesenchymal Stem Cells

    Directory of Open Access Journals (Sweden)

    Adriana Ordóñez-Vásquez

    2017-01-01

    Full Text Available Alpha-solanine (α-solanine) is a glycoalkaloid present in potato (Solanum tuberosum). It has been of particular interest because of its toxicity and potential teratogenic effects that include abnormalities of the central nervous system, such as exencephaly, encephalocele, and anophthalmia. Various types of cell culture have been used as experimental models to determine the effect of α-solanine on cell physiology. The morphological changes in mesenchymal stem cells upon exposure to α-solanine have not been established. This study aimed to describe a reliable and reproducible model for assessing the structural changes induced by exposure of mouse bone marrow mesenchymal stem cells (MSCs) to different concentrations of α-solanine for 24 h. The results demonstrate that nonlethal concentrations of α-solanine (2–6 μM) changed the morphology of the cells, including an increase in the number of nucleoli, suggesting elevated protein synthesis, and the formation of spicules. In addition, treatment with α-solanine reduced the number of adherent cells and the formation of colonies in culture. Immunophenotypic characterization and staining of MSCs are proposed as a reproducible method that allows description of cells exposed to the glycoalkaloid α-solanine.

  13. Synchronized mammalian cell culture: part II--population ensemble modeling and analysis for development of reproducible processes.

    Science.gov (United States)

    Jandt, Uwe; Barradas, Oscar Platas; Pörtner, Ralf; Zeng, An-Ping

    2015-01-01

    The consideration of inherent population inhomogeneities of mammalian cell cultures is becoming increasingly important for systems biology studies and for developing more stable and efficient processes. However, variations of cellular properties across different sub-populations, and their potential effects on cellular physiology and on the kinetics of culture productivity under bioproduction conditions, have not yet been a major focus of research. Culture heterogeneity is strongly determined by the advance of the cell cycle. The assignment of cell-cycle specific cellular variations to large-scale process conditions can be optimally determined by combining (partially) synchronized cultivation under otherwise physiological conditions with subsequent population-resolved model adaptation. The first step has been achieved using the physical selection method of countercurrent flow centrifugal elutriation, recently established in our group for different mammalian cell lines, which is presented in Part I of this paper series. In this second part, we demonstrate the successful adaptation and application of a cell-cycle dependent population balance ensemble model to describe and understand synchronized bioreactor cultivations performed with two model mammalian cell lines, AGE1.HNAAT and CHO-K1. Numerical adaptation of the model to experimental data allows for the detection of phase-specific parameters and for the determination of significant variations between different phases and different cell lines. It shows that special care must be taken with regard to the sampling frequency in such oscillating cultures to minimize phase-shift (jitter) artifacts. Based on predictions of the long-term oscillation behavior of a culture depending on its start conditions, optimal elutriation setup trade-offs between high cell yields and high synchronization efficiency are proposed. © 2014 American Institute of Chemical Engineers.

  14. Preserve specimens for reproducibility

    Czech Academy of Sciences Publication Activity Database

    Krell, F.-T.; Klimeš, Petr; Rocha, L. A.; Fikáček, M.; Miller, S. E.

    2016-01-01

    Roč. 539, č. 7628 (2016), s. 168 ISSN 0028-0836 Institutional support: RVO:60077344 Keywords : reproducibility * specimen * biodiversity Subject RIV: EH - Ecology, Behaviour Impact factor: 40.137, year: 2016 http://www.nature.com/nature/journal/v539/n7628/full/539168b.html

  15. Observations and Modeling of Merging Galaxy Clusters

    Science.gov (United States)

    Golovich, Nathan Ryan

    Context: Galaxy clusters grow hierarchically with continuous accretion bookended by major merging events that release immense gravitational potential energy (as much as ~10^65 erg). This energy creates an environment for rich astrophysics. Precise measurements of the dark matter halo, intracluster medium, and galaxy population have resulted in a number of important results including dark matter constraints and explanations of the generation of cosmic rays. However, since the timescale of major mergers (~several Gyr) relegates observations of individual systems to mere snapshots, these results are difficult to understand under a consistent dynamical framework. While computationally expensive simulations are vital in this regard, the vastness of parameter space has necessitated simulations of idealized mergers that are unlikely to capture the full richness. Merger speeds, geometries, and timescales each have a profound consequential effect, but even these simple dynamical properties of the mergers are often poorly understood. A method to identify and constrain the best systems for probing the rich astrophysics of merging clusters is needed. Such a method could then be utilized to prioritize observational follow up and best inform proper exploration of dynamical phase space. Task: In order to identify and model a large number of systems, in this dissertation, we compile an ensemble of major mergers each containing radio relics. We then complete a pan-chromatic study of these 29 systems including wide field optical photometry, targeted optical spectroscopy of member galaxies, radio, and X-ray observations. We use the optical observations to model the galaxy substructure and estimate line of sight motion. In conjunction with the radio and X-ray data, these substructure models helped elucidate the most likely merger scenario for each system and further constrain the dynamical properties of each system.
We demonstrate the power of this technique through detailed analyses

  16. Hyper-Resolution Global Land Surface Model at Regional-to-Local Scales with observed Groundwater data assimilation

    OpenAIRE

    Singh, Raj Shekhar

    2014-01-01

    Modeling groundwater is challenging: it is not readily visible and is difficult to measure, with limited sets of observations available. Even though groundwater models can reproduce water table and head variations, considerable drift in modeled land surface states can nonetheless result from partially known geologic structure, errors in the input forcing fields, and imperfect Land Surface Model (LSM) parameterizations. These models frequently have biased results that are very different from o...

  17. A CRPS-IgG-transfer-trauma model reproducing inflammatory and positive sensory signs associated with complex regional pain syndrome.

    Science.gov (United States)

    Tékus, Valéria; Hajna, Zsófia; Borbély, Éva; Markovics, Adrienn; Bagoly, Teréz; Szolcsányi, János; Thompson, Victoria; Kemény, Ágnes; Helyes, Zsuzsanna; Goebel, Andreas

    2014-02-01

    The aetiology of complex regional pain syndrome (CRPS), a highly painful, usually post-traumatic condition affecting the limbs, is unknown, but recent results have suggested an autoimmune contribution. To confirm a role for pathogenic autoantibodies, we established a passive-transfer trauma model. Prior to undergoing incision of hind limb plantar skin and muscle, mice were injected either with serum IgG obtained from chronic CRPS patients or matched healthy volunteers, or with saline. Unilateral hind limb plantar skin and muscle incision was performed to induce typical, mild tissue injury. Mechanical hyperalgesia, paw swelling, heat and cold sensitivity, weight-bearing ability, locomotor activity, motor coordination, paw temperature, and body weight were investigated for 8 days. After sacrifice, proinflammatory sensory neuropeptides and cytokines were measured in paw tissues. CRPS patient IgG treatment significantly increased hind limb mechanical hyperalgesia and oedema in the incised paw compared with IgG from healthy subjects or saline. Plantar incision induced a remarkable elevation of substance P immunoreactivity on day 8, which was significantly increased by CRPS-IgG. In this IgG-transfer-trauma model for CRPS, serum IgG from chronic CRPS patients induced clinical and laboratory features resembling the human disease. These results support the hypothesis that autoantibodies may contribute to the pathophysiology of CRPS, and that autoantibody-removing therapies may be effective treatments for long-standing CRPS. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  18. Reproducibility of ultrasonic testing

    International Nuclear Information System (INIS)

    Lecomte, J.-C.; Thomas, Andre; Launay, J.-P.; Martin, Pierre

    The reproducibility of amplitude readings for both artificial and natural reflectors was studied for several combinations of instrument and search unit, all of the same type. This study shows that, in industrial inspection using a range of standardized equipment, a margin of error of about 6 decibels has to be taken into account (95% confidence interval). This margin is about 4 to 5 dB for natural or artificial defects located in the central area and about 6 to 7 dB for artificial defects located on the back surface. This lack of reproducibility seems to be attributable first to the search unit and then to the instrument and operator. These results were confirmed by analysis of calibration data obtained from 250 tests performed by 25 operators under shop conditions. The margin of error was higher than the 6 dB obtained in the study [fr

  19. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Directory of Open Access Journals (Sweden)

    Rania M Nada

    Full Text Available Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three-dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. Sixteen pairs of 3D CBCT models were constructed from pre- and post-treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the two models were calculated at four regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged from 0.12 to 0.19 mm at the four regions. Voxel based image registration on both zones could be considered an accurate and reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.

  20. Reproducibility of haemodynamical simulations in a subject-specific stented aneurysm model--a report on the Virtual Intracranial Stenting Challenge 2007.

    Science.gov (United States)

    Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F

    2008-07-19

    This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. The results demonstrate the ability of current strategies in consistently quantifying the performance of three commercial intracranial stents, and contribute to reinforce the confidence in haemodynamical simulation, thus taking a step forward towards the introduction of simulation tools to support diagnostics and

  1. Planck intermediate results XXIX. All-sky dust modelling with Planck, IRAS, and WISE observations

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.

    2016-01-01

    We present all-sky modelling of the high resolution Planck, IRAS, and WISE infrared (IR) observations using the physical dust model presented by Draine & Li in 2007 (DL, ApJ, 657, 810). We study the performance and results of this model, and discuss implications for future dust modelling....... The present work extends the DL dust modelling carried out on nearby galaxies using Herschel and Spitzer data to Galactic dust emission. We employ the DL dust model to generate maps of the dust mass surface density Sigma(Md), the dust optical extinction A(V), and the starlight intensity heating the bulk...... of the dust, parametrized by U-min. The DL model reproduces the observed spectral energy distribution (SED) satisfactorily over most of the sky, with small deviations in the inner Galactic disk and in low ecliptic latitude areas, presumably due to zodiacal light contamination. In the Andromeda galaxy (M31...

  2. Observation and modelling of the Fe XXI line profile observed by IRIS during the impulsive phase of flares

    Science.gov (United States)

    Polito, V.; Testa, P.; De Pontieu, B.; Allred, J. C.

    2017-12-01

    The observation of the high temperature (above 10 MK) Fe XXI 1354.1 A line with the Interface Region Imaging Spectrograph (IRIS) has provided significant insights into the chromospheric evaporation process in flares. In particular, the line is often observed to be completely blueshifted, in contrast to previous observations at lower spatial and spectral resolution, and in agreement with predictions from theoretical models. Interestingly, the line is also observed to be mostly symmetric and with a large excess above the thermal width. One popular interpretation for the excess broadening is given by assuming a superposition of flows from different loop strands. In this work, we perform a statistical analysis of Fe XXI line profiles observed by IRIS during the impulsive phase of flares and compare our results with hydrodynamic simulations of multi-thread flare loops performed with the 1D RADYN code. Our results indicate that the multi-thread models cannot easily reproduce the symmetry of the line and that some other physical process might need to be invoked in order to explain the observed profiles.
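The multi-thread superposition picture can be illustrated with a toy calculation: each strand contributes a Gaussian component at its own Doppler shift, and the summed profile comes out both blueshifted and broadened. All velocities and widths below are invented illustrative values, not output of the RADYN simulations:

```python
import numpy as np

wav = np.linspace(1353.0, 1355.0, 2001)   # wavelength grid (Angstrom)
rest = 1354.1                             # Fe XXI rest wavelength (Angstrom)
c = 3.0e5                                 # speed of light (km/s)

# one Gaussian component per loop strand, each with its own upflow velocity
velocities = [-200.0, -120.0, -60.0, -20.0]   # blueshifts (km/s), illustrative
width = 0.25                                  # single-component 1/e width (Angstrom)

profile = sum(np.exp(-((wav - rest * (1 + v / c)) ** 2) / (2 * width ** 2))
              for v in velocities)

# centroid and effective width of the summed profile
centroid = np.sum(wav * profile) / np.sum(profile)
eff_width = np.sqrt(np.sum((wav - centroid) ** 2 * profile) / np.sum(profile))
```

The summed profile is blueshifted (centroid below the rest wavelength) and wider than any single component, but a superposition of discrete flows is generically asymmetric, which is the tension with the observed symmetric profiles that the abstract points to.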

  3. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes......, as well as overall preference, was based on consistency tests of binary paired-comparison judgments and on modeling the choice frequencies using probabilistic choice models. As a result, the preferences of non-expert listeners could be measured reliably at a ratio scale level. Principal components derived...
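Choice frequencies from binary paired comparisons are typically modelled with probabilistic choice models such as Bradley-Terry. A minimal fitting sketch using the classic MM (Zermelo) iteration is shown below; it is illustrative only, and the study's actual models and data differ:

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry 'worth' parameters by the MM (Zermelo) iteration.
    wins[i, j] = number of times item i was preferred over item j."""
    n = wins.shape[0]
    w = np.ones(n)
    comparisons = wins + wins.T          # total comparisons per pair
    for _ in range(n_iter):
        for i in range(n):
            denom = sum(comparisons[i, j] / (w[i] + w[j])
                        for j in range(n) if j != i)
            w[i] = wins[i].sum() / denom
        w /= w.sum()                     # normalise for identifiability
    return w
```

Given such worth parameters, preferences measured from paired-comparison judgments can be placed on a ratio scale, as the abstract describes.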

  4. Characterization of the Sahelian-Sudan rainfall based on observations and regional climate models

    Science.gov (United States)

    Salih, Abubakr A. M.; Elagib, Nadir Ahmed; Tjernström, Michael; Zhang, Qiong

    2018-04-01

    The African Sahel region is known to be highly vulnerable to climate variability and change. We analyze rainfall in the Sahelian Sudan in terms of distribution of rain-days and amounts, and examine whether regional climate models can capture these rainfall features. Three regional models, namely the Regional Model (REMO), the Rossby Center Atmospheric Model (RCA) and the Regional Climate Model (RegCM4), are evaluated against gridded observations (Climate Research Unit, Tropical Rainfall Measuring Mission, and ERA-interim reanalysis) and rain-gauge data from six arid and semi-arid weather stations across Sahelian Sudan over the period 1989 to 2008. Most of the observed rain-days are characterized by weak (0.1-1.0 mm/day) to moderate (> 1.0-10.0 mm/day) rainfall, with average frequencies of 18.5% and 48.0% of the total annual rain-days, respectively. Although very strong rainfall events (> 30.0 mm/day) occur rarely, they account for a large fraction of the total annual rainfall (28-42% across the stations). The performance of the models varies both spatially and temporally. RegCM4 most closely reproduces the observed annual rainfall cycle, especially for the more arid locations, but all three models fail to capture the strong rainfall events and hence underestimate their contribution to the total annual number of rain-days and rainfall amount. However, excessive moderate rainfall compensates for this underestimation in the models in an annual average sense. The present study uncovers some of the models' limitations in skillfully reproducing the observed climate over dry regions; it will aid model users in recognizing the uncertainties in the model output, and will help the climate and hydrological modeling communities to improve their models.
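The rain-day bookkeeping described above (frequencies of weak and moderate rain-days, and the share of total rainfall contributed by very strong events) can be sketched as follows. The thresholds are taken from the abstract; the function name and return keys are assumptions:

```python
import numpy as np

def rainday_stats(daily_mm, rain_threshold=0.1):
    """Classify daily rainfall into weak/moderate/very strong classes and
    compute each class's share of rain-days and of the total amount."""
    daily = np.asarray(daily_mm, dtype=float)
    rain = daily[daily >= rain_threshold]        # keep rain-days only
    weak = (rain >= 0.1) & (rain <= 1.0)         # 0.1-1.0 mm/day
    moderate = (rain > 1.0) & (rain <= 10.0)     # >1.0-10.0 mm/day
    very_strong = rain > 30.0                    # >30.0 mm/day
    return {
        "weak_frac_of_raindays": weak.sum() / rain.size,
        "moderate_frac_of_raindays": moderate.sum() / rain.size,
        "very_strong_frac_of_amount": rain[very_strong].sum() / rain.sum(),
    }
```

Applied to station or model time series alike, this makes it easy to check whether a model reproduces both the frequency distribution of rain-days and the amount contributed by rare heavy events.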

  5. Should we trust models or observations

    International Nuclear Information System (INIS)

    Ellsaesser, H.W.

    1982-01-01

    Scientists and laymen alike already trust observational data more than theories; this is made explicit in all formalizations of the scientific method. It was demonstrated again during the Supersonic Transport (SST) controversy by the continued efforts to reconcile the computed effect of the 1961-62 nuclear test series on the ozone layer with the observational record. Scientists, caught in the focus of the political limelight, sometimes demonstrated their faith in the primacy of observations by studiously ignoring, or dismissing as erroneous, data at variance with the prevailing theoretical consensus, thereby stalling the theoretical modifications required to accommodate the observations. (author)

  6. Reproducible research in palaeomagnetism

    Science.gov (United States)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. 
We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined
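The design PuffinPlot aims at, one core library of analysis routines driven by both an interactive front end and a replayable batch script, can be sketched in miniature. The routine and command-line interface below are hypothetical illustrations of the pattern, not PuffinPlot's actual API:

```python
import argparse
import math

def fisher_mean(decs, incs):
    """Toy stand-in for a core palaeomagnetic routine: the Fisher mean
    direction of (declination, inclination) pairs, in degrees."""
    x = sum(math.cos(math.radians(i)) * math.cos(math.radians(d))
            for d, i in zip(decs, incs))
    y = sum(math.cos(math.radians(i)) * math.sin(math.radians(d))
            for d, i in zip(decs, incs))
    z = sum(math.sin(math.radians(i)) for i in incs)
    r = math.sqrt(x * x + y * y + z * z)
    return (math.degrees(math.atan2(y, x)) % 360.0,
            math.degrees(math.asin(z / r)))

def main(argv=None):
    """Batch entry point: the same core routine, driven by a script."""
    parser = argparse.ArgumentParser(description="mean direction (batch mode)")
    parser.add_argument("values", nargs="+", type=float,
                        help="dec1 inc1 dec2 inc2 ...")
    args = parser.parse_args(argv)
    decs, incs = args.values[0::2], args.values[1::2]
    print(fisher_mean(decs, incs))

# A real batch tool would end with:
#     if __name__ == "__main__":
#         main()
```

An interactive GUI would call `fisher_mean` directly while the user explores the data; a workflow script invokes `main` with recorded arguments, so the same analysis can be replayed automatically from the raw data.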

  7. Examination of reproducibility in microbiological degradation experiments

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Spliid, Henrik; Holst, Helle

    1998-01-01

    Experimental data indicate that certain microbiological degradation experiments have a limited reproducibility. Nine identical batch experiments were carried out on 3 different days to examine reproducibility. A pure culture, isolated from soil, grew with toluene as the only carbon and energy source. Toluene was degraded under aerobic conditions at a constant temperature of 28°C. The experiments were modelled by a Monod model, extended to account for the air/liquid system, and the parameter values were estimated using a statistical nonlinear estimation procedure. Model reduction analysis resulted in a simpler model without the biomass decay term. In order to test for model reduction and reproducibility of parameter estimates, a likelihood ratio test was employed. The limited reproducibility of these experiments implied that all 9 batch experiments could not be described by the same set...
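A minimal forward simulation of the Monod kinetics underlying such a fit is sketched below. The parameter values are illustrative assumptions; the study's extended air/liquid model, biomass decay term, and statistical estimation procedure are not reproduced here:

```python
from scipy.integrate import solve_ivp

def monod_rhs(t, y, mu_max, Ks, Y):
    """Monod growth on a single substrate (toluene).
    y = [S, X]: substrate and biomass concentrations (mg/L)."""
    S, X = y
    mu = mu_max * S / (Ks + S)      # specific growth rate (1/h)
    return [-mu * X / Y,            # substrate consumed
            mu * X]                 # biomass produced

# illustrative parameters: mu_max = 0.4 1/h, Ks = 2 mg/L, yield Y = 0.7 mg/mg
sol = solve_ivp(monod_rhs, (0.0, 48.0), [20.0, 0.5], args=(0.4, 2.0, 0.7))
S_end, X_end = sol.y[0, -1], sol.y[1, -1]
```

Fitting `mu_max`, `Ks`, and `Y` to the nine batch time series by nonlinear least squares, and comparing full and reduced models with a likelihood ratio test, would mirror the procedure the abstract describes.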

  8. Modelling impacts of performance on the probability of reproducing, and thereby on productive lifespan, allow prediction of lifetime efficiency in dairy cows.

    Science.gov (United States)

    Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C

    2016-01-01

    Reproductive success is a key component of the lifetime efficiency of cows, defined as the ratio of energy in milk (MJ) to energy intake (MJ) over the lifespan. At the animal level, breeding and feeding management can substantially impact milk yield, body condition and energy balance of cows, which are known as major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes, due to changing breeding and feeding strategies, have on the probability of reproducing and thereby on the productive lifespan, thus allowing the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets, and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein that were fed high- v. low-concentrate diets, were used. Generally, the model consistently simulated productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was comparably simulated, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects. Further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.

  9. Retrospective Correction of Physiological Noise: Impact on Sensitivity, Specificity, and Reproducibility of Resting-State Functional Connectivity in a Reading Network Model.

    Science.gov (United States)

    Krishnamurthy, Venkatagiri; Krishnamurthy, Lisa C; Schwam, Dina M; Ealey, Ashley; Shin, Jaemin; Greenberg, Daphne; Morris, Robin D

    2018-03-01

    It is well accepted that physiological noise (PN) obscures the detection of neural fluctuations in resting-state functional connectivity (rsFC) magnetic resonance imaging. However, a clear consensus on an optimal PN correction (PNC) methodology, and on how it impacts the rsFC signal characteristics, is still lacking. In this study, we probe the impact of three PNC methods: RETROICOR (Glover et al., 2000), ANATICOR (Jo et al., 2010), and RVTMBPM (Bianciardi et al., 2009). Using a reading network model, we systematically explore the effects of PNC optimization on sensitivity, specificity, and reproducibility of rsFC signals. In terms of specificity, ANATICOR was found to be effective in removing local white matter (WM) fluctuations but also resulted in aggressive removal of expected cortical-to-subcortical functional connections. The ability of RETROICOR to remove PN was equivalent to removal of simulated random PN, such that it artificially inflated the connection strength, thereby decreasing sensitivity. RVTMBPM maintained specificity and sensitivity by balanced removal of vasodilatory PN and local WM nuisance edges. Another aspect of this work was exploring the effects of PNC on identifying reading group differences. Most PNC methods accounted for between-subject PN variability, resulting in reduced intersession reproducibility. This effect facilitated the detection of the most consistent group differences. RVTMBPM was most effective in detecting significant group differences due to its inherent sensitivity to removing spatially structured and temporally repeating PN arising from dense vasculature. Finally, results suggest that combining all three PNC methods resulted in "overcorrection" by removing signal along with noise.
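All three correction schemes share the same core operation: projecting nuisance regressors (cardiac/respiratory phase terms, local WM averages, or respiration-volume-per-time traces) out of each voxel time series. A generic sketch of that step, not any specific package's implementation, is:

```python
import numpy as np

def regress_out(timeseries, nuisance):
    """Remove nuisance regressors from a voxel time series by OLS.
    timeseries: (T,) array; nuisance: (T, k) array of noise regressors."""
    X = np.column_stack([np.ones(len(timeseries)), nuisance])  # add intercept
    beta, *_ = np.linalg.lstsq(X, timeseries, rcond=None)      # OLS fit
    return timeseries - X @ beta                               # residual = cleaned series
```

The methods differ mainly in how the nuisance matrix is built (physiological phase expansions for RETROICOR, local WM signals for ANATICOR, respiration and cardiac-rate transfer functions for RVTMBPM), which is what drives the sensitivity/specificity trade-offs reported above.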

  10. Microbial community development in a dynamic gut model is reproducible, colon region specific, and selective for Bacteroidetes and Clostridium cluster IX.

    Science.gov (United States)

    Van den Abbeele, Pieter; Grootaert, Charlotte; Marzorati, Massimo; Possemiers, Sam; Verstraete, Willy; Gérard, Philippe; Rabot, Sylvie; Bruneau, Aurélia; El Aidy, Sahar; Derrien, Muriel; Zoetendal, Erwin; Kleerebezem, Michiel; Smidt, Hauke; Van de Wiele, Tom

    2010-08-01

    Dynamic, multicompartment in vitro gastrointestinal simulators are often used to monitor gut microbial dynamics and activity. These reactors need to harbor a microbial community that is stable upon inoculation, colon region specific, and relevant to in vivo conditions. Together with the reproducibility of the colonization process, these criteria are often overlooked when the modulatory properties from different treatments are compared. We therefore investigated the microbial colonization process in two identical simulators of the human intestinal microbial ecosystem (SHIME), simultaneously inoculated with the same human fecal microbiota with a high-resolution phylogenetic microarray: the human intestinal tract chip (HITChip). Following inoculation of the in vitro colon compartments, microbial community composition reached steady state after 2 weeks, whereas 3 weeks were required to reach functional stability. This dynamic colonization process was reproducible in both SHIME units and resulted in highly diverse microbial communities which were colon region specific, with the proximal regions harboring saccharolytic microbes (e.g., Bacteroides spp. and Eubacterium spp.) and the distal regions harboring mucin-degrading microbes (e.g., Akkermansia spp.). Importantly, the shift from an in vivo to an in vitro environment resulted in an increased Bacteroidetes/Firmicutes ratio, whereas Clostridium cluster IX (propionate producers) was enriched compared to clusters IV and XIVa (butyrate producers). This was supported by proportionally higher in vitro propionate concentrations. In conclusion, high-resolution analysis of in vitro-cultured gut microbiota offers new insight on the microbial colonization process and indicates the importance of digestive parameters that may be crucial in the development of new in vitro models.

  11. Towards Reproducibility in Computational Hydrology

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-04-01

    Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, to evolve (or reject) hypotheses and models of how environmental systems function, and to move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produce published results are not regularly made available, and even if they are made available, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al 2016 [1], we argue that a cultural change is required in the computational hydrological community, in order to advance and make more robust the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-useable; (2) create well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as

  12. Observations and Numerical Modeling of the Jovian Ribbon

    Science.gov (United States)

    Cosentino, R. G.; Simon, A.; Morales-Juberias, R.; Sayanagi, K. M.

    2015-01-01

    Multiple wavelength observations made by the Hubble Space Telescope in early 2007 show the presence of a wavy, high-contrast feature in Jupiter's atmosphere near 30 degrees North. The "Jovian Ribbon," best seen at 410 nanometers, irregularly undulates in latitude and is time-variable in appearance. A meridional intensity gradient algorithm was applied to the observations to track the Ribbon's contour. Spectral analysis of the contour revealed that the Ribbon's structure is a combination of several wavenumbers ranging from k equals 8-40. The Ribbon is a dynamic structure whose dominant wavenumbers have been observed to vary in spectral power over a time period of one month. The presence of the Ribbon correlates with periods when the velocity of the westward jet at the same location is highest. We conducted numerical simulations to investigate the stability of westward jets of varying speed, vertical shear, and background static stability to different perturbations. A Ribbon-like morphology was best reproduced with a 35 meters per second westward jet that decreases in amplitude for pressures greater than 700 hectopascals and a background static stability of N equals 0.005 per second, perturbed by heat pulses constrained to latitudes south of 30 degrees North. Additionally, the simulated feature had wavenumbers that qualitatively matched observations and evolved throughout the simulation, reproducing the Jovian Ribbon's dynamic structure.
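The wavenumber decomposition of a tracked contour can be illustrated with a synthetic example: a latitude-versus-longitude contour built from known wavenumbers, recovered by Fourier analysis. All numbers below are invented for illustration, not Ribbon data:

```python
import numpy as np

# synthetic ribbon contour: latitude (deg) as a function of longitude
lon = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
contour = 30.0 + 1.0 * np.sin(8 * lon) + 0.5 * np.sin(20 * lon)

# power spectrum over zonal wavenumber k (mean removed to suppress k = 0)
power = np.abs(np.fft.rfft(contour - contour.mean())) ** 2
dominant = np.argsort(power)[::-1][:2]   # rfft bin index = wavenumber here
```

Repeating this on contours tracked at successive epochs is one way the month-to-month variation of the dominant wavenumbers could be quantified.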

  13. Television Advertising and Children's Observational Modeling.

    Science.gov (United States)

    Atkin, Charles K.

    This paper assesses advertising effects on children and adolescents from a social learning theory perspective, emphasizing imitative performance of vicariously reinforced consumption stimuli. The basic elements of social psychologist Albert Bandura's modeling theory are outlined. Then specific derivations from the theory are applied to the problem…

  14. Reproducing early Martian atmospheric carbon dioxide partial pressure by modeling the formation of Mg-Fe-Ca carbonate identified in the Comanche rock outcrops on Mars

    Science.gov (United States)

    Berk, Wolfgang; Fu, Yunjiao; Ilger, Jan-Michael

    2012-10-01

    The well-defined composition of the Comanche rock's carbonate (Magnesite0.62 Siderite0.25 Calcite0.11 Rhodochrosite0.02) and its host rock's composition, dominated by Mg-rich olivine, enable us to reproduce the atmospheric CO2 partial pressure that may have triggered the formation of these carbonates. Hydrogeochemical one-dimensional transport modeling reveals that similar aqueous rock alteration conditions (including CO2 partial pressure) may have led to the formation of the Mg-Fe-Ca carbonate identified in the Comanche rock outcrops (Gusev Crater) and also in the ultramafic rocks exposed in the Nili Fossae region. Hydrogeochemical conditions enabling the formation of Mg-rich solid solution carbonate result from equilibrium species distributions involving (1) ultramafic rocks (ca. 32 wt% olivine; Fo0.72 Fa0.28), (2) pure water, and (3) CO2 partial pressures of ca. 0.5 to 2.0 bar at water-to-rock ratios of ca. 500 mol H2O per mol rock and ca. 5°C (278 K). Our modeled carbonate composition (Magnesite0.64 Siderite0.28 Calcite0.08) matches the measured composition of carbonates preserved in the Comanche rocks. Considerably different carbonate compositions are achieved at (1) higher temperature (85°C), (2) water-to-rock ratios considerably higher and lower than 500 mol per mol, and (3) CO2 partial pressures differing from 1.0 bar in the model set-up. The Comanche rocks hosting the carbonate may have been subjected to long-lasting (>10^4 to 10^5 years) aqueous alteration processes triggered by atmospheric CO2 partial pressures of ca. 1.0 bar at low temperature. Their outcrop may represent a fragment of the upper layers of an altered olivine-rich rock column, which is characterized by newly formed Mg-Fe-Ca solid solution carbonate and phyllosilicate-rich alteration assemblages within deeper (unexposed) units.

  15. Meteor head echoes - observations and models

    Directory of Open Access Journals (Sweden)

    A. Pellinen-Wannberg

    2005-01-01

    Full Text Available Meteor head echoes - instantaneous echoes moving with the velocities of the meteors - have been recorded since 1947. Despite many attempts, this phenomenon lacked a comprehensive theory for over four decades. The High Power and Large Aperture (HPLA) features, combined with the present signal processing and data storage capabilities of incoherent scatter radars, may finally provide an explanation for the old riddle. The meteoroid's passage through the radar beam can be followed with a simultaneous spatial and temporal resolution of the order of 100 m and milliseconds. The current views of the meteor head echo process will be presented and discussed. These will be related to various EISCAT observations, such as dual-frequency target sizes, altitude distributions and vector velocities.

  16. Spectrophotometric Modeling of MAHLI Goniometer Observations

    Science.gov (United States)

    Liang, W.; Johnson, J. R.; Hayes, A.; Lemmon, M. T.; Bell, J. F., III; Grundy, W. M.; Deen, R. G.

    2017-12-01

    The Mars Hand Lens Imager (MAHLI) on the Curiosity rover's robotic arm was used as a goniometer to acquire a multiple-viewpoint data set on sol 544 [1]. Images were acquired at 20 arm positions, all centered at the same location and from a near-constant distance of 1.0 m from the surface. Although this sequence was acquired at only one time of day (about 13:30 LTST), it provided phase angle coverage from 0-110°. Images were converted to radiance from calibrated PDS files (DRXX) using radiance scaling factors and MAHLI focus position counts in an algorithm that rescaled the data to match the Mastcam M-34 calibration via comparison of sky images acquired during the mission. Converted MAHLI radiance values from an image of the Mastcam calibration target compared favorably in the red, green, and blue Bayer filters to M-34 radiance values from an image of the same target taken minutes afterwards. The 20 MAHLI images allowed construction of a digital terrain model (DTM), although images with shadows cast by the rover arm were more challenging to include. Their current absence restricts the lowest phase angles available to about 17°. The DTM enables calculation of surface normals that can be used with sky models to correct for diffuse reflectance on surface facets prior to Hapke modeling [cf. 2-6]. Regions of interest (ROIs) were extracted using one of the low emission-angle images as a template. ROI unit types included soils, light-toned surfaces (5 cm felsic rock "Nita"), dark-toned rocks with variable textures and dust cover, and larger areas representative of the average surface (see attached figure). These ROIs were translated from the template image to the other images through a matching of DTM three-dimensional coordinates. Preliminary phase curves (prior to atmospheric correction) show that soil-dominated surfaces are most backscattering, whereas rocks are least backscattering, and light-toned surfaces exhibit wavelength-dependent scattering. Future work will

  17. Detectability of low contrast in CT. Comparison between human observers and an observer model

    International Nuclear Information System (INIS)

    Hernandez-Giron, I.; Geleijins, J.; Calzado, A.; Joemai, R. M. S.; Veldkamp, W. J. H.

    2013-01-01

    The objective of this work is to study real CT images, and other simulated low-contrast-detail images with white noise, using a model observer. The results are compared with those obtained in a similar experiment by human observers. (Author)

  18. The Seasonal cycle of the Tropical Lower Stratospheric Water Vapor in Chemistry-Climate Models in Comparison with Observations

    Science.gov (United States)

    Wang, X.; Dessler, A. E.

    2017-12-01

    The seasonal cycle is one of the key features of tropical lower stratospheric water vapor, so it is important that climate models reproduce it. In this analysis, we evaluate how well the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM) and the Whole Atmosphere Community Climate Model (WACCM) reproduce the seasonal cycle of tropical lower stratospheric water vapor. We do this by comparing the models to observations from the Microwave Limb Sounder (MLS) and the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis (ERAi). We also evaluate whether the chemistry-climate models (CCMs) reproduce the key transport and dehydration processes that regulate the seasonal cycle, using a forward, domain-filling, diabatic trajectory model. Finally, we explore the changes of the seasonal cycle during the 21st century in the two CCMs. Our results show general agreement in the seasonal cycles from the MLS, the ERAi, and the CCMs. Despite this agreement, there are some clear disagreements between the models and the observations on the details of transport and dehydration in the TTL. Both CCMs predict a moister seasonal cycle by the end of the 21st century, but they disagree on the change in seasonal amplitude, which is predicted to increase in the GEOSCCM and decrease in the WACCM.
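A standard way to quantify such a seasonal cycle, for model output or MLS/ERAi climatologies alike, is to fit an annual harmonic to the monthly means; its amplitude and phase can then be compared between data sets. The climatology below is an invented toy, not MLS data:

```python
import numpy as np

months = np.arange(12)
h2o = 3.0 + 0.8 * np.cos(2.0 * np.pi * (months - 9) / 12.0)  # toy ppmv climatology

# least-squares fit of mean + annual harmonic
design = np.column_stack([np.ones(12),
                          np.cos(2.0 * np.pi * months / 12.0),
                          np.sin(2.0 * np.pi * months / 12.0)])
coef, *_ = np.linalg.lstsq(design, h2o, rcond=None)
amplitude = np.hypot(coef[1], coef[2])                                # seasonal amplitude (ppmv)
phase_month = (np.degrees(np.arctan2(coef[2], coef[1])) / 30.0) % 12  # month of maximum
```

Comparing `amplitude` between a present-day and an end-of-century climatology is the kind of diagnostic on which the two CCMs are reported to disagree.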

  19. Peru 2007 tsunami runup observations and modeling

    Science.gov (United States)

    Fritz, H. M.; Kalligeris, N.; Borrero, J. C.

    2008-05-01

    On 15 August 2007, an earthquake with a moment magnitude (Mw) of 8.0, centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed in the immediate aftermath and investigated the tsunami effects at 51 sites. The largest runup heights were measured in a sparsely populated desert area south of the Paracas Peninsula, resulting in only 3 tsunami fatalities. Numerical modeling of the earthquake source and tsunami suggests that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the presence of the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. The coast of Peru has experienced numerous deadly and destructive tsunamis throughout history, which highlights the importance of ongoing tsunami awareness and education efforts in the region. The Peru tsunami is compared against recent mega-disasters such as the 2004 Indian Ocean tsunami and Hurricane Katrina.

  20. Modeling active region transient brightenings observed with X-ray telescope as multi-stranded loops

    Energy Technology Data Exchange (ETDEWEB)

    Kobelski, Adam R.; McKenzie, David E. [Department of Physics, P.O. Box 173840, Montana State University, Bozeman, MT 59717-3840 (United States); Donachie, Martin, E-mail: kobelski@solar.physics.montana.edu [University of Glasgow, Glasgow, G128QQ, Scotland (United Kingdom)

    2014-05-10

    Strong evidence exists that coronal loops as observed in extreme ultraviolet and soft X-rays may not be monolithic isotropic structures, but can often be more accurately modeled as bundles of independent strands. Modeling the observed active region transient brightenings (ARTBs) within this framework allows for the exploration of the energetic ramifications and characteristics of these stratified structures. Here we present a simple method of detecting and modeling ARTBs observed with the Hinode X-Ray Telescope (XRT) as groups of zero-dimensional strands, which allows us to probe parameter space to better understand the spatial and temporal dependence of strand heating in impulsively heated loops. This partially automated method can be used to analyze a large number of observations to gain a statistical insight into the parameters of coronal structures, including the number of heating events required in a given model to fit the observations. In this article, we present the methodology and demonstrate its use in detecting and modeling ARTBs in a sample data set from Hinode/XRT. These initial results show that, in general, multiple heating events are necessary to reproduce observed ARTBs, but the spatial dependence of these heating events cannot yet be established.
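The zero-dimensional multi-strand picture can be sketched as a superposition of impulsively heated strands, each with an instantaneous rise and exponential cooling. The heating times, amplitudes, and cooling form below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def strand_lightcurve(t, t_heat, tau_cool, amp):
    """One zero-dimensional strand: impulsive heating at t_heat followed
    by exponential cooling with timescale tau_cool (seconds)."""
    lc = np.zeros_like(t)
    on = t >= t_heat
    lc[on] = amp * np.exp(-(t[on] - t_heat) / tau_cool)
    return lc

t = np.linspace(0.0, 600.0, 601)            # time grid (seconds)
heat_times = [0.0, 60.0, 150.0, 300.0]      # staggered heating events
total = sum(strand_lightcurve(t, t0, 120.0, 1.0) for t0 in heat_times)
```

Adjusting the number and timing of such events until the summed curve matches an observed ARTB lightcurve is, in spirit, what the partially automated XRT analysis does; as in the abstract's conclusion, several overlapping events are generally needed before the summed curve exceeds what a single strand can produce.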

  1. Modeling active region transient brightenings observed with X-ray telescope as multi-stranded loops

    International Nuclear Information System (INIS)

    Kobelski, Adam R.; McKenzie, David E.; Donachie, Martin

    2014-01-01

    Strong evidence exists that coronal loops as observed in extreme ultraviolet and soft X-rays may not be monolithic isotropic structures, but can often be more accurately modeled as bundles of independent strands. Modeling the observed active region transient brightenings (ARTBs) within this framework allows for the exploration of the energetic ramifications and characteristics of these stratified structures. Here we present a simple method of detecting and modeling ARTBs observed with the Hinode X-Ray Telescope (XRT) as groups of zero-dimensional strands, which allows us to probe parameter space to better understand the spatial and temporal dependence of strand heating in impulsively heated loops. This partially automated method can be used to analyze a large number of observations to gain a statistical insight into the parameters of coronal structures, including the number of heating events required in a given model to fit the observations. In this article, we present the methodology and demonstrate its use in detecting and modeling ARTBs in a sample data set from Hinode/XRT. These initial results show that, in general, multiple heating events are necessary to reproduce observed ARTBs, but the spatial dependence of these heating events cannot yet be established.

  2. Observations and modeling of seismic background noise

    Science.gov (United States)

    Peterson, Jon R.

    1993-01-01

    The preparation of this report had two purposes. One was to present a catalog of seismic background noise spectra obtained from a worldwide network of seismograph stations. The other purpose was to refine and document models of seismic background noise that have been in use for several years. The second objective was, in fact, the principal reason that this study was initiated and influenced the procedures used in collecting and processing the data. With a single exception, all of the data used in this study were extracted from the digital data archive at the U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL). This archive dates from 1972 when ASL first began deploying digital seismograph systems and collecting and distributing digital data under the sponsorship of the Defense Advanced Research Projects Agency (DARPA). There have been many changes and additions to the global seismograph networks during the past twenty years, but perhaps none as significant as the current deployment of very broadband seismographs by the U.S. Geological Survey (USGS) and the University of California San Diego (UCSD) under the scientific direction of the IRIS consortium. The new data acquisition systems have extended the bandwidth and resolution of seismic recording, and they utilize high-density recording media that permit the continuous recording of broadband data. The data improvements and continuous recording greatly benefit and simplify surveys of seismic background noise. Although there are many other sources of digital data, the ASL archive data were used almost exclusively because of accessibility and because the data systems and their calibration are well documented for the most part. Fortunately, the ASL archive contains high-quality data from other stations in addition to those deployed by the USGS. 
Included are data from UCSD IRIS/IDA stations, the Regional Seismic Test Network (RSTN) deployed by Sandia National Laboratories (SNL), and the TERRAscope network

  3. A Nanoflare-Based Cellular Automaton Model and the Observed Properties of the Coronal Plasma

    Science.gov (United States)

    Lopez-Fuentes, Marcelo; Klimchuk, James Andrew

    2016-01-01

    We use the cellular automaton model described in Lopez Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  4. A NANOFLARE-BASED CELLULAR AUTOMATON MODEL AND THE OBSERVED PROPERTIES OF THE CORONAL PLASMA

    Energy Technology Data Exchange (ETDEWEB)

    Fuentes, Marcelo López [Instituto de Astronomía y Física del Espacio, CONICET-UBA, CC. 67, Suc. 28, 1428 Buenos Aires (Argentina); Klimchuk, James A., E-mail: lopezf@iafe.uba.ar [NASA Goddard Space Flight Center, Code 671, Greenbelt, MD 20771 (United States)

    2016-09-10

    We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.
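The EM slope quoted above is obtained by a straight-line fit in log(EM) versus log(T). A minimal sketch of that fit, using synthetic power-law data with a known slope rather than the authors' measurements, might look like this:

```python
import numpy as np

# Sketch (not the authors' code): estimate the emission measure slope a
# in EM(T) ~ T^a by a least-squares straight-line fit of log10(EM)
# against log10(T).
def em_slope(T, EM):
    """Slope of log10(EM) vs log10(T) from a degree-1 polynomial fit."""
    a, _intercept = np.polyfit(np.log10(T), np.log10(EM), 1)
    return a

# Synthetic example: a power law with slope 3.5 (inside the reported
# 2.7-4.3 range) plus mild lognormal scatter. All numbers illustrative.
rng = np.random.default_rng(0)
T = np.logspace(5.8, 6.4, 20)                       # temperature (K)
EM = 1e27 * (T / 1e6) ** 3.5 * rng.lognormal(0.0, 0.05, T.size)
print(f"fitted slope: {em_slope(T, EM):.2f}")
```

With small scatter the fit recovers the input slope closely; real EM(T) distributions require careful choice of the temperature range over which the power law holds.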

  5. Combined constraints on global ocean primary production using observations and models

    Science.gov (United States)

    Buitenhuis, Erik T.; Hashioka, Taketo; Quéré, Corinne Le

    2013-09-01

    Primary production is at the base of the marine food web and plays a central role in global biogeochemical cycles. Yet global ocean primary production is known only to within a factor of 2, with previous estimates ranging from 38 to 65 Pg C yr-1 and no formal uncertainty analysis. Here, we present an improved global ocean biogeochemistry model that includes a mechanistic representation of photosynthesis and a new observational database of net primary production (NPP) in the ocean. We combine the model and observations to constrain particulate NPP in the ocean with statistical metrics. The PlankTOM5.3 model includes a new photosynthesis formulation with a dynamic representation of iron-light colimitation, which leads to a considerable improvement of the interannual variability of surface chlorophyll. The database includes a consistent set of 50,050 measurements of 14C primary production. The model best reproduces observations when global NPP is 58 ± 7 Pg C yr-1, with a most probable value of 56 Pg C yr-1. The most probable value is robust to the model used. The uncertainty represents 95% confidence intervals. It considers all random errors in the model and observations, but not potential biases in the observations. We show that tropical regions (23°S-23°N) contribute half of the global NPP, while NPPs in the Northern and Southern Hemispheres are approximately equal in spite of the larger ocean area in the South.

  6. Mesoscale influence on long-range transport — evidence from ETEX modelling and observations

    Science.gov (United States)

    Sørensen, Jens Havskov; Rasmussen, Alix; Ellermann, Thomas; Lyck, Erik

    During the first European Tracer Experiment (ETEX) tracer gas was released from a site in Brittany, France, and subsequently observed over a range of 2000 km. Hourly measurements were taken at the National Environmental Research Institute (NERI) located at Risø, Denmark, using two measurement techniques. At this location, the observed concentration time series shows a double-peak structure occurring between two and three days after the release. By using the Danish Emergency Response Model of the Atmosphere (DERMA), which is developed at the Danish Meteorological Institute (DMI), simulations of the dispersion of the tracer gas have been performed. When DERMA is driven by numerical weather-prediction data from the European Centre for Medium-Range Weather Forecasts (ECMWF), the arrival time of the tracer is quite well predicted, as is the duration of the passage of the plume, but the double-peak structure is not reproduced. However, using higher-resolution data from the DMI version of the HIgh Resolution Limited Area Model (DMI-HIRLAM), DERMA reproduces the observed structure very well. The double-peak structure is caused by the influence of a mesoscale anti-cyclonic eddy on the tracer gas plume about one day earlier.

  7. Charge state evolution in the solar wind. III. Model comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)

    2014-08-01

    We test three theoretical models of the fast solar wind with a set of remote sensing observations and in-situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models are able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Last, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.

  8. 4-D modeling of CME expansion and EUV dimming observed with STEREO/EUVI

    Directory of Open Access Journals (Sweden)

    M. J. Aschwanden

    2009-08-01

    This is the first attempt to model the kinematics of a CME launch and the resulting EUV dimming quantitatively with a self-consistent model. Our 4-D model assumes self-similar expansion of a spherical CME geometry that consists of a CME front with density compression and a cavity with density rarefaction, satisfying mass conservation of the total CME and swept-up corona. The model contains 14 free parameters and is fitted to the 25 March 2008 CME event observed with STEREO/A and B. Our model is able to reproduce the observed CME expansion and related EUV dimming during the initial phase from 18:30 UT to 19:00 UT. The CME kinematics can be characterized by a constant acceleration (i.e., a constant magnetic driving force). While the observations of EUVI/A are consistent with a spherical bubble geometry, we detect significant asymmetries and density inhomogeneities with EUVI/B. This new forward-modeling method demonstrates how the observed EUV dimming can be used to model physical parameters of the CME source region, the CME geometry, and CME kinematics.
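Constant-acceleration kinematics of the kind used to characterize the CME front can be sketched in a few lines. The numbers below are illustrative placeholders, not the fitted values for the 25 March 2008 event:

```python
# Sketch of constant-acceleration CME front kinematics: with a constant
# driving force, the front height follows r(t) = r0 + v0*t + 0.5*a*t**2.
# r0, v0, and a below are hypothetical, not the paper's fitted values.
def cme_height(t_s, r0_km=7.0e4, v0_kms=100.0, a_kms2=0.2):
    """Front height (km) at time t_s (seconds) after launch."""
    return r0_km + v0_kms * t_s + 0.5 * a_kms2 * t_s ** 2

# Height gained over a 30 min interval, matching the 18:30-19:00 UT
# modeling window in duration only.
rise_km = cme_height(1800.0) - cme_height(0.0)
print(f"height gained in 30 min: {rise_km:.0f} km")
```

Fitting such a parabola to measured front positions yields the acceleration, and hence a proxy for the driving force, directly.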

  9. Charge state evolution in the solar wind. III. Model comparison with observations

    International Nuclear Information System (INIS)

    Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B.

    2014-01-01

    We test three theoretical models of the fast solar wind with a set of remote sensing observations and in-situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models are able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Last, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.

  10. Evaluation of Land Surface Models in Reproducing Satellite-Derived LAI over the High-Latitude Northern Hemisphere. Part I: Uncoupled DGVMs

    Directory of Open Access Journals (Sweden)

    Ning Zeng

    2013-10-01

    Leaf Area Index (LAI represents the total surface area of leaves above a unit area of ground and is a key variable in any vegetation model, as well as in climate models. New high resolution LAI satellite data is now available covering a period of several decades. This provides a unique opportunity to validate LAI estimates from multiple vegetation models. The objective of this paper is to compare new, satellite-derived LAI measurements with modeled output for the Northern Hemisphere. We compare monthly LAI output from eight land surface models from the TRENDY compendium with satellite data from an Artificial Neural Network (ANN from the latest version (third generation of GIMMS AVHRR NDVI data over the period 1986–2005. Our results show that all the models overestimate the mean LAI, particularly over the boreal forest. We also find that seven out of the eight models overestimate the length of the active vegetation-growing season, mostly due to a late dormancy as a result of a late summer phenology. Finally, we find that the models report a much larger positive trend in LAI over this period than the satellite observations suggest, which translates into a higher trend in the growing season length. These results highlight the need to incorporate a larger number of more accurate plant functional types in all models and, in particular, to improve the phenology of deciduous trees.

  11. Correlation between human observer performance and model observer performance in differential phase contrast CT

    International Nuclear Information System (INIS)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-01-01

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. 
The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD
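The inverse-square-root contrast-detail relation reported for the human observers can be sketched directly. The proportionality constant below is hypothetical; in the study it would be fit to the measured CD data points:

```python
import numpy as np

# Sketch of the contrast-detail (CD) relation found for the human
# observers: the threshold contrast c needed to detect a disk scales as
# the inverse square root of its diameter d, i.e. c = k / sqrt(d).
# The constant k = 0.05 is an illustrative placeholder.
def threshold_contrast(d, k=0.05):
    """Threshold contrast for a disk of diameter d (arbitrary units)."""
    return k / np.sqrt(d)

for d in (1.0, 4.0, 16.0):
    print(f"d = {d:4.0f} -> threshold contrast = {threshold_contrast(d):.4f}")
```

Quadrupling the disk diameter halves the required contrast, which is the signature slope of such CD curves on a log-log plot.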

  12. Reproducibility in a multiprocessor system

    Science.gov (United States)

    Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka

    2013-11-26

    Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system, and a scan of the system state.

  13. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others]

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model, which interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and which is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare it to available observations. We focus on stratospheric dynamics and chemistry relevant to describe the stratospheric ozone layer. The current model version ECHAM4.L39(DLR)/CHEM can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to formerly applied model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  14. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    Science.gov (United States)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still pointed out as being the largest source of uncertainty in the last generation of climate models. To date there is no consensus among the models on whether the tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds (i.e. z fraction. Vertically, the clouds deepen namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary layer. This feature is coincident with an increase of the high-level cloud fraction (z > 6.5km). Although the models' spread is large, the multi-model mean captures the observed variations but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand. Finally, we use CALIPSO observations to constrain the model by determining (i) which set of parameters allows reproducing the observed relationships and (ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental

  15. Global assessment of ocean carbon export by combining satellite observations and food-web models

    Science.gov (United States)

    Siegel, D. A.; Buesseler, K. O.; Doney, S. C.; Sailley, S. F.; Behrenfeld, M. J.; Boyd, P. W.

    2014-03-01

    The export of organic carbon from the surface ocean by sinking particles is an important, yet highly uncertain, component of the global carbon cycle. Here we introduce a mechanistic assessment of the global ocean carbon export using satellite observations, including determinations of net primary production and the slope of the particle size spectrum, to drive a food-web model that estimates the production of sinking zooplankton feces and algal aggregates comprising the sinking particle flux at the base of the euphotic zone. The synthesis of observations and models reveals fundamentally different and ecologically consistent regional-scale patterns in export and export efficiency not found in previous global carbon export assessments. The model reproduces regional-scale particle export field observations and predicts a climatological mean global carbon export from the euphotic zone of 6 Pg C yr-1. Global export estimates show small variation (typically model parameter values. The model is also robust to the choices of the satellite data products used and enables interannual changes to be quantified. The present synthesis of observations and models provides a path for quantifying the ocean's biological pump.

  16. Evaluation of Multiclass Model Observers in PET LROC Studies

    Science.gov (United States)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
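The NPW observer named above reduces, for a signal-known-exactly task, to correlating the image with the expected signal template. A minimal one-dimensional sketch (synthetic white-noise data, not the study's PET simulations) is:

```python
import numpy as np

# Hedged sketch of a nonprewhitening (NPW) model observer for a
# signal-known-exactly task: the test statistic is the inner product of
# the known signal template with the image; detectability d' follows
# from the separation of the statistic between signal-present and
# signal-absent trials. All data here are synthetic and illustrative.
rng = np.random.default_rng(1)
n, trials = 64, 2000
signal = np.zeros(n)
signal[28:36] = 1.0                      # known 1-D signal profile

def npw_statistic(img):
    """NPW test statistic: template-matched filter output."""
    return float(signal @ img)

present = [npw_statistic(signal + rng.normal(0, 1, n)) for _ in range(trials)]
absent = [npw_statistic(rng.normal(0, 1, n)) for _ in range(trials)]
dprime = (np.mean(present) - np.mean(absent)) / np.sqrt(
    0.5 * (np.var(present) + np.var(absent)))
print(f"NPW d' estimate: {dprime:.2f}")
```

For white noise the NPW observer is optimal and d' equals the signal norm (here sqrt(8) ≈ 2.83); in correlated CT or PET noise, prewhitening or channelization changes the picture, which is what the study probes.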

  17. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  18. What sea-ice biogeochemical modellers need from observers

    OpenAIRE

    Steiner, Nadja; Deal, Clara; Lannuzel, Delphine; Lavoie, Diane; Massonnet, François; Miller, Lisa A.; Moreau, Sebastien; Popova, Ekaterina; Stefels, Jacqueline; Tedesco, Letizia

    2016-01-01

    Abstract Numerical models can be a powerful tool helping to understand the role biogeochemical processes play in local and global systems and how this role may be altered in a changing climate. With respect to sea-ice biogeochemical models, our knowledge is severely limited by our poor confidence in numerical model parameterisations representing those processes. Improving model parameterisations requires communication between observers and modellers to guide model development and improve the ...

  19. Results of an interactively coupled atmospheric chemistry – general circulation model: Comparison with observations

    Directory of Open Access Journals (Sweden)

    R. Hein

    The coupled climate-chemistry model ECHAM4.L39(DLR/CHEM is presented which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks. This is the first model which interactively combines a general circulation model with a chemical model, employing most of the important reactions and species necessary to describe the stratospheric and upper tropospheric ozone chemistry, and which is computationally fast enough to allow long-term integrations with currently available computer resources. This is possible as the model time-step used for the chemistry can be chosen as large as the integration time-step for the dynamics. Vertically the atmosphere is discretized by 39 levels from the surface up to the top layer which is centred at 10 hPa, with a relatively high vertical resolution of approximately 700 m near the extra-tropical tropopause. We present the results of a control simulation representing recent conditions (1990 and compare it to available observations. The focus is on investigations of stratospheric dynamics and chemistry relevant to describe the stratospheric ozone layer. ECHAM4.L39(DLR/CHEM reproduces main features of stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to earlier model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their inter-hemispheric differences are reproduced. Considering methane oxidation as part of the dynamic-chemistry feedback results in an improved representation of the spatial distribution of stratospheric water vapour concentrations. 
The current model constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic

  20. Observations and a linear model of water level in an interconnected inlet-bay system

    Science.gov (United States)

    Aretxabaleta, Alfredo; Ganju, Neil K.; Butman, Bradford; Signell, Richard

    2017-01-01

    A system of barrier islands and back-barrier bays occurs along southern Long Island, New York, and in many coastal areas worldwide. Characterizing the bay physical response to water level fluctuations is needed to understand flooding during extreme events and evaluate their relation to geomorphological changes. Offshore sea level is one of the main drivers of water level fluctuations in semienclosed back-barrier bays. We analyzed observed water levels (October 2007 to November 2015) and developed analytical models to better understand bay water level along southern Long Island. An increase (∼0.02 m change in 0.17 m amplitude) in the dominant M2 tidal amplitude (containing the largest fraction of the variability) was observed in Great South Bay during mid-2014. The observed changes in both tidal amplitude and bay water level transfer from offshore were related to the dredging of nearby inlets and possibly the changing size of a breach across Fire Island caused by Hurricane Sandy (after December 2012). The bay response was independent of the magnitude of the fluctuations (e.g., storms) at a specific frequency. An analytical model that incorporates bay and inlet dimensions reproduced the observed transfer function in Great South Bay and surrounding areas. The model predicts the transfer function in Moriches and Shinnecock bays where long-term observations were not available. The model is a simplified tool to investigate changes in bay water level and enables the evaluation of future conditions and alternative geomorphological settings.
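Analytical models of the kind described, incorporating bay and inlet dimensions, are often linearized as a damped Helmholtz oscillator. The sketch below uses that common form with entirely hypothetical parameter values, not those fitted for Great South Bay:

```python
import numpy as np

# Hedged sketch of a linearized inlet-bay amplitude response: the bay
# behaves like a damped Helmholtz oscillator whose natural frequency is
# set by inlet cross-section A_c, inlet length L, and bay surface area
# A_b. All parameter values are illustrative placeholders.
g = 9.81                                 # gravity (m/s^2)
A_c, L, A_b = 2000.0, 3000.0, 2.0e8      # m^2, m, m^2 (hypothetical)
r = 6.0e-4                               # linearized friction (1/s)

omega0 = np.sqrt(g * A_c / (L * A_b))    # Helmholtz frequency (rad/s)

def transfer(omega):
    """Amplitude ratio of bay to offshore water level at frequency omega."""
    return 1.0 / np.sqrt((1.0 - (omega / omega0) ** 2) ** 2
                         + (r * omega / omega0 ** 2) ** 2)

omega_M2 = 2.0 * np.pi / (12.42 * 3600.0)   # M2 tidal frequency (rad/s)
print(f"M2 amplitude transfer: {transfer(omega_M2):.2f}")
```

A static offshore change (omega = 0) passes through unattenuated, while tidal-band forcing is damped by inlet friction; dredging an inlet raises A_c (and lowers friction), shifting the transfer function, which is how such a model links the observed M2 amplitude change to inlet modification.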

  1. A construction of observables for AKSZ sigma models

    OpenAIRE

    Mnev, Pavel

    2012-01-01

    A construction of gauge-invariant observables is suggested for a class of topological field theories, the AKSZ sigma-models. The observables are associated to extensions of the target Q-manifold of the sigma model to a Q-bundle over it with additional Hamiltonian structure in fibers.

  2. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

    We study the island universe model, in which initially the universe is in a cosmological constant sea, then the local quantum fluctuations violating the null energy condition create the islands of matter, some of which might corresponds to our observable universe. We examine the possibility that the island universe model is regarded as an alternative scenario of the origin of observable universe.

  3. An empirical model of the topside plasma density around 600 km based on ROCSAT-1 and Hinotori observations

    Science.gov (United States)

    Huang, He; Chen, Yiding; Liu, Libo; Le, Huijun; Wan, Weixing

    2015-05-01

    It is an urgent task to improve the ability of ionospheric empirical models to more precisely reproduce the plasma density variations in the topside ionosphere. Based on the Republic of China Satellite 1 (ROCSAT-1) observations, we developed a new empirical model of topside plasma density around 600 km under relatively quiet geomagnetic conditions. The model reproduces the ROCSAT-1 plasma density observations with a root-mean-square error of 0.125 in units of lg(Ni(cm-3)) and reasonably describes the temporal and spatial variations of plasma density at altitudes in the range from 550 to 660 km. The model results are also in good agreement with observations from Hinotori, the Coupled Ion-Neutral Dynamics Investigations/Communications/Navigation Outage Forecasting System satellites, and the incoherent scatter radar at Arecibo. Further, we combined ROCSAT-1 and Hinotori data to build an improved model (the R&H model) after the consistency between the two data sets had been confirmed with the original ROCSAT-1 model. In particular, we studied the solar activity dependence of topside plasma density at a fixed altitude with the R&H model and find that it differs slightly from the case in which the orbit altitude evolution is ignored. In addition, the R&H model shows the merging of the two crests of the equatorial ionization anomaly above the F2 peak, while the IRI_Nq topside option always produces two separate crests in this range of altitudes.

  4. Confronting Weather and Climate Models with Observational Data from Soil Moisture Networks over the United States

    Science.gov (United States)

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean; hide

    2016-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  5. Contrasting spatial structures of Atlantic Multidecadal Oscillation between observations and slab ocean model simulations

    Science.gov (United States)

    Sun, Cheng; Li, Jianping; Kucharski, Fred; Xue, Jiaqing; Li, Xiang

    2018-04-01

    The spatial structure of the Atlantic multidecadal oscillation (AMO) is analyzed and compared between observations and simulations from slab ocean models (SOMs) and fully coupled models. The observed sea surface temperature (SST) pattern of the AMO is characterized by a basin-wide monopole structure, and there is a significantly high degree of spatial coherence of decadal SST variations across the entire North Atlantic basin. The observed SST anomalies share a common decadal-scale signal, corresponding to the basin-wide average (i.e., the AMO). In contrast, the simulated AMO in SOMs (AMOs) exhibits a tripole-like structure, with the mid-latitude North Atlantic SST showing an inverse relationship with other parts of the basin, and the SOMs fail to reproduce the observed strong spatial coherence of decadal SST variations associated with the AMO. The observed spatial coherence of AMO SST anomalies is identified as a key feature that can be used to distinguish the AMO mechanism. The tripole-like SST pattern of AMOs in SOMs can be largely explained by the atmosphere-forced thermodynamics mechanism due to the surface heat flux changes associated with the North Atlantic Oscillation (NAO). The thermodynamic forcing of AMOs by the NAO gives rise to a simultaneous inverse NAO-AMOs relationship at both interannual and decadal timescales and a seasonal phase locking of the AMOs variability to the cold season. However, the NAO-forced thermodynamics mechanism cannot explain the observed NAO-AMO relationship and the seasonal phase locking of observed AMO variability to the warm season. At decadal timescales, a strong lagged relationship between NAO and AMO is observed, with the NAO leading by up to two decades, while the simultaneous correlation of NAO with AMO is weak. This lagged relationship and the spatial coherence of AMO can be well understood from the viewpoint of ocean dynamics. A time-integrated NAO index, which reflects the variations in Atlantic meridional overturning

  6. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

    Given the vast variety of fuzzy model-based observers reported in the literature, which would be the proper one to use for fault detection in a class of chemical reactor? In this study, four fuzzy model-based observers for sensor fault detection in a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
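
    The common core of these designs is a model-based state estimator whose output residual flags sensor faults. As a minimal, hedged illustration of that idea (a plain linear Luenberger observer, not the paper's fuzzy variants), the sketch below detects an additive sensor bias; the plant matrices, observer gain, and fault size are invented for illustration:

```python
import numpy as np

# Residual-based sensor fault detection with a linear Luenberger observer.
# Plant: x' = A x + B u, y = C x (+ bias after the fault step).
# Observer: x_hat' = A x_hat + B u + L (y - C x_hat).
A = np.array([[-0.5, 0.1], [0.0, -0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.2], [0.4]])              # gain chosen so A - L C is stable

dt, n_steps, fault_step, bias = 0.01, 2000, 1000, 0.5
x = np.array([[0.2], [-0.1]])             # true state
x_hat = np.zeros((2, 1))                  # observer state
residuals = []
for k in range(n_steps):
    u = 1.0                               # constant input
    y = (C @ x).item() + (bias if k >= fault_step else 0.0)
    r = y - (C @ x_hat).item()            # output residual
    residuals.append(r)
    x = x + dt * (A @ x + B * u)          # Euler step of the plant
    x_hat = x_hat + dt * (A @ x_hat + B * u + L * r)

threshold = 0.1
print(abs(residuals[fault_step - 1]) < threshold)  # True: no alarm pre-fault
print(abs(residuals[fault_step]) > threshold)      # True: alarm at fault onset
```

    Fuzzy variants blend several local linear observers with membership weights, but the residual-thresholding logic is the same.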

  7. Assimilation of Aircraft Observations in High-Resolution Mesoscale Modeling

    Directory of Open Access Journals (Sweden)

    Brian P. Reen

    2018-01-01

    Aircraft-based observations are a promising source of above-surface observations for assimilation into mesoscale model simulations. The Tropospheric Airborne Meteorological Data Reporting (TAMDAR) observations have potential advantages over some other aircraft observations, including the presence of water vapor observations. The impact of assimilating TAMDAR observations via observation nudging in 1 km horizontal grid spacing Weather Research and Forecasting model simulations is evaluated using five cases centered over California. Overall, the impact of assimilating the observations is mixed, with the layer showing the greatest benefit being above the surface in the lowest 1000 m above ground level and the variable showing the most consistent benefit being temperature. Varying the nudging configuration demonstrates the sensitivity of the results to details of the assimilation, but does not clearly demonstrate the superiority of a specific configuration.

  8. Production process reproducibility and product quality consistency of transient gene expression in HEK293 cells with anti-PD1 antibody as the model protein.

    Science.gov (United States)

    Ding, Kai; Han, Lei; Zong, Huifang; Chen, Junsheng; Zhang, Baohong; Zhu, Jianwei

    2017-03-01

    Demonstration of reproducibility and consistency of process and product quality is one of the most crucial issues in using transient gene expression (TGE) technology for biopharmaceutical development. In this study, we challenged the production consistency of TGE by expressing nine batches of recombinant IgG antibody in human embryonic kidney 293 cells to evaluate reproducibility including viable cell density, viability, apoptotic status, and antibody yield in cell culture supernatant. Product quality including isoelectric point, binding affinity, secondary structure, and thermal stability was assessed as well. In addition, major glycan forms of antibody from different batches of production were compared to demonstrate glycosylation consistency. Glycan compositions of the antibody harvested at different time periods were also measured to illustrate N-glycan distribution over the culture time. From the results, it has been demonstrated that different TGE batches are reproducible from lot to lot in overall cell growth, product yield, and product qualities including isoelectric point, binding affinity, secondary structure, and thermal stability. Furthermore, major N-glycan compositions are consistent among different TGE batches and conserved during cell culture time.

  9. Correlation between model observer and human observer performance in CT imaging when lesion location is uncertain

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Shuai; Yu, Lifeng; Zhang, Yi; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States); Carter, Rickey [Department of Biostatistics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States); Toledano, Alicia Y. [Biostatistics Consulting, LLC, 10606 Wheatley Street, Kensington, Maryland 20895 (United States)

    2013-08-15

    Purpose: The purpose of this study was to investigate the correlation between model observer and human observer performance in CT imaging for the task of lesion detection and localization when the lesion location is uncertain. Methods: Two cylindrical rods (3-mm and 5-mm diameters) were placed in a 35 × 26 cm torso-shaped water phantom to simulate lesions with −15 HU contrast at 120 kV. The phantom was scanned 100 times on a 128-slice CT scanner at each of four dose levels (CTDIvol = 5.7, 11.4, 17.1, and 22.8 mGy). Regions of interest (ROIs) around each lesion were extracted to generate signal-present images, each ROI containing 128 × 128 pixels. Corresponding signal-absent ROIs were generated from images without the lesion-mimicking rods. The location of the lesion (rod) in each ROI was randomly distributed by moving the ROIs around each lesion. Human observer studies were performed by having three trained observers identify the presence or absence of lesions, indicating the lesion location in each image and scoring confidence for the detection task on a 6-point scale. The same image data were analyzed using a channelized Hotelling model observer (CHO) with Gabor channels. Internal noise was added to the decision variables for the model observer study. The area under the curve (AUC) of ROC and localization ROC (LROC) curves was calculated using a nonparametric approach. The Spearman's rank order correlation between the average performance of the human observers and the model observer performance was calculated for the AUC of both ROC and LROC curves for both the 3- and 5-mm diameter lesions. Results: In both ROC and LROC analyses, AUC values for the model observer agreed well with the average values across the three human observers. The Spearman's rank order correlation values for both ROC and LROC analyses for both the 3- and 5-mm diameter lesions were all 1.0, indicating perfect rank ordering agreement of the figures of merit (AUC
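
    The nonparametric AUC used as the figure of merit here is equivalent to the Mann-Whitney U statistic: the fraction of (signal-present, signal-absent) score pairs ranked in the correct order, with ties counted as half. A minimal sketch with synthetic stand-in scores (not the study's rating data):

```python
import numpy as np

def auc_nonparametric(signal_scores, noise_scores):
    """Nonparametric AUC: fraction of (signal, noise) score pairs ordered
    correctly, ties counted as 0.5 (Mann-Whitney U / (n*m))."""
    s = np.asarray(signal_scores, float)[:, None]
    n = np.asarray(noise_scores, float)[None, :]
    return float(np.mean((s > n) + 0.5 * (s == n)))

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1000)    # signal-absent decision variables
signal = rng.normal(1.0, 1.0, 1000)   # signal-present, detectability d' = 1
auc = auc_nonparametric(signal, noise)
print(round(auc, 2))                  # theoretical AUC = Phi(d'/sqrt(2)) ~ 0.76
```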

  10. A Unimodal Model for Double Observer Distance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Earl F Becker

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, at which the two observers are independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
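
    A two-piece normal detection function can be sketched as two half-normals joined at a single apex, which is what makes it compatible with the point independence assumption. The parameter names and values below are illustrative, not the fitted bear-survey model:

```python
import math

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unimodal detection function: half-normals with different spreads
    joined at one apex, scaled so detection probability is 1 at the apex."""
    s = sigma_left if x < apex else sigma_right
    return math.exp(-0.5 * ((x - apex) / s) ** 2)

# Detection peaks at the apex and falls off asymmetrically on each side.
g = lambda x: two_piece_normal(x, apex=80.0, sigma_left=40.0, sigma_right=120.0)
print(g(80.0))                          # 1.0 at the apex
print(g(40.0) < g(120.0))               # True: slower fall-off to the right
```

    Because the apex is a single point regardless of covariate values on each side, it provides the one distance at which the two observers can be assumed independent.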

  11. Cosmic microwave background observables of small field models of inflation

    International Nuclear Information System (INIS)

    Ben-Dayan, Ido; Brustein, Ram

    2010-01-01

    We construct a class of single small field models of inflation that can predict, contrary to popular wisdom, an observable gravitational wave signal in the cosmic microwave background anisotropies. The spectral index, its running, the tensor to scalar ratio and the number of e-folds can cover all the parameter space currently allowed by cosmological observations. A unique feature of models in this class is their ability to predict a negative spectral index running in accordance with recent cosmic microwave background observations. We discuss the new class of models from an effective field theory perspective and show that if the dimensionless trilinear coupling is small, as required for consistency, then the observed spectral index running implies a high scale of inflation and hence an observable gravitational wave signal. All the models share a distinct prediction of higher power at smaller scales, making them easy targets for detection

  12. Contextual sensitivity in scientific reproducibility.

    Science.gov (United States)

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-07

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researchers' degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.

  13. Assimilating uncertain, dynamic and intermittent streamflow observations in hydrological models

    Science.gov (United States)

    Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon-Hurtado, Juan; Solomatine, Dimitri

    2015-09-01

    Catastrophic floods cause significant socioeconomic losses. Non-structural measures, such as real-time flood forecasting, can potentially reduce flood risk. To this end, data assimilation methods have been used to improve flood forecasts by integrating static ground observations, and in some cases also remote sensing observations, within water models. Current hydrologic and hydraulic research considers assimilation of observations coming from traditional, static sensors. At the same time, low-cost mobile sensors and mobile communication devices are becoming increasingly available. The main goal and innovation of this study is to demonstrate the usefulness of assimilating uncertain streamflow observations that are dynamic in space and intermittent in time in the context of two different semi-distributed hydrological model structures. The developed method is applied to the Brue basin, where the dynamic observations are imitated by synthetic observations of discharge. The results of this study show how model structures and sensor locations affect the assimilation of streamflow observations in different ways. In addition, they show how assimilation of such uncertain observations from dynamic sensors can provide model improvements similar to those obtained from streamflow observations coming from a non-optimal network of static physical sensors. This is a potential application of recent efforts to build citizen observatories of water, which can make citizens an active part in information capturing, evaluation, and communication, simultaneously helping to improve model-based flood forecasting.
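
    A minimal scalar analogue of the assimilation idea, heavily hedged: a toy AR(1) "streamflow" truth with a Kalman update applied only at intermittent observation times. The model, noise levels, and sensor schedule are invented stand-ins, not the paper's semi-distributed Brue-basin setup:

```python
import numpy as np

# Toy assimilation of uncertain, intermittent streamflow observations.
rng = np.random.default_rng(1)
n, phi = 200, 0.95
truth = np.zeros(n)
for t in range(1, n):                     # synthetic AR(1) "streamflow"
    truth[t] = phi * truth[t - 1] + rng.normal(0, 1.0)

obs_noise_var, model_var = 0.5, 1.0
obs_times = set(range(0, n, 10))          # intermittent: every 10th step only

x_free = np.zeros(n)                      # model run without assimilation
x_da, P = np.zeros(n), 1.0                # assimilated run and its variance
for t in range(1, n):
    x_free[t] = phi * x_free[t - 1]
    x_da[t] = phi * x_da[t - 1]           # forecast step
    P = phi ** 2 * P + model_var
    if t in obs_times:                    # update only when a sensor reports
        y = truth[t] + rng.normal(0, np.sqrt(obs_noise_var))
        K = P / (P + obs_noise_var)       # Kalman gain
        x_da[t] += K * (y - x_da[t])
        P = (1 - K) * P

rmse = lambda x: float(np.sqrt(np.mean((x - truth) ** 2)))
print(rmse(x_da) < rmse(x_free))          # True: assimilation reduces error
```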

  14. Reproducibility, controllability, and optimization of LENR experiments

    Energy Technology Data Exchange (ETDEWEB)

    Nagel, David J. [The George Washington University, Washington DC 20052 (United States)

    2006-07-01

    Low-energy nuclear reaction (LENR) measurements are significantly, and increasingly, reproducible. Practical control of the production of energy or materials by LENR has yet to be demonstrated. Minimization of costly inputs and maximization of desired outputs of LENR remain for future developments. The paper concludes by noting that it is now clear that demands for reproducible experiments in the early years of LENR research were premature. In fact, one can argue that irreproducibility should be expected for early experiments in a complex new field. As emphasized in the paper, and as has often happened in the history of science, experimental and theoretical progress can take decades. It is likely to be many years before investments in LENR experiments yield significant returns, even for successful research programs. However, it is clear that a fundamental understanding of the anomalous effects observed in numerous experiments would significantly increase reproducibility, improve controllability, enable optimization of processes, and accelerate the economic viability of LENR.

  15. Reproducibility, controllability, and optimization of LENR experiments

    International Nuclear Information System (INIS)

    Nagel, David J.

    2006-01-01

    Low-energy nuclear reaction (LENR) measurements are significantly, and increasingly, reproducible. Practical control of the production of energy or materials by LENR has yet to be demonstrated. Minimization of costly inputs and maximization of desired outputs of LENR remain for future developments. The paper concludes by noting that it is now clear that demands for reproducible experiments in the early years of LENR research were premature. In fact, one can argue that irreproducibility should be expected for early experiments in a complex new field. As emphasized in the paper, and as has often happened in the history of science, experimental and theoretical progress can take decades. It is likely to be many years before investments in LENR experiments yield significant returns, even for successful research programs. However, it is clear that a fundamental understanding of the anomalous effects observed in numerous experiments would significantly increase reproducibility, improve controllability, enable optimization of processes, and accelerate the economic viability of LENR.

  16. Evaluation of internal noise methods for Hotelling observer models

    International Nuclear Information System (INIS)

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-01-01

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero-mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial-to-trial basis. We tested three model observers: the square window Hotelling observer (HO), the channelized Hotelling observer (CHO), and the Laguerre-Gauss Hotelling observer (LGHO), using a four-alternative forced-choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers in the choice of methods to include internal noise in Hotelling model observers when evaluating and optimizing medical image quality.
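
    The decision-variable strategy, variant (a), can be sketched as zero-mean Gaussian noise whose standard deviation is proportional to the decision variable's variability under external noise alone. The decision variables, the proportionality constant alpha, and the yes/no criterion below are illustrative assumptions, not the study's fitted values:

```python
import numpy as np

# Internal noise added to the decision variable, with standard deviation
# alpha times the external-noise standard deviation of that variable.
rng = np.random.default_rng(2)
n_trials, alpha = 20000, 1.5

dv_noise = rng.normal(0.0, 1.0, n_trials)    # signal-absent decision variables
dv_signal = rng.normal(2.0, 1.0, n_trials)   # signal-present decision variables

sigma_ext = dv_noise.std()                   # external-noise variability
internal = lambda dv: dv + rng.normal(0.0, alpha * sigma_ext, dv.shape)

def pct_correct(ds, dn, criterion=1.0):
    """Yes/no proportion correct with a fixed decision criterion."""
    return 0.5 * (np.mean(ds > criterion) + np.mean(dn <= criterion))

ideal = pct_correct(dv_signal, dv_noise)
degraded = pct_correct(internal(dv_signal), internal(dv_noise))
print(degraded < ideal)                      # True: internal noise lowers performance
```

    Tuning alpha degrades the model observer from its noise-free performance toward the measured human level, which is how such comparisons are calibrated.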

  17. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  18. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M*|Mh), which specify the average number of galaxies of stellar mass M* that reside in a halo of mass Mh. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.
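
    The conditional stellar mass function can be estimated from any galaxy catalog by binning galaxies in stellar mass within halo-mass bins and normalizing per halo per dex. A toy sketch on random placeholder data (not SAM output or the SDSS group catalog):

```python
import numpy as np

# Toy conditional stellar mass function Phi(M*|Mh): mean number of member
# galaxies per halo, per dex of stellar mass, in bins of halo mass.
rng = np.random.default_rng(3)
n_halos = 500
halo_mass = rng.uniform(11.0, 15.0, n_halos)      # log10(Mh), synthetic
# Each halo hosts 3 galaxies with log10(M*) loosely tied to log10(Mh).
gal_halo = np.repeat(np.arange(n_halos), 3)
gal_mass = halo_mass[gal_halo] - 2.0 + rng.normal(0, 0.3, gal_halo.size)

mh_bins = np.array([11.0, 13.0, 15.0])
ms_bins = np.arange(8.0, 14.1, 0.5)
dlogm = 0.5
for i in range(len(mh_bins) - 1):
    in_bin = (halo_mass >= mh_bins[i]) & (halo_mass < mh_bins[i + 1])
    members = in_bin[gal_halo]                    # galaxies in these halos
    counts, _ = np.histogram(gal_mass[members], bins=ms_bins)
    phi = counts / max(in_bin.sum(), 1) / dlogm   # galaxies per halo per dex
    print(round(phi.sum() * dlogm, 1))            # ~3 galaxies per halo by construction
```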

  19. Fast neutrons and the optical model: some observations

    International Nuclear Information System (INIS)

    Smith, A.B.; Lawson, R.D.; Guenther, P.T.

    1985-01-01

    The optical model of fast-neutron-induced phenomena is considered from the observational viewpoint. Experimental characteristics governing the reliability of the modeling are outlined with attention to implications on model parameters and their uncertainties. The physical characteristics of experimentally-deduced "regional" and "specific" model parameters are examined, including: parameter trends with mass and energy, implications of collective effects, and fundamental relations between real and imaginary potentials. These physical properties are illustrated by studies in the A=60 and 90 regions. General trends are identified and outstanding issues cited. Throughout, the approach is that of observational interpretation for basic and applied purposes. 20 refs., 11 figs., 2 tabs

  20. COCOA code for creating mock observations of star cluster models

    Science.gov (United States)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2018-04-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.

  1. Properties of galaxies reproduced by a hydrodynamic simulation

    Science.gov (United States)

    Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Sijacki, D.; Xu, D.; Snyder, G.; Bird, S.; Nelson, D.; Hernquist, L.

    2014-05-01

    Previous simulations of the growth of cosmic structures have broadly reproduced the `cosmic web' of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models. Moreover, they were unable to track the small-scale evolution of gas and stars to the present epoch within a representative portion of the Universe. Here we report a simulation that starts 12 million years after the Big Bang, and traces 13 billion years of cosmic evolution with 12 billion resolution elements in a cube of 106.5 megaparsecs a side. It yields a reasonable population of ellipticals and spirals, reproduces the observed distribution of galaxies in clusters and characteristics of hydrogen on large scales, and at the same time matches the `metal' and hydrogen content of galaxies on small scales.

  2. Southeast Atmosphere Studies: learning from model-observation syntheses

    Data.gov (United States)

    U.S. Environmental Protection Agency — Observed and modeled data shown in figure 2b-c. This dataset is associated with the following publication: Mao, J., A. Carlton, R. Cohen, W. Brune, S. Brown, G....

  3. Meridional Flow Observations: Implications for the current Flux Transport Models

    International Nuclear Information System (INIS)

    Gonzalez Hernandez, Irene; Komm, Rudolf; Kholikov, Shukur; Howe, Rachel; Hill, Frank

    2011-01-01

    Meridional circulation has become a key element in the solar dynamo flux transport models. Available helioseismic observations from several instruments, Taiwan Oscillation Network (TON), Global Oscillation Network Group (GONG) and Michelson Doppler Imager (MDI), have made possible a continuous monitoring of the solar meridional flow in the subphotospheric layers for the last solar cycle, including the recent extended minimum. Here we review some of the meridional circulation observations using local helioseismology techniques and relate them to magnetic flux transport models.

  4. Perfect fluid models in noncomoving observational spherical coordinates

    International Nuclear Information System (INIS)

    Ishak, Mustapha

    2004-01-01

    We use null spherical (observational) coordinates to describe a class of inhomogeneous cosmological models. The proposed cosmological construction is based on the observer's past null cone. A known difficulty in using inhomogeneous models is that the null geodesic equation is not integrable in general. Our choice of null coordinates solves the radial ingoing null geodesic by construction. Furthermore, we use an approach where the velocity field is uniquely calculated from the metric rather than put in by hand. Conveniently, this allows us to explore models in a noncomoving frame of reference. In this frame, we find that the velocity field has shear, acceleration, and expansion rate in general. We show that a comoving frame is not compatible with expanding perfect fluid models in the coordinates proposed, and that dust models are simply not possible. We describe the models in a noncomoving frame and use them to outline a fitting procedure.

  5. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided

  6. A time-symmetric Universe model and its observational implication

    International Nuclear Information System (INIS)

    Futamase, T.; Matsuda, T.

    1987-01-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. The observational consequences of such advanced waves are considered, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase

  7. Time-symmetric universe model and its observational implication

    Energy Technology Data Exchange (ETDEWEB)

    Futamase, T.; Matsuda, T.

    1987-08-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. We consider the observational consequences of such advanced waves, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase.

  8. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Energy Technology Data Exchange (ETDEWEB)

    Meloni, Davide [Dipartimento di Matematica e Fisica, Università di Roma Tre,Via della Vasca Navale 84, 00146 Rome (Italy); Ohlsson, Tommy; Riad, Stella [Department of Physics, School of Engineering Sciences,KTH Royal Institute of Technology - AlbaNova University Center,Roslagstullsbacken 21, 106 91 Stockholm (Sweden)

    2017-03-08

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10_H, 120_H, and 126_H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M_I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimentally measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values of up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10_H and 126_H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.

  9. Interannual Rainfall Variability in North-East Brazil: Observation and Model Simulation

    Science.gov (United States)

    Harzallah, A.; Rocha de Aragão, J. O.; Sadourny, R.

    1996-08-01

    The relationship between interannual variability of rainfall in north-east Brazil and tropical sea-surface temperature is studied using observations and model simulations. The simulated precipitation is the average of seven independent realizations performed using the Laboratoire de Météorologie Dynamique atmospheric general circulation model forced by the 1970-1988 observed sea-surface temperature. The model reproduces the rainfall anomalies very well (correlation of 0.91 between observed and modelled anomalies). The study confirms that precipitation in north-east Brazil is highly correlated with the sea-surface temperature in the tropical Atlantic and Pacific oceans. Using the singular value decomposition method, we find that Nordeste rainfall is modulated by two independent oscillations, both governed by the Atlantic dipole, but one involving only the Pacific, the other having a period of about 10 years. Correlations between precipitation in north-east Brazil during February-May and the sea-surface temperature 6 months earlier indicate that both modes are essential for estimating the quality of the rainy season.

  10. Cosmological observables in the quasi-spherical Szekeres model

    Science.gov (United States)

    Buckley, Robert G.

    2014-10-01

    The standard model of cosmology presents a homogeneous universe, and we interpret cosmological data through this framework. However, structure growth creates nonlinear inhomogeneities that may affect observations, and even larger structures may be hidden by our limited vantage point and small number of independent observations. As we determine the universe's parameters with increasing precision, the accuracy is contingent on our understanding of the effects of such structures. For instance, giant void models can explain some observations without dark energy. Because perturbation theory cannot adequately describe nonlinear inhomogeneities, exact solutions to the equations of general relativity are important for these questions. The most general known solution capable of describing inhomogeneous matter distributions is the Szekeres class of models. In this work, we study the quasi-spherical subclass of these models, using numerical simulations to calculate the inhomogeneities' effects on observations. We calculate the large-angle CMB in giant void models and compare with simpler, symmetric void models that have previously been found inadequate to match observations. We extend this by considering models with early-time inhomogeneities as well. Then, we study distance observations, including selection effects, in models which are homogeneous on scales around 100 Mpc---consistent with standard cosmology---but inhomogeneous on smaller scales. Finally, we consider photon polarizations, and show that they are not directly affected by inhomogeneities. Overall, we find that while Szekeres models have some advantages over simpler models, they are still seriously limited in their ability to alter our parameter estimation while remaining within the bounds of current observations.

  11. A comparison of large scale changes in surface humidity over land in observations and CMIP3 general circulation models

    International Nuclear Information System (INIS)

    Willett, Katharine M; Thorne, Peter W; Jones, Philip D; Gillett, Nathan P

    2010-01-01

    Observed changes in the HadCRUH global land surface specific humidity and CRUTEM3 surface temperature from 1973 to 1999 are compared to CMIP3 archive climate model simulations with 20th Century forcings. Observed humidity increases are proportionately largest in the Northern Hemisphere, especially in winter. At the largest spatio-temporal scales moistening is close to the Clausius-Clapeyron scaling of the saturated specific humidity (∼7% K⁻¹). At smaller scales in water-limited regions, changes in specific humidity are strongly inversely correlated with total changes in temperature. Conversely, in some regions increases are faster than implied by the Clausius-Clapeyron relation. The range of climate model specific humidity seasonal climatology and variance encompasses the observations. The models also reproduce the magnitude of observed interannual variance over all large regions. Observed and modelled trends and temperature-humidity relationships are comparable except for the extratropical Southern Hemisphere where observations exhibit no trend but models exhibit moistening. This may arise from: long-term biases remaining in the observations; the relative paucity of observational coverage; or common model errors. The overall degree of consistency of anthropogenically forced models with the observations is further evidence for anthropogenic influence on the climate of the late 20th century.
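The Clausius-Clapeyron scaling quoted in this abstract (∼7% K⁻¹) follows from d(ln e_s)/dT = L/(R_v T²). A minimal sketch using standard approximate constants (not values taken from the study):

```python
# Fractional increase of saturation vapour pressure per kelvin from the
# Clausius-Clapeyron relation: d(ln e_s)/dT = L / (R_v * T^2).
L_V = 2.5e6    # latent heat of vaporization of water, J/kg (approximate)
R_V = 461.5    # specific gas constant of water vapour, J/(kg K)

def cc_scaling(temperature_k):
    """Fractional change of saturation vapour pressure per K at temperature_k (K)."""
    return L_V / (R_V * temperature_k ** 2)

# A near-surface global-mean temperature of about 288 K gives roughly 6.5% per K,
# consistent with the ~7% per K figure commonly quoted.
print(f"{cc_scaling(288.0) * 100:.1f}% per K")
```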

  12. Research Opportunities from Emerging Atmospheric Observing and Modeling Capabilities.

    Science.gov (United States)

    Dabberdt, Walter F.; Schlatter, Thomas W.

    1996-02-01

    The Second Prospectus Development Team (PDT-2) of the U.S. Weather Research Program was charged with identifying research opportunities that are best matched to emerging operational and experimental measurement and modeling methods. The overarching recommendation of PDT-2 is that inputs for weather forecast models can best be obtained through the use of composite observing systems together with adaptive (or targeted) observing strategies employing both in situ and remote sensing. Optimal observing systems and strategies are best determined through a three-part process: observing system simulation experiments, pilot field measurement programs, and model-assisted data sensitivity experiments. Furthermore, the mesoscale research community needs easy and timely access to the new operational and research datasets in a form that can readily be reformatted into existing software packages for analysis and display. The value of these data is diminished to the extent that they remain inaccessible. The composite observing system of the future must combine synoptic observations, routine mobile observations, and targeted observations, as the current or forecast situation dictates. High costs demand fuller exploitation of commercial aircraft, meteorological and navigation [Global Positioning System (GPS)] satellites, and Doppler radar. Single observing systems must be assessed in the context of a composite system that provides complementary information. Maintenance of the current North American rawinsonde network is critical for progress in both research-oriented and operational weather forecasting. Adaptive sampling strategies are designed to improve large-scale and regional weather prediction but they will also improve diagnosis and prediction of flash flooding, air pollution, forest fire management, and other environmental emergencies. Adaptive measurements can be made by piloted or unpiloted aircraft. Rawinsondes can be launched and satellites can be programmed to make

  13. Contextual sensitivity in scientific reproducibility

    Science.gov (United States)

    Van Bavel, Jay J.; Mende-Siedlecki, Peter; Brady, William J.; Reinero, Diego A.

    2016-01-01

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed “hidden moderators”) between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility. PMID:27217556

  14. Asymptotic behavior of observables in the asymmetric quantum Rabi model

    Science.gov (United States)

    Semple, J.; Kollar, M.

    2018-01-01

    The asymmetric quantum Rabi model with broken parity invariance shows spectral degeneracies in the integer case, that is when the asymmetry parameter equals an integer multiple of half the oscillator frequency, thus hinting at a hidden symmetry and accompanying integrability of the model. We study the expectation values of spin observables for each eigenstate and observe characteristic differences between the integer and noninteger cases for the asymptotics in the deep strong coupling regime, which can be understood from a perturbative expansion in the qubit splitting. We also construct a parent Hamiltonian whose exact eigenstates possess the same symmetries as the perturbative eigenstates of the asymmetric quantum Rabi model in the integer case.

  15. Tests of Financial Models in the Presence of Overlapping Observations.

    OpenAIRE

    Richardson, Matthew; Smith, Tom

    1991-01-01

    A general approach to testing serial dependence restrictions implied from financial models is developed. In particular, we discuss joint serial dependence restrictions imposed by random walk, market microstructure, and rational expectations models recently examined in the literature. This approach incorporates more information from the data by explicitly modeling dependencies induced by the use of overlapping observations. Because the estimation problem is sufficiently simple in this framewor...

  16. Reproducibility of somatosensory spatial perceptual maps.

    Science.gov (United States)

    Steenbergen, Peter; Buitenweg, Jan R; Trojan, Jörg; Veltink, Peter H

    2013-02-01

    Various studies have shown subjects to mislocalize cutaneous stimuli in an idiosyncratic manner. Spatial properties of individual localization behavior can be represented in the form of perceptual maps. Individual differences in these maps may reflect properties of internal body representations, and perceptual maps may therefore be a useful method for studying these representations. For this to be the case, individual perceptual maps need to be reproducible, which has not yet been demonstrated. We assessed the reproducibility of localizations measured twice on subsequent days. Ten subjects participated in the experiments. Non-painful electrocutaneous stimuli were applied at seven sites on the lower arm. Subjects localized the stimuli on a photograph of their own arm, which was presented on a tablet screen overlaying the real arm. Reproducibility was assessed by calculating intraclass correlation coefficients (ICC) for the mean localizations of each electrode site and the slope and offset of regression models of the localizations, which represent scaling and displacement of perceptual maps relative to the stimulated sites. The ICCs of the mean localizations ranged from 0.68 to 0.93; the ICCs of the regression parameters were 0.88 for the intercept and 0.92 for the slope. These results indicate a high degree of reproducibility. We conclude that localization patterns of non-painful electrocutaneous stimuli on the arm are reproducible on subsequent days. Reproducibility is a necessary property of perceptual maps for these to reflect properties of a subject's internal body representations. Perceptual maps are therefore a promising method for studying body representations.
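The intraclass correlation coefficients reported above can be illustrated with a one-way random-effects ICC(1,1) on a subjects × sessions table. This is a generic sketch with hypothetical data; the study may have used a different ICC variant:

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1) for a subjects x sessions array."""
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    # between-subjects and within-subjects mean squares from one-way ANOVA
    msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# two sessions of localization offsets (cm) for five hypothetical subjects
scores = np.array([[7.1, 7.3], [5.0, 4.8], [9.2, 9.0], [6.4, 6.7], [8.1, 8.0]])
print(round(icc_1_1(scores), 2))  # prints 0.99: highly reproducible sessions
```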

  17. Foundation observation of teaching project--a developmental model of peer observation of teaching.

    Science.gov (United States)

    Pattison, Andrew Timothy; Sherwood, Morgan; Lumsden, Colin James; Gale, Alison; Markides, Maria

    2012-01-01

    Peer observation of teaching is important in the development of educators. The foundation curriculum specifies teaching competencies that must be attained. We created a developmental model of peer observation of teaching to help our foundation doctors achieve these competencies and develop as educators. A process for peer observation was created based on key features of faculty development. The project consisted of a pre-observation meeting, the observation, a post-observation debrief, writing of reflective reports and group feedback sessions. The project was evaluated by completion of questionnaires and focus groups held with both foundation doctors and the students they taught to achieve triangulation. Twenty-one foundation doctors took part. All completed reflective reports on their teaching. Participants described the process as useful in their development as educators, citing specific examples of changes to their teaching practice. Medical students rated the sessions as better or much better in quality than their usual teaching. The study highlights the benefits of the project to individual foundation doctors, undergraduate medical students and faculty. It acknowledges potential anxieties involved in having teaching observed. A structured programme of observation of teaching can deliver specific teaching competencies required by foundation doctors and provides additional benefits.

  18. Financial Viability of Emergency Department Observation Unit Billing Models.

    Science.gov (United States)

    Baugh, Christopher W; Suri, Pawan; Caspers, Christopher G; Granovsky, Michael A; Neal, Keith; Ross, Michael A

    2018-05-16

    Outpatients receive observation services to determine the need for inpatient admission. These services are usually provided without the use of condition-specific protocols and in an unstructured manner, scattered throughout a hospital in areas typically designated for inpatient care. Emergency department observation units (EDOUs) use protocolized care to offer an efficient alternative with shorter lengths of stay, lower costs and higher patient satisfaction. EDOU growth is limited by existing policy barriers that prevent a "two-service" model of separate professional billing for both emergency and observation services. The majority of EDOUs use the "one-service" model, where a single composite professional fee is billed for both emergency and observation services. The financial implications of these models are not well understood. We created a Monte Carlo simulation by building a model that reflects current clinical practice in the United States and uses inputs gathered from the most recently available peer-reviewed literature, national survey and payer data. Using this simulation, we modeled annual staffing costs and payments for professional services under two common models of care in an EDOU. We also modeled cash flows over a continuous range of daily EDOU patient encounters to illustrate the dynamic relationship between costs and revenue over various staffing levels. We estimate the mean (±SD) annual net cash flow to be a net loss of $315,382 ±$89,635 in the one-service model and a net profit of $37,569 ±$359,583 in the two-service model. The two-service model is financially sustainable at daily billable encounters above 20 while in the one-service model, costs exceed revenue regardless of encounter count. Physician cost per hour and daily patient encounters had the most significant impact on model estimates. In the one-service model, EDOU staffing costs exceed payments at all levels of patient encounters, making a hospital subsidy necessary to create a
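The Monte Carlo approach described above can be sketched generically: draw daily encounter counts, accumulate payments minus staffing costs over a year, and average over trials. All parameters below are hypothetical placeholders, not figures from the study:

```python
import random

random.seed(42)

def simulate_annual_net(mean_encounters, payment_per_encounter,
                        daily_staff_cost, trials=2000):
    """Monte Carlo estimate of mean annual net cash flow for a billing model."""
    totals = []
    for _ in range(trials):
        annual = 0.0
        for _ in range(365):
            # daily encounters vary around the mean (hypothetical 20% spread)
            encounters = max(random.gauss(mean_encounters, 0.2 * mean_encounters), 0.0)
            annual += encounters * payment_per_encounter - daily_staff_cost
        totals.append(annual)
    return sum(totals) / len(totals)

# hypothetical unit: profitable only when daily revenue exceeds staffing cost
print(simulate_annual_net(mean_encounters=25, payment_per_encounter=150,
                          daily_staff_cost=3000))
```

Sensitivity follows naturally from such a model: sweeping `mean_encounters` or `daily_staff_cost` over a range reveals the break-even encounter count, mirroring the threshold behavior the abstract reports.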

  19. Tropically driven and externally forced patterns of Antarctic sea ice change: reconciling observed and modeled trends

    Science.gov (United States)

    Schneider, David P.; Deser, Clara

    2017-09-01

    Recent work suggests that natural variability has played a significant role in the increase of Antarctic sea ice extent during 1979-2013. The ice extent has responded strongly to atmospheric circulation changes, including a deepened Amundsen Sea Low (ASL), which in part has been driven by tropical variability. Nonetheless, this increase has occurred in the context of externally forced climate change, and it has been difficult to reconcile observed and modeled Antarctic sea ice trends. To understand observed-model disparities, this work defines the internally driven and radiatively forced patterns of Antarctic sea ice change and exposes potential model biases using results from two sets of historical experiments of a coupled climate model compared with observations. One ensemble is constrained only by external factors such as greenhouse gases and stratospheric ozone, while the other explicitly accounts for the influence of tropical variability by specifying observed SST anomalies in the eastern tropical Pacific. The latter experiment reproduces the deepening of the ASL, which drives an increase in regional ice extent due to enhanced ice motion and sea surface cooling. However, the overall sea ice trend in every ensemble member of both experiments is characterized by ice loss and is dominated by the forced pattern, as given by the ensemble-mean of the first experiment. This pervasive ice loss is associated with a strong warming of the ocean mixed layer, suggesting that the ocean model does not locally store or export anomalous heat efficiently enough to maintain a surface environment conducive to sea ice expansion. The pervasive upper-ocean warming, not seen in observations, likely reflects ocean mean-state biases.

  20. Tropically driven and externally forced patterns of Antarctic sea ice change: reconciling observed and modeled trends

    Science.gov (United States)

    Schneider, David P.; Deser, Clara

    2018-06-01

    Recent work suggests that natural variability has played a significant role in the increase of Antarctic sea ice extent during 1979-2013. The ice extent has responded strongly to atmospheric circulation changes, including a deepened Amundsen Sea Low (ASL), which in part has been driven by tropical variability. Nonetheless, this increase has occurred in the context of externally forced climate change, and it has been difficult to reconcile observed and modeled Antarctic sea ice trends. To understand observed-model disparities, this work defines the internally driven and radiatively forced patterns of Antarctic sea ice change and exposes potential model biases using results from two sets of historical experiments of a coupled climate model compared with observations. One ensemble is constrained only by external factors such as greenhouse gases and stratospheric ozone, while the other explicitly accounts for the influence of tropical variability by specifying observed SST anomalies in the eastern tropical Pacific. The latter experiment reproduces the deepening of the ASL, which drives an increase in regional ice extent due to enhanced ice motion and sea surface cooling. However, the overall sea ice trend in every ensemble member of both experiments is characterized by ice loss and is dominated by the forced pattern, as given by the ensemble-mean of the first experiment. This pervasive ice loss is associated with a strong warming of the ocean mixed layer, suggesting that the ocean model does not locally store or export anomalous heat efficiently enough to maintain a surface environment conducive to sea ice expansion. The pervasive upper-ocean warming, not seen in observations, likely reflects ocean mean-state biases.

  1. Additive Manufacturing: Reproducibility of Metallic Parts

    Directory of Open Access Journals (Sweden)

    Konda Gokuldoss Prashanth

    2017-02-01

    Full Text Available The present study deals with the properties of five different metals/alloys (Al-12Si, Cu-10Sn, and 316L: face-centered cubic structure; CoCrMo and commercially pure Ti (CP-Ti): hexagonal close-packed structure) fabricated by selective laser melting. The room temperature tensile properties of Al-12Si samples show good consistency in results within the experimental errors. Similar reproducible results were observed for sliding wear and corrosion experiments. The other metal/alloy systems also show repeatable tensile properties, with the tensile curves overlapping until the yield point. The curves may then follow the same path or show a marginal deviation (~10 MPa) until they reach the ultimate tensile strength, and a negligible difference in ductility levels (of ~0.3%) is observed between the samples. The results show that selective laser melting is a reliable fabrication method to produce metallic materials with consistent and reproducible properties.

  2. Modelled radiative forcing of the direct aerosol effect with multi-observation evaluation

    Directory of Open Access Journals (Sweden)

    G. Myhre

    2009-02-01

    Full Text Available A high-resolution global aerosol model (Oslo CTM2), driven by meteorological data and allowing a comparison with a variety of aerosol observations, is used to simulate radiative forcing (RF) of the direct aerosol effect. The model simulates all main aerosol components, including several secondary components such as nitrate and secondary organic carbon. The model reproduces the main chemical composition and size features observed during large aerosol campaigns. Although the chemical composition compares best with ground-based measurement over land for modelled sulphate, no systematic differences are found for other compounds. The modelled aerosol optical depth (AOD) is compared to remotely sensed data from AERONET ground stations and MODIS and MISR satellite retrievals. To gain confidence in the aerosol modelling, we have tested its ability to reproduce daily variability in the aerosol content, and it performs well in many regions; however, we also identified some locations where model improvements are needed. The annual mean regional pattern of AOD from the aerosol model is broadly similar to the AERONET and the satellite retrievals (mostly within 10–20%). We notice a significant improvement from MODIS Collection 4 to Collection 5 compared to AERONET data. Satellite derived estimates of the aerosol radiative effect over ocean for clear sky conditions differ significantly on regional scales (almost up to a factor of two), but also in the global mean. The Oslo CTM2 has an aerosol radiative effect close to the mean of the satellite derived estimates. We derive a radiative forcing (RF) of the direct aerosol effect of −0.35 W m−2 in our base case. Implementation of a simple approach to consider internal black carbon (BC) mixture results in a total RF of −0.28 W m−2. Our results highlight the importance of carbonaceous particles, producing stronger individual RF than considered in the recent IPCC estimate; however, net RF is less different

  3. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective in this study is to investigate the influence of the spatial resolution of the rainfall input on the model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation field. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through the comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that the model may need re-calibration of its parameters when different raingauge networks are used: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
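The Nash-Sutcliffe coefficient used for the hydrograph comparisons above has a standard definition, NSE = 1 − Σ(Q_obs − Q_sim)² / Σ(Q_obs − mean(Q_obs))². A minimal sketch with hypothetical discharge values (not data from the study):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, <= 0 when the model
    is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [3.2, 4.5, 6.1, 5.0, 4.2]   # hypothetical discharge observations (m3/s)
sim = [3.0, 4.8, 5.9, 5.2, 4.0]   # hypothetical simulated hydrograph
print(round(nash_sutcliffe(obs, sim), 3))  # prints 0.945
```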

  4. Testing Reproducibility in Earth Sciences

    Science.gov (United States)

    Church, M. A.; Dudill, A. R.; Frey, P.; Venditti, J. G.

    2017-12-01

    Reproducibility represents how closely the results of independent tests agree when undertaken using the same materials but different conditions of measurement, such as operator, equipment or laboratory. The concept of reproducibility is fundamental to the scientific method as it prevents the persistence of incorrect or biased results. Yet currently the production of scientific knowledge emphasizes rapid publication of previously unreported findings, a culture that has emerged from pressures related to hiring, publication criteria and funding requirements. Awareness and critique of the disconnect between how scientific research should be undertaken, and how it actually is conducted, has been prominent in biomedicine for over a decade, with the fields of economics and psychology more recently joining the conversation. The purpose of this presentation is to stimulate the conversation in earth sciences where, despite implicit evidence in widely accepted classifications, formal testing of reproducibility is rare. As a formal test of reproducibility, two sets of experiments were undertaken with the same experimental procedure, at the same scale, but in different laboratories. Using narrow, steep flumes and spherical glass beads, grain size sorting was examined by introducing fine sediment of varying size and quantity into a mobile coarse bed. The general setup was identical, including flume width and slope; however, there were some variations in the materials, construction and lab environment. Comparison of the results includes examination of the infiltration profiles, sediment mobility and transport characteristics. The physical phenomena were qualitatively reproduced but not quantitatively replicated. Reproduction of results encourages more robust research and reporting, and facilitates exploration of possible variations in data in various specific contexts. Following the lead of other fields, testing of reproducibility can be incentivized through changes to journal

  5. Tropical convection regimes in climate models: evaluation with satellite observations

    Science.gov (United States)

    Steiner, Andrea K.; Lackner, Bettina C.; Ringer, Mark A.

    2018-04-01

    High-quality observations are powerful tools for the evaluation of climate models towards improvement and reduction of uncertainty. Particularly at low latitudes, the most uncertain aspect lies in the representation of moist convection and interaction with dynamics, where rising motion is tied to deep convection and sinking motion to dry regimes. Since humidity is closely coupled with temperature feedbacks in the tropical troposphere, a proper representation of this region is essential. Here we demonstrate the evaluation of atmospheric climate models with satellite-based observations from Global Positioning System (GPS) radio occultation (RO), which feature high vertical resolution and accuracy in the troposphere to lower stratosphere. We focus on the representation of the vertical atmospheric structure in tropical convection regimes, defined by high updraft velocity over warm surfaces, and investigate atmospheric temperature and humidity profiles. Results reveal that some models do not fully capture convection regions, particularly over land, and only partly represent strong vertical wind classes. Models show large biases in tropical mean temperature of more than 4 K in the tropopause region and the lower stratosphere. Reasonable agreement with observations is given in mean specific humidity in the lower to mid-troposphere. In moist convection regions, models tend to underestimate moisture by 10 to 40 % over oceans, whereas in dry downdraft regions they overestimate moisture by 100 %. Our findings provide evidence that RO observations are a unique source of information, with a range of further atmospheric variables to be exploited, for the evaluation and advancement of next-generation climate models.

  6. Energetic protons at Mars: interpretation of SLED/Phobos-2 observations by a kinetic model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    2012-11-01

    Full Text Available Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars–solar wind interaction significantly modulated the Martian energetic particle environment.

  7. Energetic protons at Mars. Interpretation of SLED/Phobos-2 observations by a kinetic model

    International Nuclear Information System (INIS)

    Kallio, E.; Alho, M.; Jarvinen, R.; Dyadechkin, S.; McKenna-Lawlor, S.; Afonin, V.V.

    2012-01-01

    Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars-solar wind interaction significantly modulated the Martian energetic particle environment. (orig.)

  8. Energetic protons at Mars. Interpretation of SLED/Phobos-2 observations by a kinetic model

    Energy Technology Data Exchange (ETDEWEB)

    Kallio, E.; Alho, M.; Jarvinen, R.; Dyadechkin, S. [Finnish Meteorological Institute, Helsinki (Finland); McKenna-Lawlor, S. [Space Technology Ireland, Maynooth, Co. Kildare (Ireland); Afonin, V.V. [Space Research Institute, Moscow (Russian Federation)

    2012-07-01

    Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars-solar wind interaction significantly modulated the Martian energetic particle environment. (orig.)

  9. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  10. ITK: Enabling Reproducible Research and Open Science

    Directory of Open Access Journals (Sweden)

    Matthew Michael McCormick

    2014-02-01

    Full Text Available Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitates its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, 0.46.

  11. Observations and a model of undertow over the inner continental shelf

    Science.gov (United States)

    Lentz, Steven J.; Fewings, Melanie; Howd, Peter; Fredericks, Janet; Hathaway, Kent

    2008-01-01

    Onshore volume transport (Stokes drift) due to surface gravity waves propagating toward the beach can result in a compensating Eulerian offshore flow in the surf zone referred to as undertow. Observed offshore flows indicate that wave-driven undertow extends well offshore of the surf zone, over the inner shelves of Martha’s Vineyard, Massachusetts, and North Carolina. Theoretical estimates of the wave-driven offshore transport from linear wave theory and observed wave characteristics account for 50% or more of the observed offshore transport variance in water depths between 5 and 12 m, and reproduce the observed dependence on wave height and water depth. During weak winds, wave-driven cross-shelf velocity profiles over the inner shelf have maximum offshore flow (1–6 cm s−1) and vertical shear near the surface and weak flow and shear in the lower half of the water column. The observed offshore flow profiles do not resemble the parabolic profiles with maximum flow at middepth observed within the surf zone. Instead, the vertical structure is similar to the Stokes drift velocity profile but with the opposite direction. This vertical structure is consistent with a dynamical balance between the Coriolis force associated with the offshore flow and an along-shelf “Hasselmann wave stress” due to the influence of the earth’s rotation on surface gravity waves. The close agreement between the observed and modeled profiles provides compelling evidence for the importance of the Hasselmann wave stress in forcing oceanic flows. Summer profiles are more vertically sheared than either winter profiles or model profiles, for reasons that remain unclear.
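    The offshore return flow described above can be estimated from textbook linear wave theory: the depth-integrated onshore Stokes transport is Q = E/(ρc), with wave energy per unit area E = ρgH²/16 (H the significant wave height) and shallow-water phase speed c = √(gh). A minimal sketch, using illustrative values rather than the study's data:

```python
import math

def stokes_transport(Hsig, h, rho=1025.0, g=9.81):
    """Depth-integrated onshore Stokes transport (m^2/s) for shallow-water
    linear waves; E = rho*g*Hsig**2/16 uses the significant wave height."""
    E = rho * g * Hsig**2 / 16.0      # wave energy per unit area (J/m^2)
    c = math.sqrt(g * h)              # shallow-water phase speed (m/s)
    return E / (rho * c)

def undertow_velocity(Hsig, h):
    """Depth-averaged offshore return flow balancing the Stokes transport."""
    return stokes_transport(Hsig, h) / h

# 1 m significant waves in 10 m of water give an offshore flow of order
# a centimeter per second, the magnitude range quoted in the abstract.
u = undertow_velocity(1.0, 10.0)
```

    Doubling the wave height quadruples the transport, which is the kind of dependence on wave height and depth the observations are said to reproduce.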

  12. On prognostic models, artificial intelligence and censored observations.

    Science.gov (United States)

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics and few, if any, current modelling approaches based on statistical or artificial intelligence can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has led to interest in these models being purely academic in nature. This in turn has resulted in only a very small percentage of models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical community and claims, often based on inadequate evaluation, being made on their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation centric approach to their development is essential. In this paper we present such an evaluation centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcome of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.
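    The base paradigm the authors extend can be stated in a few lines. This sketch is plain majority-vote k-NN, not their Censored k-NN: the statistically enhanced metric and evidence-theoretic combination of retrieved exemplars are beyond this illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbour majority vote; `train` is a list of
    (feature_vector, label) pairs and `query` a feature vector."""
    neighbours = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy two-class data, invented for illustration.
train = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"),
         ((1.0, 1.0), "high"), ((0.9, 1.1), "high")]
label = knn_predict(train, (0.05, 0.1))  # nearest exemplars are "low"
```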

  13. Linking Geomechanical Models with Observations of Microseismicity during CCS Operations

    Science.gov (United States)

    Verdon, J.; Kendall, J.; White, D.

    2012-12-01

    During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance to guarantee secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes. This is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models. Such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in field-scale mechanical models have decameter scales, while the rupture size for microseismic events is typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger scale faults have been identified, they can be included explicitly in the geomechanical model. Where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities. This can be done in a number of ways, including the utilization of measures such as closeness of the stress state to predetermined failure criteria, either for planes with a defined orientation (the Mohr-Coulomb criteria) for
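    As one concrete example of such a failure-proximity measure, the Mohr-Coulomb criterion for a plane of defined orientation can be evaluated element by element. The sketch below uses illustrative stresses and friction coefficient, not values from the study, and shows how raising pore pressure during injection moves a plane toward failure:

```python
import math

def coulomb_stress_ratio(sigma1, sigma3, pore_p, theta_deg,
                         mu=0.6, cohesion=0.0):
    """Proximity to Mohr-Coulomb failure on a plane at angle theta_deg to
    the sigma1 axis (2-D, compression positive).  Returns shear stress
    minus frictional strength: a value >= 0 implies predicted failure."""
    theta = math.radians(theta_deg)
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    sigma_n = mean + dev * math.cos(2 * theta)   # normal stress on plane (Pa)
    tau = dev * math.sin(2 * theta)              # shear stress on plane (Pa)
    return tau - cohesion - mu * (sigma_n - pore_p)

# Raising pore pressure (as during injection) lowers effective normal
# stress and pushes the plane across the failure envelope.
before = coulomb_stress_ratio(60e6, 30e6, 10e6, 60.0)  # stable (< 0)
after = coulomb_stress_ratio(60e6, 30e6, 25e6, 60.0)   # failing (> 0)
```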

  14. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
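    For reference, the NFW rotation curve being fit follows directly from the enclosed mass of the NFW density profile, M(r) = 4πρ₀rs³[ln(1 + r/rs) − (r/rs)/(1 + r/rs)]. A minimal sketch with hypothetical halo parameters, not those fit in the paper:

```python
import math

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_nfw(r_kpc, rho0=6.0e6, rs=10.0):
    """Circular velocity (km/s) of an NFW halo with characteristic density
    rho0 (Msun/kpc^3) and scale radius rs (kpc); illustrative values."""
    x = r_kpc / rs
    m_enc = 4.0 * math.pi * rho0 * rs**3 * (math.log(1.0 + x) - x / (1.0 + x))
    return math.sqrt(G * m_enc / r_kpc)

# NFW curves rise steeply at small radii (V roughly ~ r^0.5), which is
# the shape tension with slowly rising LSB rotation curves noted above.
curve = [v_nfw(r) for r in (1.0, 2.0, 5.0, 10.0, 20.0)]
```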

  15. Bad Behavior: Improving Reproducibility in Behavior Testing.

    Science.gov (United States)

    Andrews, Anne M; Cheng, Xinyi; Altieri, Stefanie C; Yang, Hongyan

    2018-01-24

    Systems neuroscience research is increasingly possible through the use of integrated molecular and circuit-level analyses. These studies depend on the use of animal models and, in many cases, molecular and circuit-level analyses associated with genetic, pharmacologic, epigenetic, and other types of environmental manipulations. We illustrate typical pitfalls resulting from poor validation of behavior tests. We describe experimental designs and enumerate controls needed to improve reproducibility in investigating and reporting of behavioral phenotypes.

  16. S-AMP for non-linear observation models

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2015-01-01

    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), to be able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement...

  17. A Network Model of Observation and Imitation of Speech

    Science.gov (United States)

    Mashal, Nira; Solodkin, Ana; Dick, Anthony Steven; Chen, E. Elinor; Small, Steven L.

    2012-01-01

    Much evidence has now accumulated demonstrating and quantifying the extent of shared regional brain activation for observation and execution of speech. However, the nature of the actual networks that implement these functions, i.e., both the brain regions and the connections among them, and the similarities and differences across these networks has not been elucidated. The current study aims to characterize formally a network for observation and imitation of syllables in the healthy adult brain and to compare their structure and effective connectivity. Eleven healthy participants observed or imitated audiovisual syllables spoken by a human actor. We constructed four structural equation models to characterize the networks for observation and imitation in each of the two hemispheres. Our results show that the network models for observation and imitation comprise the same essential structure but differ in important ways from each other (in both hemispheres) based on connectivity. In particular, our results show that the connections from posterior superior temporal gyrus and sulcus to ventral premotor, ventral premotor to dorsal premotor, and dorsal premotor to primary motor cortex in the left hemisphere are stronger during imitation than during observation. The first two connections are implicated in a putative dorsal stream of speech perception, thought to involve translating auditory speech signals into motor representations. Thus, the current results suggest that flow of information during imitation, starting at the posterior superior temporal cortex and ending in the motor cortex, enhances input to the motor cortex in the service of speech execution. PMID:22470360

  18. Modeling of oxygen transport and cellular energetics explains observations on in vivo cardiac energy metabolism.

    Directory of Open Access Journals (Sweden)

    Daniel A Beard

    2006-09-01

    Full Text Available Observations on the relationship between cardiac work rate and the levels of energy metabolites adenosine triphosphate (ATP), adenosine diphosphate (ADP), and phosphocreatine (CrP) have not been satisfactorily explained by theoretical models of cardiac energy metabolism. Specifically, the in vivo stability of ATP, ADP, and CrP levels in response to changes in work and respiratory rate has eluded explanation. Here a previously developed model of mitochondrial oxidative phosphorylation, which was developed based on data obtained from isolated cardiac mitochondria, is integrated with a spatially distributed model of oxygen transport in the myocardium to analyze data obtained from several laboratories over the past two decades. The model includes the components of the respiratory chain, the F0F1-ATPase, adenine nucleotide translocase, and the mitochondrial phosphate transporter at the mitochondrial level; adenylate kinase, creatine kinase, and ATP consumption in the cytoplasm; and oxygen transport between capillaries, interstitial fluid, and cardiomyocytes. The integrated model is able to reproduce experimental observations on ATP, ADP, CrP, and inorganic phosphate levels in canine hearts over a range of workload and during coronary hypoperfusion and predicts that cytoplasmic inorganic phosphate level is a key regulator of the rate of mitochondrial respiration at workloads for which the rate of cardiac oxygen consumption is less than or equal to approximately 12 μmol per minute per gram of tissue. At work rates corresponding to oxygen consumption higher than 12 μmol min−1 g−1, model predictions deviate from the experimental data, indicating that at high work rates, additional regulatory mechanisms that are not currently incorporated into the model may be important. Nevertheless, the integrated model explains metabolite levels observed at low to moderate workloads and the changes in metabolite levels and tissue oxygenation observed during graded

  19. Deployment and Evaluation of an Observations Data Model

    Science.gov (United States)

    Horsburgh, J. S.; Tarboton, D. G.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.

    2007-12-01

    Environmental observations are fundamental to hydrology and water resources, and the way these data are organized and manipulated either enables or inhibits the analyses that can be performed. The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. This includes an Observations Data Model (ODM) that provides a new and consistent format for the storage and retrieval of environmental observations in a relational database designed to facilitate integrated analysis of large datasets collected by multiple investigators. Within this data model, observations are stored with sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to useable information. The design is based upon a relational database model that exposes each single observation as a record, taking advantage of the capability in relational database systems for querying based upon data values and enabling cross dimension data retrieval and analysis. This data model has been deployed, as part of the HIS Server, at the WATERS Network test bed observatories across the U.S where it serves as a repository for real time data in the observatory information system. The ODM holds the data that is then made available to investigators and the public through web services and the Data Access System for Hydrology (DASH) map based interface. In the WATERS Network test bed settings the ODM has been used to ingest, analyze and publish data from a variety of sources and disciplines. This paper will present an evaluation of the effectiveness of this initial deployment and the revisions that are being instituted to address shortcomings. 
The ODM represents a new, systematic way for hydrologists, scientists, and engineers to organize and share their data and thereby facilitate a fuller integrated understanding of water resources based on
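    The one-record-per-observation design described above maps naturally onto a relational database. The sketch below is a drastically simplified, hypothetical two-table version of the idea (the actual ODM schema has many more tables and metadata fields), showing the value-based, cross-dimension querying the abstract highlights:

```python
import sqlite3

# Hypothetical, minimal ODM-style schema: each observation is one row,
# joined to site metadata that makes it unambiguously interpretable.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sites (SiteID INTEGER PRIMARY KEY, SiteName TEXT,
                    Latitude REAL, Longitude REAL);
CREATE TABLE DataValues (ValueID INTEGER PRIMARY KEY, DataValue REAL,
                         LocalDateTime TEXT, VariableName TEXT,
                         UnitsName TEXT, SiteID INTEGER REFERENCES Sites);
""")
con.execute("INSERT INTO Sites VALUES (1, 'Logan River', 41.74, -111.83)")
con.executemany(
    "INSERT INTO DataValues VALUES (?, ?, ?, 'Discharge', 'm^3/s', 1)",
    [(1, 3.2, "2007-06-01T00:00"), (2, 2.9, "2007-06-01T00:30")])

# Query on the data values themselves, across site and time dimensions.
rows = con.execute("""SELECT s.SiteName, d.LocalDateTime, d.DataValue
                      FROM DataValues d JOIN Sites s USING (SiteID)
                      WHERE d.VariableName = 'Discharge'
                        AND d.DataValue > 3.0""").fetchall()
```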

  20. Linear system identification via backward-time observer models

    Science.gov (United States)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  1. June 13, 2013 U.S. East Coast Meteotsunami: Comparing a Numerical Model With Observations

    Science.gov (United States)

    Wang, D.; Becker, N. C.; Weinstein, S.; Whitmore, P.; Knight, W.; Kim, Y.; Bouchard, R. H.; Grissom, K.

    2013-12-01

    On June 13, 2013, a tsunami struck the U.S. East Coast and caused several reported injuries. This tsunami occurred after a derecho moved offshore from North America into the Atlantic Ocean. The presence of this storm, the lack of a seismic source, and the fact that tsunami arrival times at tide stations and deep ocean-bottom pressure sensors cannot be attributed to a 'point-source' suggest this tsunami was caused by atmospheric forces, i.e., a meteotsunami. In this study we attempt to reproduce the observed phenomenon using a numerical model with idealized atmospheric pressure forcing resembling the propagation of the observed barometric anomaly. The numerical model was able to capture some observed features of the tsunami at some tide stations, including the time-lag between the time of pressure jump and the time of tsunami arrival. The model also captures the response at a deep ocean-bottom pressure gauge (DART 44402), including the primary wave and the reflected wave. There are two components of the oceanic response to the propagating pressure anomaly: the inverted barometer response and the dynamic response. We find that the dynamic response over the deep ocean is much smaller than the inverted barometer response. The time lag between the pressure jump and tsunami arrival at tide stations is due to the dynamic response: waves generated and/or reflected at the shelf-break propagate shoreward and amplify due to the shoaling effect. The evolution of the derecho over the deep ocean (propagation direction and intensity) is not well defined, however, because of the lack of data, so the forcing used for this study is somewhat speculative. Better definition of the pressure anomaly through increased observation or high resolution atmospheric models would improve meteotsunami forecast capabilities.
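    The static component of the two responses can be written down directly: the inverted barometer sea-level signal is η = −Δp/(ρg), so a +1 hPa pressure anomaly depresses the sea surface by roughly 1 cm. A minimal sketch with an illustrative pressure jump, not a value taken from the study:

```python
def inverted_barometer(dp_hpa, rho=1025.0, g=9.81):
    """Static sea-level response (m) to a surface pressure anomaly dp (hPa):
    eta = -dp / (rho * g).  The dynamic (wave) response rides on top."""
    return -dp_hpa * 100.0 / (rho * g)

# A few-hPa squall-line jump yields only a few centimeters of static
# signal; it is shoaling amplification of the dynamic response near the
# shelf break that produces the observed coastal wave.
eta = inverted_barometer(3.0)  # illustrative 3 hPa jump
```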

  2. Tropical convection regimes in climate models: evaluation with satellite observations

    Directory of Open Access Journals (Sweden)

    A. K. Steiner

    2018-04-01

    Full Text Available High-quality observations are powerful tools for the evaluation of climate models towards improvement and reduction of uncertainty. Particularly at low latitudes, the most uncertain aspect lies in the representation of moist convection and interaction with dynamics, where rising motion is tied to deep convection and sinking motion to dry regimes. Since humidity is closely coupled with temperature feedbacks in the tropical troposphere, a proper representation of this region is essential. Here we demonstrate the evaluation of atmospheric climate models with satellite-based observations from Global Positioning System (GPS) radio occultation (RO), which feature high vertical resolution and accuracy in the troposphere to lower stratosphere. We focus on the representation of the vertical atmospheric structure in tropical convection regimes, defined by high updraft velocity over warm surfaces, and investigate atmospheric temperature and humidity profiles. Results reveal that some models do not fully capture convection regions, particularly over land, and only partly represent strong vertical wind classes. Models show large biases in tropical mean temperature of more than 4 K in the tropopause region and the lower stratosphere. Reasonable agreement with observations is given in mean specific humidity in the lower to mid-troposphere. In moist convection regions, models tend to underestimate moisture by 10 to 40 % over oceans, whereas in dry downdraft regions they overestimate moisture by 100 %. Our findings provide evidence that RO observations are a unique source of information, with a range of further atmospheric variables to be exploited, for the evaluation and advancement of next-generation climate models.

  3. Observing the observer (I): meta-bayesian models of learning and decision-making.

    Science.gov (United States)

    Daunizeau, Jean; den Ouden, Hanneke E M; Pessiglione, Matthias; Kiebel, Stefan J; Stephan, Klaas E; Friston, Karl J

    2010-12-14

    In this paper, we present a generic approach that can be used to infer how subjects make optimal decisions under uncertainty. This approach induces a distinction between a subject's perceptual model, which underlies the representation of a hidden "state of affairs" and a response model, which predicts the ensuing behavioural (or neurophysiological) responses to those inputs. We start with the premise that subjects continuously update a probabilistic representation of the causes of their sensory inputs to optimise their behaviour. In addition, subjects have preferences or goals that guide decisions about actions given the above uncertain representation of these hidden causes or state of affairs. From a Bayesian decision theoretic perspective, uncertain representations are so-called "posterior" beliefs, which are influenced by subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden) state of affair. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. Critically, this enables one to "observe the observer", i.e. identify (context- or subject-dependent) prior beliefs and utility-functions using psychophysical or neurophysiological measures. In this paper, we describe the main theoretical components of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions). In a companion paper ('Observing the observer (II): deciding when to decide'), we describe a concrete implementation of it and demonstrate its utility by applying it to simulated and real reaction time data from an associative learning task.
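    The perceptual-model/response-model split can be illustrated with a toy discrete example: a Bayesian belief update over hidden states, followed by a minimum-expected-loss decision. The states, likelihoods, and loss values below are invented purely for illustration and are not taken from the paper:

```python
def posterior(prior, likelihood):
    """Posterior over discrete hidden states: normalise prior x likelihood."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def optimal_decision(post, loss):
    """Action minimising posterior expected loss; loss[action][state]."""
    expected = {a: sum(post[s] * loss[a][s] for s in post) for a in loss}
    return min(expected, key=expected.get)

# Toy perceptual model: two hidden states, noisy evidence favouring 'A'.
prior = {"A": 0.5, "B": 0.5}
likelihood = {"A": 0.8, "B": 0.3}          # P(sensory input | state)
loss = {"act_A": {"A": 0.0, "B": 1.0},     # cost of each action per state
        "act_B": {"A": 1.0, "B": 0.0}}
post = posterior(prior, likelihood)
choice = optimal_decision(post, loss)       # the "response model" output
```

    "Observing the observer" then amounts to inverting this mapping: given observed choices, infer which priors and loss functions the subject must have been using.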

  4. Observing the observer (I): meta-bayesian models of learning and decision-making.

    Directory of Open Access Journals (Sweden)

    Jean Daunizeau

    2010-12-01

    Full Text Available In this paper, we present a generic approach that can be used to infer how subjects make optimal decisions under uncertainty. This approach induces a distinction between a subject's perceptual model, which underlies the representation of a hidden "state of affairs" and a response model, which predicts the ensuing behavioural (or neurophysiological) responses to those inputs. We start with the premise that subjects continuously update a probabilistic representation of the causes of their sensory inputs to optimise their behaviour. In addition, subjects have preferences or goals that guide decisions about actions given the above uncertain representation of these hidden causes or state of affairs. From a Bayesian decision theoretic perspective, uncertain representations are so-called "posterior" beliefs, which are influenced by subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden) state of affair. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. Critically, this enables one to "observe the observer", i.e. identify (context- or subject-dependent) prior beliefs and utility-functions using psychophysical or neurophysiological measures. In this paper, we describe the main theoretical components of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions). In a companion paper ('Observing the observer (II): deciding when to decide'), we describe a concrete implementation of it and demonstrate its utility by applying it to simulated and real reaction time data from an associative learning task.

  5. Three-dimensional Kinetic Pulsar Magnetosphere Models: Connecting to Gamma-Ray Observations

    Science.gov (United States)

    Kalapotharakos, Constantinos; Brambilla, Gabriele; Timokhin, Andrey; Harding, Alice K.; Kazanas, Demosthenes

    2018-04-01

    We present three-dimensional (3D) global kinetic pulsar magnetosphere models, where the charged particle trajectories and the corresponding electromagnetic fields are treated self-consistently. For our study, we have developed a Cartesian 3D relativistic particle-in-cell code that incorporates radiation reaction forces. We describe our code and discuss the related technical issues, treatments, and assumptions. Injecting particles up to large distances in the magnetosphere, we apply arbitrarily low to high particle injection rates, and obtain an entire spectrum of solutions from close to the vacuum-retarded dipole to close to the force-free (FF) solution, respectively. For high particle injection rates (close to FF solutions), significant accelerating electric field components are confined only near the equatorial current sheet outside the light cylinder. A judicious interpretation of our models allows the particle emission to be calculated, and consequently, the corresponding realistic high-energy sky maps and spectra to be derived. Using model parameters that cover the entire range of spin-down powers of Fermi young and millisecond pulsars, we compare the corresponding model γ-ray light curves, cutoff energies, and total γ-ray luminosities with those observed by Fermi to discover a dependence of the particle injection rate, F, on the spin-down power, Ė, indicating an increase of F with Ė. Our models, guided by Fermi observations, provide field structures and particle distributions that are not only consistent with each other but also able to reproduce a broad range of the observed γ-ray phenomenologies of both young and millisecond pulsars.

  6. Intercomparison of middle-atmospheric wind in observations and models

    Directory of Open Access Journals (Sweden)

    R. Rüfenacht

    2018-04-01

    Wind profile information throughout the entire upper stratosphere and lower mesosphere (USLM) is important for the understanding of atmospheric dynamics but became available only recently, thanks to developments in remote sensing techniques and modelling approaches. However, as wind measurements from these altitudes are rare, such products have generally not yet been validated with (other) observations. This paper presents the first long-term intercomparison of wind observations in the USLM by co-located microwave radiometer and lidar instruments at Andenes, Norway (69.3° N, 16.0° E). Good correspondence has been found at all altitudes for both horizontal wind components for nighttime as well as daylight conditions. Biases are mostly within the random errors and do not exceed 5–10 m s⁻¹, which is less than 10 % of the typically encountered wind speeds. Moreover, comparisons of the observations with the major reanalyses and models covering this altitude range are shown, in particular with the recently released ERA5, ECMWF's first reanalysis to cover the whole USLM region. The agreement between models and observations is very good in general, but temporally limited occurrences of pronounced discrepancies (up to 40 m s⁻¹) exist. In the article's Appendix the possibility of obtaining nighttime wind information about the mesopause region by means of microwave radiometry is investigated.
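
    The kind of bias-versus-random-error comparison described above can be sketched numerically. In the following Python snippet the data are synthetic: the 30 m/s background wind, the instrument noise levels, and the sample size are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical co-located wind series (m/s) from a microwave radiometer and
# a lidar at one altitude; the shared "truth" and noise levels are made up.
rng = np.random.default_rng(0)
truth = 30.0 + 10.0 * np.sin(np.linspace(0, 4 * np.pi, 200))
radiometer = truth + rng.normal(0.0, 4.0, truth.size)   # instrument noise
lidar = truth + rng.normal(0.0, 3.0, truth.size)

diff = radiometer - lidar
bias = diff.mean()                      # systematic offset between instruments
rmse = np.sqrt((diff ** 2).mean())      # combined random scatter
corr = np.corrcoef(radiometer, lidar)[0, 1]

print(f"bias = {bias:+.2f} m/s, RMSE = {rmse:.2f} m/s, r = {corr:.3f}")
```

    With independent noise, the bias stays well inside the random scatter, which is the same qualitative picture the intercomparison reports.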

  7. Obs4MIPS: Satellite Observations for Model Evaluation

    Science.gov (United States)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.

  8. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2012-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m∙min-1 cutting speed and 0......). Process reproducibility was assessed as the ability of different operators to ensure a consistent rating of individual lubricants. Absolute average values as well as experimental standard deviations of the evaluation parameters were calculated, and uncertainty budgeting was performed. Results document...... a built-up edge occurrence hindering a robust evaluation of cutting fluid performance, if the data evaluation is based on surface finish only. Measurements of hole geometry provide documentation to recognize systematic error distorting the performance test....

  9. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2014-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m•min−1 cutting speed and 0......). Process reproducibility was assessed as the ability of different operators to ensure a consistent rating of individual lubricants. Absolute average values as well as experimental standard deviations of the evaluation parameters were calculated, and uncertainty budgeting was performed. Results document...... a built–up edge occurrence hindering a robust evaluation of cutting fluid performance, if the data evaluation is based on surface finish only. Measurements of hole geometry provide documentation to recognise systematic error distorting the performance test....

  10. Southeast Atmosphere Studies: learning from model-observation syntheses

    Directory of Open Access Journals (Sweden)

    J. Mao

    2018-02-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and

  11. Southeast Atmosphere Studies: learning from model-observation syntheses

    Science.gov (United States)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-02-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  12. Modelling and observing urban climate in the Netherlands

    International Nuclear Information System (INIS)

    Van Hove, B.; Steeneveld, G.J.; Heusinkveld, B.; Holtslag, B.; Jacobs, C.; Ter Maat, H.; Elbers, J.; Moors, E.

    2011-06-01

    The main aims of the present study are: (1) to evaluate the performance of two well-known mesoscale NWP (numerical weather prediction) models coupled to UCMs (Urban Canopy Models), and (2) to develop a proper measurement strategy for obtaining meteorological data that can be used in model evaluation studies. We chose the mesoscale models WRF (Weather Research and Forecasting Model) and RAMS (Regional Atmospheric Modeling System), respectively, because the partners in the present project have large expertise with respect to these models. In addition, WRF and RAMS have been successfully used in the meteorology and climate research communities for various purposes, including weather prediction and land-atmosphere interaction research. Recently, state-of-the-art UCMs were embedded within the land surface schemes of the respective models, in order to better represent the exchange of heat, momentum, and water vapour in the urban environment. Key questions addressed here are: What is the general model performance with respect to the urban environment? How can useful observational data be obtained that allow sensible validation and further parameterization of the models? And can the models be easily modified to simulate the urban climate under Dutch climatic conditions, urban configuration and morphology? Chapter 2 reviews the available Urban Canopy Models; we discuss their theoretical basis, the different representations of the urban environment, the required input and the output. Much of the information was obtained from the Urban Surface Energy Balance: Land Surface Scheme Comparison project (PILPS URBAN; PILPS stands for Project for Inter-comparison of Land-Surface Parameterization Schemes). This project started in March 2008 and was coordinated by the Department of Geography, King's College London. In order to test the performance of our models we participated in this project. Chapter 3 discusses the main results of the first phase of PILPS URBAN. A first

  13. CrowdWater - Can people observe what models need?

    Science.gov (United States)

    van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.

    2017-12-01

    CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. 
This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain
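
    The conversion of a streamflow record into stream level classes can be sketched as follows. The synthetic flow series, the stage-discharge exponent, and the exact percentile boundaries below are illustrative assumptions, not the study's actual choices; the skew of the boundaries toward high levels mirrors the finding that high-level class boundaries were most informative.

```python
import numpy as np

# Hypothetical daily streamflow (m^3/s); in the study, observed series for
# 100 US catchments were converted to stream level classes.
rng = np.random.default_rng(1)
q = np.exp(rng.normal(1.0, 0.8, 365))      # synthetic, roughly log-normal flow

# Stage rises roughly with q**0.4 for a wide channel (illustrative exponent).
stage = q ** 0.4

# Five classes from four percentile boundaries, skewed toward high levels.
bounds = np.percentile(stage, [50, 80, 95, 99])
classes = np.digitize(stage, bounds)        # values 0..4 -> five level classes

print(np.bincount(classes, minlength=5))    # days falling in each class
```

    A class series like this, rather than the continuous hydrograph, is what would then be compared against simulated stream levels when constraining a runoff model such as HBV.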

  14. Observational constraints from models of close binary evolution

    International Nuclear Information System (INIS)

    Greve, J.P. de; Packet, W.

    1984-01-01

    The evolution of a system of 9 solar masses + 5.4 solar masses is computed from Zero Age Main Sequence through an early case B of mass exchange, up to the second phase of mass transfer after core helium burning. Both components are calculated simultaneously. The evolution is divided into several physically different phases. The characteristics of the models in each of these phases are transformed into corresponding 'observable' quantities. The outlook of the system for photometric observations is discussed, for an idealized case. The influence of the mass of the loser and the initial mass ratio is considered. (Auth.)

  15. Observations and Models of Highly Intermittent Phytoplankton Distributions

    Science.gov (United States)

    Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu

    2014-01-01

    The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow for measurement of phytoplankton data at greater resolution, displaying high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misrepresent the reality of planktonic ecosystems, especially at the microscale level. To account for the intermittency of variables, in this work a new modelling approach to the planktonic ecosystem is applied, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. Also, we have found a critical value of the variance of the overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea about the importance of the fluctuating parts of model variables and about when to use the closure approach. Comparisons of plots of mean versus standard deviation of phytoplankton at different depths, obtained using this new approach, with real observations show good agreement. PMID:24787740
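
    The motivation for a closure approach, that the mean of a nonlinear rate over a fluctuating field differs from the rate evaluated at the mean, can be illustrated with a minimal numerical example. This uses Michaelis-Menten uptake over a hypothetical fluctuating nutrient field; it is a Jensen-inequality demonstration, not the authors' nutrient-phytoplankton closure model.

```python
import numpy as np

def uptake(n, vmax=1.0, k=0.5):
    """Michaelis-Menten nutrient uptake rate (arbitrary units, concave in n)."""
    return vmax * n / (k + n)

rng = np.random.default_rng(2)
n_mean = 1.0
for sigma in (0.0, 0.3, 0.8):                      # fluctuation strength
    n = np.clip(rng.normal(n_mean, sigma, 100_000), 0.0, None)
    mean_of_uptake = uptake(n).mean()              # what the ecosystem "feels"
    uptake_of_mean = uptake(n.mean())              # what a mean-field model uses
    print(f"sigma={sigma:.1f}: <f(N)>={mean_of_uptake:.3f}, "
          f"f(<N>)={uptake_of_mean:.3f}")
```

    Because the uptake curve is concave, the gap grows with the variance of the fluctuations, which is why a mean-field (non-closure) model only agrees with the closure model below some critical variance.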

  16. GeoTrust Hub: A Platform For Sharing And Reproducing Geoscience Applications

    Science.gov (United States)

    Malik, T.; Tarboton, D. G.; Goodall, J. L.; Choi, E.; Bhatt, A.; Peckham, S. D.; Foster, I.; Ton That, D. H.; Essawy, B.; Yuan, Z.; Dash, P. K.; Fils, G.; Gan, T.; Fadugba, O. I.; Saxena, A.; Valentic, T. A.

    2017-12-01

    Recent requirements of scholarly communication emphasize the reproducibility of scientific claims. Text-based research papers are considered poor mediums to establish reproducibility. Papers must be accompanied by "research objects", aggregations of digital artifacts that together with the paper provide an authoritative record of a piece of research. We will present GeoTrust Hub (http://geotrusthub.org), a platform for creating, sharing, and reproducing reusable research objects. GeoTrust Hub provides tools for scientists to create `geounits'--reusable research objects. Geounits are self-contained, annotated, and versioned containers that describe and package computational experiments in an efficient and light-weight manner. Geounits can be shared on public repositories such as HydroShare and FigShare, and also, using their respective APIs, reproduced on provisioned clouds. The latter feature enables science applications to have a lifetime beyond sharing, wherein they can be independently verified and trust be established as they are repeatedly reused. Through research use cases from several geoscience laboratories across the United States, we will demonstrate how the tools provided by GeoTrust Hub, along with HydroShare as its public repository for geounits, are advancing the state of reproducible research in the geosciences. For each use case, we will address different computational reproducibility requirements. Our first use case will be an example of setup reproducibility, which enables a scientist to set up and reproduce an output from a model with complex configuration and development environments. Our second use case will be an example of algorithm/data reproducibility, wherein a shared model or dataset can be substituted with an alternative one to verify model output results, and finally an example of interactive reproducibility, in which an experiment is dependent on specific versions of data to produce the result.
Toward this we will use software and data

  17. Fine resolution atmospheric sulfate model driven by operational meteorological data: Comparison with observations

    International Nuclear Information System (INIS)

    Benkovitz, C.M.; Schwartz, S.E.; Berkowitz, C.M.; Easter, R.C.

    1993-09-01

    The hypothesis that anthropogenic sulfur aerosol influences clear-sky and cloud albedo and can thus influence climate has been advanced by several investigators; current global-average climate forcing is estimated to be of comparable magnitude, but opposite sign, to longwave forcing by anthropogenic greenhouse gases. The high space and time variability of sulfate concentrations and column aerosol burdens has been established by observational data; however, geographic and time coverage provided by data from surface monitoring networks is very limited. Consistent regional and global estimates of sulfate aerosol loading, and the contributions to this loading from different sources, can be obtained only by modeling studies. Here we describe a sub-hemispheric to global-scale Eulerian transport and transformation model for atmospheric sulfate and its precursors, driven by operational meteorological data, and report results of calculations for October 1986 for the North Atlantic and adjacent continental regions. The model, which is based on the Global Chemistry Model, uses meteorological data from the 6-hour forecast model of the European Centre for Medium-Range Weather Forecasts to calculate transport and transformation of sulfur emissions. Time- and location-dependent dry deposition velocities were estimated using the methodology of Wesely and colleagues. Chemical reactions include gaseous oxidation of SO2 and DMS by OH, and aqueous oxidation of SO2 by H2O2 and O3. Anthropogenic emissions were from the NAPAP and EMEP 1985 inventories and biogenic emissions were based on Bates et al. Calculated sulfate concentrations and column burdens exhibit high variability on spatial scales of hundreds of km and temporal scales of days. Calculated daily average sulfate concentrations closely reproduce observed concentrations at locations widespread over the model domain.
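
    The emission-oxidation-deposition chain at the core of such a model can be caricatured as a first-order two-box system: SO2 is emitted, oxidized to sulfate, and both species are removed by deposition. The rate constants below are round illustrative numbers, not those of the model described above.

```python
# Illustrative first-order box model of the SO2 -> sulfate chain.
k_ox = 1.0 / 48.0        # SO2 oxidation rate, 1/h (~2-day chemical lifetime)
k_dep_so2 = 1.0 / 60.0   # SO2 dry deposition rate, 1/h
k_dep_so4 = 1.0 / 120.0  # sulfate removal rate, 1/h
emis = 1.0               # SO2 emission, concentration units per hour

dt, hours = 0.1, 24 * 30
so2, so4 = 0.0, 0.0
for _ in range(int(hours / dt)):          # forward-Euler time stepping
    d_so2 = emis - (k_ox + k_dep_so2) * so2
    d_so4 = k_ox * so2 - k_dep_so4 * so4
    so2 += dt * d_so2
    so4 += dt * d_so4

# Steady state: so2* = E/(k_ox + k_dep_so2), so4* = k_ox*so2*/k_dep_so4
print(f"SO2 ~ {so2:.2f}, sulfate ~ {so4:.2f}")
```

    The full model adds transport and time-varying meteorology to this chain, which is what produces the day-to-day and hundreds-of-km variability noted above.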

  18. New Cosmological Model and Its Implications on Observational Data Interpretation

    Directory of Open Access Journals (Sweden)

    Vlahovic Branislav

    2013-09-01

    The paradigm of ΛCDM cosmology works impressively well and with the concept of inflation it explains the universe after the time of decoupling. However, there are still a few concerns: after much effort there is no detection of dark matter, and there are significant problems in the theoretical description of dark energy. We consider a variant of the cosmological spherical shell model, within the FRW formalism, and compare it with the standard ΛCDM model. We show that our new topological model satisfies cosmological principles and is consistent with all observable data, but that it may require new interpretation for some data. We consider constraints imposed on the model by the supernova luminosity distance and CMB data, for instance the range for the size and the allowed thickness of the shell. In this model the propagation of light is confined along the shell, with the consequence that the observed CMB originated from one point or a limited region of space. This allows the uniformity of the CMB to be interpreted without an inflation scenario. In addition, this removes any constraints on the uniformity of the universe at the early stage and opens the possibility that the universe was not uniform and that the creation of galaxies and large structures is due to inhomogeneities that originated in the Big Bang.

  19. Modelling shear wave splitting observations from Wellington, New Zealand

    Science.gov (United States)

    Marson-Pidgeon, Katrina; Savage, Martha K.

    2004-05-01

    Frequency-dependent anisotropy was previously observed at the permanent broad-band station SNZO, South Karori, Wellington, New Zealand. This has important implications for the interpretation of measurements in other subduction zones and hence for our understanding of mantle flow. This motivated us to make further splitting measurements using events recorded since the previous study and to develop a new modelling technique. Thus, in this study we have made 67 high-quality shear wave splitting measurements using events recorded at the SNZO station spanning a 10-yr period. This station is the only one operating in New Zealand for longer than 2 yr. Using a combination of teleseismic SKS and S phases and regional ScS phases provides good azimuthal coverage, allowing us to undertake detailed modelling. The splitting measurements indicate that in addition to the frequency dependence observed previously at this station, there are also variations with propagation and initial polarization directions. The fast polarization directions range between 2° and 103°, and the delay times range between 0.75 s and 3.05 s. These ranges are much larger than observed previously at SNZO or elsewhere in New Zealand. Because of the observed frequency dependence we measure the dominant frequency of the phase used to make the splitting measurement, and take this into account in the modelling. We fit the fast polarization directions fairly well with a two-layer anisotropic model with horizontal axes of symmetry. However, such a model does not fit the delay times or explain the frequency dependence. We have developed a new inversion method which allows for an inclined axis of symmetry in each of the two layers. However, applying this method to SNZO does not significantly improve the fit over a two-layer model with horizontal symmetry axes. We are therefore unable to explain the frequency dependence or large variation in delay time values with multiple horizontal layers of anisotropy, even

  20. Cloud condensation nuclei in Western Colorado: Observations and model predictions

    Science.gov (United States)

    Ward, Daniel Stewart

    Variations in the warm cloud-active portion of atmospheric aerosols, or cloud condensation nuclei (CCN), have been shown to impact cloud droplet number concentration and subsequently cloud and precipitation processes. This issue carries special significance in western Colorado, where a significant portion of the region's water resources is supplied by precipitation from winter season, orographic clouds, which are particularly sensitive to variations in CCN. Temporal and spatial variations in CCN in western Colorado were investigated using a combination of observations and a new method for modeling CCN. As part of the Inhibition of Snowfall by Pollution Aerosols (ISPA-III) field campaign, total particle and CCN number concentration were measured for a 24-day period in Mesa Verde National Park, climatologically upwind of the San Juan Mountains. These data were combined with CCN observations from Storm Peak Lab (SPL) in northwestern Colorado and from the King Air platform, flying north to south along the Western Slope. Altogether, the sampled aerosols were characteristic of a rural continental environment and the cloud-active portion varied slowly in time, and little in space. Estimates of the κ hygroscopicity parameter indicated consistently low aerosol hygroscopicity typical of organic aerosol species. The modeling approach included the addition of prognostic CCN to the Regional Atmospheric Modeling System (RAMS). The RAMS droplet activation scheme was altered using parcel model simulations to include variations in aerosol hygroscopicity, represented by κ. Analysis of the parcel model output and a supplemental sensitivity study showed that model CCN will be sensitive to changes in aerosol hygroscopicity, but only for conditions of low supersaturation or small particle sizes. Aerosol number, size distribution median radius, and hygroscopicity (represented by the κ parameter) in RAMS were constrained by nudging to forecasts of these quantities from the Weather

  1. Observational constraints on tachyonic chameleon dark energy model

    Science.gov (United States)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has been recently shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and Baryon Acoustic Oscillations to place constraints on the model parameters. In our analysis we consider, in general, exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.
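
    A minimal sketch of how such observational constraints are placed: grid over a model parameter and minimize the chi-square of the SN Ia distance modulus. Here a flat ΛCDM expansion history and synthetic data stand in for the tachyonic chameleon model, purely to illustrate the fitting procedure.

```python
import numpy as np

def dist_modulus(z, om, h0=70.0):
    """Distance modulus for a flat FRW model with matter fraction om."""
    c = 299792.458                                  # speed of light, km/s
    zz = np.linspace(0.0, z.max(), 2000)
    ez = np.sqrt(om * (1 + zz) ** 3 + (1 - om))     # flat LambdaCDM E(z)
    dc = np.cumsum(1.0 / ez) * (zz[1] - zz[0])      # comoving-distance integral
    dl = (1 + z) * (c / h0) * np.interp(z, zz, dc)  # luminosity distance, Mpc
    return 5 * np.log10(dl) + 25

# Synthetic "observations": true om = 0.3 plus 0.15 mag scatter.
rng = np.random.default_rng(3)
z = np.sort(rng.uniform(0.02, 1.0, 50))
mu_obs = dist_modulus(z, 0.3) + rng.normal(0.0, 0.15, z.size)

grid = np.linspace(0.05, 0.95, 91)
chi2 = [np.sum((mu_obs - dist_modulus(z, om)) ** 2 / 0.15 ** 2) for om in grid]
best = grid[int(np.argmin(chi2))]
print(f"best-fit Omega_m = {best:.2f}")
```

    A real analysis would replace E(z) with the expansion history of the model under test and add the BAO likelihood, but the grid-plus-chi-square machinery is the same.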

  2. A sliding mode observer for hemodynamic characterization under modeling uncertainties

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    This paper addresses the case of physiological states reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial knowledge of the so-called balloon model describing the hemodynamic behavior of the brain. To overcome this difficulty, a High Order Sliding Mode observer is applied to the balloon system, where the unknown coupling is considered as an internal perturbation. The effectiveness of the proposed method is illustrated through a set of synthetic data that mimic fMRI experiments.
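
    The idea of treating an unknown coupling as an internal perturbation rejected by a discontinuous injection can be sketched with a first-order sliding mode observer on a toy second-order system. The dynamics, gains, and perturbation below are illustrative and far simpler than the balloon model and the High Order Sliding Mode observer of the paper.

```python
import numpy as np

# Toy system: x1' = x2, x2' = -x1 + d(t), with only y = x1 measured and
# d(t) an unknown bounded perturbation (standing in for the unknown coupling).
dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
L1, L2 = 5.0, 50.0                    # discontinuous injection gains

x = np.array([1.0, 0.0])              # true state
xh = np.array([0.0, 0.0])             # observer state
err = []
for ti in t:
    d = 0.5 * np.sin(2.0 * ti)        # unknown bounded perturbation
    y = x[0]                          # measured output
    e = y - xh[0]                     # output estimation error
    # true dynamics (forward Euler)
    x = x + dt * np.array([x[1], -x[0] + d])
    # observer: nominal model copy plus sign-of-error injection
    xh = xh + dt * np.array([xh[1] + L1 * np.sign(e),
                             -xh[0] + L2 * np.sign(e)])
    err.append(abs(x[1] - xh[1]))

print(f"final |x2 - x2_hat| = {err[-1]:.3f}")
```

    After the reaching phase, the error slides near e = 0 and the unmeasured state x2 is reconstructed despite d(t) never being known, up to a small chattering ripple set by the gains and the time step.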

  3. Observed and modelled “chemical weather” during ESCOMPTE

    Science.gov (United States)

    Dufour, A.; Amodei, M.; Ancellet, G.; Peuch, V.-H.

    2005-03-01

    The new MOdèle de Chimie Atmosphérique à Grande Echelle (MOCAGE) three-dimensional multiscale chemistry and transport model (CTM) has been applied to study heavy pollution episodes observed during the ESCOMPTE experiment. The model considers the troposphere and lower stratosphere, and allows the possibility of zooming from the planetary scale down to the regional scale over limited-area subdomains. In this way, it generates its own time-dependent chemical boundary conditions in the vertical and in the horizontal. This paper focuses on the evaluation and quantification of uncertainties related to chemical and transport modelling during two intensive observing periods, IOP2 and IOP4 (June 20-26 and July 10-14, 2001, respectively). Simulations are compared to the database of four-dimensional observations, which includes ground-based sites and aircraft measurements, radiosoundings, and quasi-continuous measurements of ozone by LIDARs. Thereby, the observed and modelled day-to-day variabilities in air composition both at the surface and in the vertical have been assessed. Then, three sensitivity studies are conducted concerning boundary conditions, accuracy of the emission dataset, and representation of chemistry. Firstly, to go further in the analysis of chemical boundary conditions, results from the standard grid nesting set-up and altered configurations, relying on climatologies, are compared. Along with other recent studies, this work advocates the systematic coupling of limited-area models with global CTMs, even for regional air quality studies or forecasts. Next, we evaluate the benefits of using the detailed high-resolution emissions inventory of ESCOMPTE: improvements are noticeable both on ozone reactivity and on the concentrations of various species of the ozone photochemical cycle, especially primary ones. Finally, we provide some insights on the comparison of two simulations differing only by the parameterisation of chemistry and using two state

  4. Observations in particle physics: from two neutrinos to standard model

    International Nuclear Information System (INIS)

    Lederman, L.M.

    1990-01-01

    Experiments which have contributed to the creation of the standard model are discussed. Results of observations on the following topics are considered: long-lived neutral V-particles, violation of parity conservation and charge invariance in meson decays, reactions with high-energy neutrinos and the existence of two types of neutrino, partons and dynamical quarks, and the dimuon resonance at 9.5 GeV in 400 GeV proton-nucleus collisions.

  5. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport model, i.e., the Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π{sup −}/π{sup +} ratio, and the isospin-sensitive transverse and elliptic flows given by the two transport models with their “best settings” all show obvious differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, while the discrepancies in the charged π{sup −}/π{sup +} ratio and the isospin-sensitive flows mainly originate from the different isospin-dependent nucleon–nucleon cross sections. These findings call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the in-medium isospin-dependent nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations.

  6. Link between laboratory/field observations and models

    International Nuclear Information System (INIS)

    Cole, C.R.; Foley, M.G.

    1985-10-01

    The various linkages in system performance assessments that integrate disposal program elements must be understood. The linkage between model development and field/laboratory observations is described as the iterative program of site and system characterization for development of an observational-confirmatory data base to develop, improve, and support conceptual models for site and system behavior. The program consists of data gathering and experiments to demonstrate understanding at various spatial and time scales and degrees of complexity. Understanding and accounting for the decreasing characterization certainty that arises with increasing space and time scales is an important aspect of the link between models and observations. The performance allocation process for setting performance goals and confidence levels, coupled with a performance assessment approach that provides these performance and confidence estimates, will determine when sufficient characterization has been achieved. At each iteration, performance allocation goals are reviewed and revised as necessary. The updated data base and appropriate performance assessment tools and approaches are utilized to identify and design additional tests and data needs necessary to meet current performance allocation goals. 9 refs

  7. The link between laboratory/field observations and models

    International Nuclear Information System (INIS)

    Cole, C.R.; Foley, M.G.

    1986-01-01

    The various linkages in system performance assessments that integrate disposal program elements must be understood. The linkage between model development and field/laboratory observations is described as the iterative program of site and system characterization for development of an observational-confirmatory data base. This data base is designed to develop, improve, and support conceptual models for site and system behavior. The program consists of data gathering and experiments to demonstrate understanding at various spatial and time scales and degrees of complexity. Understanding and accounting for the decreasing characterization certainty that arises with increasing space and time scales is an important aspect of the link between models and observations. The performance allocation process for setting performance goals and confidence levels, coupled with a performance assessment approach that provides these performance and confidence estimates, will determine when sufficient characterization has been achieved. At each iteration, performance allocation goals are reviewed and revised as necessary. The updated data base and appropriate performance assessment tools and approaches are utilized to identify and design additional tests and data needs necessary to meet current performance allocation goals

  8. MOCK OBSERVATIONS OF BLUE STRAGGLERS IN GLOBULAR CLUSTER MODELS

    International Nuclear Information System (INIS)

    Sills, Alison; Glebbeek, Evert; Chatterjee, Sourav; Rasio, Frederic A.

    2013-01-01

    We created artificial color-magnitude diagrams of Monte Carlo dynamical models of globular clusters and then used observational methods to determine the number of blue stragglers in those clusters. We compared these blue stragglers to various cluster properties, mimicking work that has been done for blue stragglers in Milky Way globular clusters to determine the dominant formation mechanism(s) of this unusual stellar population. We find that a mass-based prescription for selecting blue stragglers will select approximately twice as many blue stragglers as a selection criterion that was developed for observations of real clusters. However, the two numbers of blue stragglers are well correlated, so either selection criterion can be used to characterize the blue straggler population of a cluster. We confirm previous results that the simplified prescription for the evolution of a collision or merger product in the BSE code overestimates their lifetimes. We show that our model blue stragglers follow similar trends with cluster properties (core mass, binary fraction, total mass, collision rate) as the true Milky Way blue stragglers as long as we restrict ourselves to model clusters with an initial binary fraction higher than 5%. We also show that, in contrast to earlier work, the number of blue stragglers in the cluster core does have a weak dependence on the collisional parameter Γ in both our models and in Milky Way globular clusters

  9. General Description of Fission Observables - JEFF Report 24. GEF Model

    International Nuclear Information System (INIS)

    Schmidt, Karl-Heinz; Jurado, Beatriz; Amouroux, Charlotte

    2014-06-01

    The Joint Evaluated Fission and Fusion (JEFF) Project is a collaborative effort among the member countries of the OECD Nuclear Energy Agency (NEA) Data Bank to develop a reference nuclear data library. The JEFF library contains sets of evaluated nuclear data, mainly for fission and fusion applications; it contains a number of different data types, including neutron and proton interaction data, radioactive decay data, fission yield data and thermal scattering law data. The General fission (GEF) model is based on novel theoretical concepts and ideas developed to model low energy nuclear fission. The GEF code calculates fission-fragment yields and associated quantities (e.g. prompt neutrons and gammas) for a large range of nuclei and excitation energies. This opens up the possibility of a qualitative step forward to further improve the JEFF fission yields sub-library. This report describes the GEF model, which explains the complex appearance of fission observables through universal principles of theoretical models and considerations based on fundamental laws of physics and mathematics. The approach reveals a high degree of regularity and provides considerable insight into the physics of the fission process. Fission observables can be calculated with a precision that complies with the needs of applications in nuclear technology. The relevance of the approach for examining the consistency of experimental results and for evaluating nuclear data is demonstrated. (authors)

  10. Extreme temperature events on Greenland in observations and the MAR regional climate model

    Science.gov (United States)

    Leeson, Amber A.; Eastoe, Emma; Fettweis, Xavier

    2018-03-01

    Meltwater from the Greenland Ice Sheet contributed 1.7-6.12 mm to global sea level between 1993 and 2010 and is expected to contribute 20-110 mm to future sea level rise by 2100. These estimates were produced by regional climate models (RCMs) which are known to be robust at the ice sheet scale but occasionally miss regional- and local-scale climate variability (e.g. Leeson et al., 2017; Medley et al., 2013). To date, the fidelity of these models in the context of short-period variability in time (i.e. intra-seasonal) has not been fully assessed, for example their ability to simulate extreme temperature events. We use an event identification algorithm commonly used in extreme value analysis, together with observations from the Greenland Climate Network (GC-Net), to assess the ability of the MAR (Modèle Atmosphérique Régional) RCM to reproduce observed extreme positive-temperature events at 14 sites around Greenland. We find that MAR is able to accurately simulate the frequency and duration of these events but underestimates their magnitude by more than half a degree, although this bias is much smaller than that exhibited by coarse-scale ERA-Interim reanalysis data. As a result, melt energy in MAR output is underestimated by between 16 and 41 % depending on the global forcing applied. Further work is needed to precisely determine the drivers of extreme temperature events, and why the model underperforms in this area, but our findings suggest that biases are passed into MAR from the boundary forcing data. This is important because these forcings are common across RCMs, and so their biases propagate into the range of predictions of past and future ice sheet melting. We propose that examining extreme events should become a routine part of global and regional climate model evaluation and that addressing shortcomings in this area should be a priority for model development.
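
    The event-identification step described above can be sketched in a few lines (an illustrative toy, not the algorithm or data used in the study; the temperature series, threshold and merging gap below are all hypothetical): runs of threshold exceedances separated by fewer than `gap` time steps are merged into a single event, from which event frequency, duration and magnitude follow directly.

```python
# Toy event identification in the spirit of extreme value analysis:
# an "event" is a run of exceedances of a temperature threshold, and
# exceedances separated by fewer than `gap` steps are merged (declustered).
# All numbers here are hypothetical.

def find_events(temps, threshold, gap=2):
    """Return (start, end, peak) tuples for declustered exceedance events."""
    events = []
    start = None
    last_exceed = None
    for i, t in enumerate(temps):
        if t > threshold:
            if start is not None and i - last_exceed > gap:
                # Too long since the last exceedance: close the open event.
                events.append((start, last_exceed, max(temps[start:last_exceed + 1])))
                start = i
            elif start is None:
                start = i
            last_exceed = i
    if start is not None:
        events.append((start, last_exceed, max(temps[start:last_exceed + 1])))
    return events

temps = [1.0, 5.2, 5.8, 1.1, 0.9, 6.1, 1.0, 1.2, 5.5, 5.0]
print(find_events(temps, threshold=5.0))  # → [(1, 2, 5.8), (5, 5, 6.1), (8, 8, 5.5)]
```

    Event frequency is then the number of tuples, duration is end - start + 1, and magnitude is the peak value; comparing these statistics between station observations and model output is the kind of assessment the abstract describes.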

  11. Interannual sedimentary effluxes of alkalinity in the southern North Sea: model results compared with summer observations

    Directory of Open Access Journals (Sweden)

    J. Pätsch

    2018-06-01

    For the sediments of the central and southern North Sea, different sources of alkalinity generation are quantified by a regional modelling system for the period 2000–2014. For this purpose a formerly global ocean sediment model, coupled with a pelagic ecosystem model, is adapted to shelf sea dynamics, where much larger turnover rates occur than in the open and deep ocean. To track alkalinity changes due to different nitrogen-related processes, the open ocean sediment model was extended by the state variables particulate organic nitrogen (PON) and ammonium. Directly measured alkalinity fluxes, and those derived from Ra isotope flux observations, from the sediment into the pelagic are reproduced by the model system, but calcite building and calcite dissolution are underestimated. Both fluxes cancel out in terms of alkalinity generation and consumption. Other simulated processes altering alkalinity in the sediment, like net sulfate reduction, denitrification, nitrification, and aerobic degradation, are quantified and compare well with corresponding fluxes derived from observations. Most of these fluxes exhibit a strong positive gradient from the open North Sea to the coast, where large rivers drain nutrients and organic matter. Atmospheric nitrogen deposition also shows a positive gradient from the open sea towards land and supports alkalinity generation in the sediments. An additional source of spatial variability is introduced by the use of a 3-D heterogeneous porosity field. Due to realistic porosity variations (0.3–0.5), the alkalinity fluxes vary by about 4 %. The strongest impact on interannual variations of alkalinity fluxes is exerted by the temporally varying nitrogen inputs from large rivers, which directly govern the nitrate concentrations in the coastal bottom water, thus providing the nitrate necessary for benthic denitrification. Over the period investigated, the alkalinity effluxes decrease due to the decrease in the nitrogen supply by the rivers.

  12. A MODEL OF MAGNETIC BRAKING OF SOLAR ROTATION THAT SATISFIES OBSERVATIONAL CONSTRAINTS

    International Nuclear Information System (INIS)

    Denissenkov, Pavel A.

    2010-01-01

    The model of magnetic braking of solar rotation considered by Charbonneau and MacGregor has been modified so that it is able to reproduce for the first time the rotational evolution of both the fastest and slowest rotators among solar-type stars in open clusters of different ages, without coming into conflict with other observational constraints, such as the time evolution of the atmospheric Li abundance in solar twins and the thinness of the solar tachocline. This new model assumes that rotation-driven turbulent diffusion, which is thought to amplify the viscosity and magnetic diffusivity in stellar radiative zones, is strongly anisotropic with the horizontal components of the transport coefficients strongly dominating over those in the vertical direction. Also taken into account is the poloidal field decay that helps to confine the width of the tachocline at the solar age. The model's properties are investigated by numerically solving the azimuthal components of the coupled momentum and magnetic induction equations in two dimensions using a finite element method.

  13. The structure of observed learning outcome (SOLO) taxonomy: a model to promote dental students' learning.

    Science.gov (United States)

    Lucander, H; Bondemark, L; Brown, G; Knutsson, K

    2010-08-01

    Selective memorising of isolated facts or reproducing what is thought to be required - the surface approach to learning - is not the desired outcome for a dental student or a dentist in practice. The preferred outcome is a deep approach, as defined by an intention to seek understanding, develop expertise and relate information and knowledge into a coherent whole. The aim of this study was to investigate whether the structure of observed learning outcome (SOLO) taxonomy could be used as a model to assist and promote dental students to develop a deep approach to learning, assessed as learning outcomes in a summative assessment. Thirty-two students, participating in course eight in 2007 at the Faculty of Odontology at Malmö University, were introduced to the SOLO taxonomy and constituted the test group. The control group consisted of 35 students participating in course eight in 2006. The effect of the introduction was measured by evaluating responses to a question in the summative assessment by using the SOLO taxonomy. The evaluators consisted of two teachers who performed the assessment of learning outcomes independently and separately on the coded material. The SOLO taxonomy as a model for learning was found to improve the quality of learning: compared to the control group, significantly more strings and structured relations between these strings were present in the test group after the SOLO taxonomy had been introduced. The SOLO taxonomy is recommended as a model for promoting and developing a deeper approach to learning in dentistry.

  14. Ecological Assimilation of Land and Climate Observations - the EALCO model

    Science.gov (United States)

    Wang, S.; Zhang, Y.; Trishchenko, A.

    2004-05-01

    Ecosystems are intrinsically dynamic and interact with climate at a highly integrated level. Climate variables are the main driving factors controlling ecosystem physical, physiological, and biogeochemical processes, including energy balance, water balance, photosynthesis, respiration, and nutrient cycling. On the other hand, ecosystems function as an integrated whole and feed back on the climate system through their control on surface radiation balance, energy partitioning, and greenhouse gas exchange. To improve our capability in climate change impact assessment, a comprehensive ecosystem model is required to address the many interactions between climate change and ecosystems. In addition, different ecosystems can have very different responses to climate change and its variation. To provide more scientific support for ecosystem impact assessment at the national scale, it is imperative that ecosystem models have the capability of assimilating large-scale geospatial information, including satellite observations, GIS datasets, and climate model outputs or reanalyses. The EALCO model (Ecological Assimilation of Land and Climate Observations) is developed for such purposes. EALCO includes the comprehensive interactions among ecosystem processes and climate, and assimilates a variety of remote sensing products and GIS databases. It provides both national and local scale model outputs for ecosystem responses to climate change, including radiation and energy balances, water conditions and hydrological cycles, carbon sequestration and greenhouse gas exchange, and nutrient (N) cycling. These results form the foundation for the assessment of climate change impacts on ecosystems, their services, and adaptation options. In this poster, the main algorithms for the radiation, energy, water, carbon, and nitrogen simulations were diagrammed. Sample input data layers at Canada national scale were illustrated. Model outputs including the Canada wide spatial distributions of net

  15. Realistic modelling of observed seismic motion in complex sedimentary basins

    International Nuclear Information System (INIS)

    Faeh, D.; Panza, G.F.

    1994-03-01

    Three applications of a numerical technique for realistically modelling seismic ground motion in complex two-dimensional structures are illustrated. First we consider a sedimentary basin in the Friuli region, and we model strong motion records from an aftershock of the 1976 earthquake. Then we simulate the ground motion caused in Rome by the 1915 Fucino (Italy) earthquake, and we compare our modelling with the damage distribution observed in the town. Finally we deal with the interpretation of ground motion recorded in Mexico City as a consequence of earthquakes in the Mexican subduction zone. The synthetic signals explain the major characteristics (relative amplitudes, spectral amplification, frequency content) of the considered seismograms, and the space distribution of the available macroseismic data. For the sedimentary basin in the Friuli area, parametric studies demonstrate the strong sensitivity of the computed ground motion to small changes in the subsurface topography of the sedimentary basin, and in the velocity and quality factor of the sediments. The total energy of ground motion, determined from our numerical simulation in Rome, is in very good agreement with the distribution of damage observed during the Fucino earthquake. For epicentral distances in the range 50 km-100 km, the source location, and not only the local soil conditions, controls the local effects. For Mexico City, the observed ground motion can be explained as resonance effects and as excitation of local surface waves, and the theoretical and observed maximum spectral amplifications are very similar. In general, our numerical simulations permit the estimation of the maximum and average spectral amplification for specific sites, i.e. they are a very powerful tool for accurate micro-zonation. (author). 38 refs, 19 figs, 1 tab

  16. Reproducibility of isotope ratio measurements

    International Nuclear Information System (INIS)

    Elmore, D.

    1981-01-01

    The use of an accelerator as part of a mass spectrometer has improved the sensitivity for measuring low levels of long-lived radionuclides by several orders of magnitude. However, the complexity of a large tandem accelerator and beam transport system has made it difficult to match the precision of low energy mass spectrometry. Although uncertainties for accelerator measured isotope ratios as low as 1% have been obtained under favorable conditions, most errors quoted in the literature for natural samples are in the 5 to 20% range. These errors are dominated by statistics and generally the reproducibility is unknown since the samples are only measured once
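
    The claim that the quoted uncertainties are dominated by statistics can be made concrete with a back-of-the-envelope Poisson estimate (our own illustration; the counts are hypothetical): the fractional 1-sigma uncertainty of a measurement based on N detected atoms scales as 1/√N.

```python
import math

def relative_uncertainty(counts):
    """Fractional 1-sigma uncertainty for a Poisson-distributed count."""
    return 1.0 / math.sqrt(counts)

# Hypothetical numbers of detected rare-isotope atoms:
for n in (25, 400, 10000):
    print(n, f"{100 * relative_uncertainty(n):.1f}%")  # 20.0%, 5.0%, 1.0%
```

    Counting a few tens to a few hundred atoms thus yields errors in the 5 to 20% range quoted above, while percent-level ratios require on the order of 10,000 detected atoms.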

  17. Adjustments in the Almod 3W2 code models for reproducing the net load trip test in Angra I nuclear power plant

    International Nuclear Information System (INIS)

    Camargo, C.T.M.; Madeira, A.A.; Pontedeiro, A.C.; Dominguez, L.

    1986-09-01

    The recorded traces obtained from the net load trip test in the Angra I NPP yielded the opportunity to make fine adjustments in the ALMOD 3W2 code models. The changes are described and the results are compared against real plant data. (Author) [pt

  18. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period, whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with
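
    The Brute Force Method mentioned above is conceptually a finite-difference sensitivity: run the model with an input perturbed up and down and difference the outputs. A minimal sketch (our own illustration, with a hypothetical toy ozone response standing in for the CAMx model):

```python
def brute_force_sensitivity(model, emission, delta=0.05):
    """Central-difference sensitivity of model output to an emission input."""
    up = model(emission * (1 + delta))    # run with emissions scaled up
    down = model(emission * (1 - delta))  # run with emissions scaled down
    return (up - down) / (2 * delta * emission)

# Hypothetical surrogate: ozone responding linearly to VOC emissions.
toy_ozone = lambda e_voc: 40.0 + 0.8 * e_voc
print(brute_force_sensitivity(toy_ozone, emission=100.0))
```

    For a linear response the estimate recovers the slope (0.8 here); the DDM instead propagates the derivatives through the model equations, avoiding the extra perturbed runs.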

  19. Runoff-generated debris flows: observations and modeling of surge initiation, magnitude, and frequency

    Science.gov (United States)

    Kean, Jason W.; McCoy, Scott W.; Tucker, Gregory E.; Staley, Dennis M.; Coe, Jeffrey A.

    2013-01-01

    Runoff during intense rainstorms plays a major role in generating debris flows in many alpine areas and burned steeplands. Yet compared to debris flow initiation from shallow landslides, the mechanics by which runoff generates a debris flow are less understood. To better understand debris flow initiation by surface water runoff, we monitored flow stage and rainfall associated with debris flows in the headwaters of two small catchments: a bedrock-dominated alpine basin in central Colorado (0.06 km2) and a recently burned area in southern California (0.01 km2). We also obtained video footage of debris flow initiation and flow dynamics from three cameras at the Colorado site. Stage observations at both sites display distinct patterns in debris flow surge characteristics relative to rainfall intensity (I). We observe small, quasiperiodic surges at low I; large, quasiperiodic surges at intermediate I; and a single large surge followed by small-amplitude fluctuations about a more steady high flow at high I. Video observations of surge formation lead us to the hypothesis that these flow patterns are controlled by upstream variations in channel slope, in which low-gradient sections act as “sediment capacitors,” temporarily storing incoming bed load transported by water flow and periodically releasing the accumulated sediment as a debris flow surge. To explore this hypothesis, we develop a simple one-dimensional morphodynamic model of a sediment capacitor that consists of a system of coupled equations for water flow, bed load transport, slope stability, and mass flow. This model reproduces the essential patterns in surge magnitude and frequency with rainfall intensity observed at the two field sites and provides a new framework for predicting the runoff threshold for debris flow initiation in a burned or alpine setting.
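
    The "sediment capacitor" mechanism proposed above can be caricatured in a few lines (our own toy, not the authors' coupled morphodynamic model; all rates and thresholds are hypothetical): a low-gradient section accumulates incoming bed load and releases it as a surge once the deposit reaches a failure threshold, so surge frequency rises with rainfall-driven inflow.

```python
def run_capacitor(inflow_rate, threshold, n_steps):
    """Return time steps at which the stored sediment is released as a surge."""
    stored = 0.0
    surge_times = []
    for t in range(n_steps):
        stored += inflow_rate        # bed load delivered by water flow
        if stored >= threshold:      # deposit reaches its stability limit
            surge_times.append(t)
            stored = 0.0             # the surge evacuates the capacitor
    return surge_times

# Higher "rainfall intensity" (inflow) -> more frequent surges.
print(len(run_capacitor(0.5, 10.0, 100)))  # → 5
print(len(run_capacitor(2.0, 10.0, 100)))  # → 20
```

    The real model couples water flow, bed load transport, slope stability and mass flow, but even this caricature reproduces the qualitative shift from infrequent to frequent surging as rainfall intensity increases.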

  20. Reproducing the observed energy-dependent structure of Earth's electron radiation belts during storm recovery with an event-specific diffusion model

    Czech Academy of Sciences Publication Activity Database

    Ripoll, J.-F.; Reeves, G. D.; Cunningham, G. S.; Loridan, V.; Denton, M.; Santolík, Ondřej; Kurth, W. S.; Kletzing, C. A.; Turner, D. L.; Henderson, M. G.; Ukhorskiy, A. Y.

    2016-01-01

    Roč. 43, č. 11 (2016), s. 5616-5625 ISSN 0094-8276 R&D Projects: GA MŠk(CZ) LH15304 Institutional support: RVO:68378289 Keywords: radiation belts * slot region * electron losses * wave particle interactions * hiss waves * electron lifetimes Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 4.253, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/2016GL068869/full

  1. On observational foundations of models with a wave spiral structure

    International Nuclear Information System (INIS)

    Suchkov, A.A.

    1978-01-01

    The validity of the density wave models of the spiral structure is considered. It is shown that the density wave in the Galaxy is governed by its flat subsystem only, whereas the disk and the halo do not contribute significantly to the wave. It is found that the density wave model of the spiral structure of the Galaxy is confirmed by the value of the pattern speed derived from observational data (Ω = 20-25 km s⁻¹ kpc⁻¹). The position and the properties of the outer Lindblad resonance are confirmed by the existence and position of gas ring features in the outer regions of our Galaxy and external galaxies. The corotation region in the Galaxy is situated at R = 10-12 kpc. Near the corotation region the galactic shock wave is not expected to develop. The observed rapid decrease in the number of HII regions while moving from R = 5 kpc to R = 10 kpc confirms this conclusion. A similar consistency between the positions of the corotation region and the outer resonance and the observed properties of the HII and HI distributions has also been found for a number of external galaxies

  2. Anisotropy in Fracking: A Percolation Model for Observed Microseismicity

    Science.gov (United States)

    Norris, J. Quinn; Turcotte, Donald L.; Rundle, John B.

    2015-01-01

    Hydraulic fracturing (fracking), using high pressures and a low-viscosity fluid, allows the extraction of large quantities of oil and gas from very low permeability shale formations. The initial production of oil and gas at depth leads to high pressures and an extensive distribution of natural fractures which reduce the pressures. With time these fractures heal, sealing the remaining oil and gas in place. High-volume fracking opens the healed fractures, allowing the oil and gas to flow to horizontal production wells. We model the injection process using invasion percolation. We use a 2D square lattice of bonds to model the sealed natural fractures. The bonds are assigned random strengths and the fluid, injected at a point, opens the weakest bond adjacent to the growing cluster of opened bonds. Our model exhibits burst dynamics in which the clusters extend rapidly into regions with weak bonds. We associate these bursts with the microseismic activity generated by fracking injections. A principal object of this paper is to study the role of anisotropic stress distributions. Bonds in the y-direction are assigned higher random strengths than bonds in the x-direction. We illustrate the spatial distribution of clusters and the spatial distribution of bursts (small earthquakes) for several degrees of anisotropy. The results are compared with observed distributions of microseismicity in a fracking injection. Both our bursts and the observed microseismicity satisfy Gutenberg-Richter frequency-size statistics.
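
    The growth rule described above is easy to sketch (our own minimal illustration, not the authors' code; the lattice size, anisotropy factor and seed are arbitrary): bonds carry random strengths, bonds in the y-direction are scaled up by an anisotropy factor, and the invading cluster repeatedly opens the weakest bond on its perimeter.

```python
import heapq
import random

def invade(n=21, steps=60, anisotropy=1.5, seed=0):
    """Grow an invasion-percolation cluster from the centre of an n x n grid."""
    rng = random.Random(seed)

    def bond_strength(a, b):
        s = rng.random()
        # Bonds in the y-direction (row changes) get higher random strengths.
        return s * anisotropy if a[0] != b[0] else s

    start = (n // 2, n // 2)
    cluster = {start}
    frontier = []  # heap of (strength, site) for bonds on the cluster edge

    def push_neighbours(site):
        i, j = site
        for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= nb[0] < n and 0 <= nb[1] < n and nb not in cluster:
                heapq.heappush(frontier, (bond_strength(site, nb), nb))

    push_neighbours(start)
    while frontier and len(cluster) < steps:
        _, site = heapq.heappop(frontier)  # weakest bond adjacent to cluster
        if site not in cluster:
            cluster.add(site)
            push_neighbours(site)
    return cluster

c = invade()
xs = [j for _, j in c]
ys = [i for i, _ in c]
# With stronger y-bonds the cluster tends to elongate in the x-direction.
print(len(c), max(xs) - min(xs), max(ys) - min(ys))
```

    Tracking the order in which sites are added also exposes the burst dynamics: long runs of invasions into a pocket of weak bonds correspond to the microseismic bursts discussed above.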

  3. Analysis and modeling of tropical convection observed by CYGNSS

    Science.gov (United States)

    Lang, T. J.; Li, X.; Roberts, J. B.; Mecikalski, J. R.

    2017-12-01

    The Cyclone Global Navigation Satellite System (CYGNSS) is a multi-satellite constellation that utilizes Global Positioning System (GPS) reflectometry to retrieve near-surface wind speeds over the ocean. While CYGNSS is primarily aimed at measuring wind speeds in tropical cyclones, our research has established that the mission may also provide valuable insight into the relationships between wind-driven surface fluxes and general tropical oceanic convection. Currently, we are examining organized tropical convection using a mixture of CYGNSS level 1 through level 3 data, IMERG (Integrated Multi-satellite Retrievals for Global Precipitation Measurement), and other ancillary datasets (including buoys, GPM level 1 and 2 data, as well as ground-based radar). In addition, observing system experiments (OSEs) are being performed using hybrid three-dimensional variational assimilation to ingest CYGNSS observations into a limited-domain, convection-resolving model. Our focus for now is on case studies of convective evolution, but we will also report on progress toward statistical analysis of convection sampled by CYGNSS. Our working hypothesis is that the typical mature phase of organized tropical convection is marked by the development of a sharp gust-front boundary from an originally spatially broader but weaker wind speed change associated with precipitation. This increase in the wind gradient, which we demonstrate is observable by CYGNSS, likely helps to focus enhanced turbulent fluxes of convection-sustaining heat and moisture near the leading edge of the convective system where they are more easily ingested by the updraft. Progress on the testing and refinement of this hypothesis, using a mixture of observations and modeling, will be reported.

  4. Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations

    Science.gov (United States)

    Berri, Guillermo J.; Bertossa, Germán

    2018-01-01

    A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, the level of the inversion base, and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008, during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of the relative error in the wind-direction frequency distribution and the mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the level used for the upper boundary condition, in particular for wind direction, reaching double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.
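
    The error metric described above, the root-mean-square of the relative error per wind sector, can be written out explicitly; this is our reading of the definition, applied to hypothetical 8-sector data:

```python
import math

def rms_relative_error(modelled, observed):
    """RMS over wind sectors of (modelled - observed) / observed."""
    rel = [(m - o) / o for m, o in zip(modelled, observed)]
    return math.sqrt(sum(r * r for r in rel) / len(rel))

# Hypothetical 8-sector wind-direction frequency distributions (fractions).
obs = [0.20, 0.15, 0.10, 0.05, 0.10, 0.15, 0.15, 0.10]
mod = [0.22, 0.14, 0.09, 0.06, 0.10, 0.14, 0.16, 0.09]
print(round(rms_relative_error(mod, obs), 3))  # → 0.102
```

    The same function applies to the mean wind speed per sector, allowing the direction and speed errors to be compared on a common footing.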

  5. PROPERTIES AND MODELING OF UNRESOLVED FINE STRUCTURE LOOPS OBSERVED IN THE SOLAR TRANSITION REGION BY IRIS

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, David H. [College of Science, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States); Reep, Jeffrey W.; Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States)

    2016-08-01

    Recent observations from the Interface Region Imaging Spectrograph (IRIS) have discovered a new class of numerous low-lying dynamic loop structures, and it has been argued that they are the long-postulated unresolved fine structures (UFSs) that dominate the emission of the solar transition region. In this letter, we combine IRIS measurements of the properties of a sample of 108 UFSs (intensities, lengths, widths, lifetimes) with one-dimensional non-equilibrium ionization simulations using the HYDRAD hydrodynamic model, to examine whether the UFSs are now truly spatially resolved in the sense of being individual structures rather than being composed of multiple magnetic threads. We find that a simulation of an impulsively heated single strand can reproduce most of the observed properties, suggesting that the UFSs may be resolved, and the distribution of UFS widths implies that they are structured on a spatial scale of 133 km on average. Spatial scales of a few hundred kilometers appear to be typical for a range of chromospheric and coronal structures, and we conjecture that this could be an important clue for understanding the coronal heating process.

  6. Modelling 1-minute directional observations of the global irradiance.

    Science.gov (United States)

    Thejll, Peter; Pagh Nielsen, Kristian; Andersen, Elsa; Furbo, Simon

    2016-04-01

    Direct and diffuse irradiances from the sky have been collected at 1-minute intervals for about a year at the experimental station of the Technical University of Denmark for the IEA project "Solar Resource Assessment and Forecasting". These data were gathered by pyrheliometers tracking the Sun, as well as by apertured pyranometers gathering 1/8th and 1/16th of the light from the sky in 45-degree azimuthal ranges pointed around the compass. The data are gathered in order to develop detailed models of the potentially available solar energy and its variations at high temporal resolution, and thereby to gain a more detailed understanding of the solar resource. This is important for a better understanding of the sub-grid-scale cloud variation that cannot be resolved with climate and weather models. It is also important for optimizing the operation of active solar energy systems such as photovoltaic plants and thermal solar collector arrays, and for passive solar energy and lighting in buildings. We present regression-based modelling of the observed data and focus here on the statistical properties of the model fits. Using models based, on the one hand, on what is found in the literature and on physical expectations and, on the other hand, on purely statistical formulations, we find solutions that can explain up to 90% of the variance in global radiation. The models leaning on physical insights include terms for the direct solar radiation, a term for the circumsolar radiation, a diffuse term and a term for the horizon brightening/darkening. The purely statistical model is found using data- and formula-validation approaches that pick model expressions from a general catalogue of possible formulae. The method allows nesting of expressions, and the results found are dependent on and heavily constrained by the cross-validation carried out on statistically independent testing and training data-sets. Slightly better fits, in terms of variance explained, are found using the purely statistical models.
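
The catalogue-plus-cross-validation procedure described above can be illustrated with a minimal sketch: fit a few candidate irradiance formulae on one half of the data and keep the one explaining the most variance on the statistically independent other half. Everything below (the synthetic data, the three-entry catalogue, the coefficients) is invented for illustration and is not the paper's actual model catalogue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 1-minute global-irradiance data (W/m^2). All terms
# and coefficients are invented for illustration only.
elev = rng.uniform(5, 60, 2000)            # solar elevation angle (degrees)
cloud = rng.uniform(0, 1, 2000)            # cloud-cover fraction
y = (900 * np.sin(np.radians(elev)) * (1 - 0.6 * cloud)   # direct-beam term
     + 80 + 40 * cloud                                    # crude diffuse term
     + rng.normal(0, 30, elev.size))                      # sensor noise

# A tiny "catalogue of formulae": each entry maps the inputs to a design matrix.
catalogue = {
    "elev only":  lambda e, c: np.column_stack([np.ones_like(e), np.sin(np.radians(e))]),
    "cloud only": lambda e, c: np.column_stack([np.ones_like(e), c]),
    "elev+cloud": lambda e, c: np.column_stack(
        [np.ones_like(e), np.sin(np.radians(e)), c, np.sin(np.radians(e)) * c]),
}

# Cross-validation in its simplest form: fit on one half, score variance
# explained on the statistically independent other half.
train, test = np.arange(1000), np.arange(1000, 2000)
scores = {}
for name, design in catalogue.items():
    Xtr = design(elev[train], cloud[train])
    Xte = design(elev[test], cloud[test])
    beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
    resid = y[test] - Xte @ beta
    scores[name] = 1.0 - resid.var() / y[test].var()

best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

On this synthetic data the candidate combining the elevation and cloud terms explains well over 90% of the held-out variance, mirroring the variance-explained comparison described in the abstract.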

  7. Dispersion Relations for Electroweak Observables in Composite Higgs Models

    CERN Document Server

    Contino, Roberto

    2015-12-14

    We derive dispersion relations for the electroweak oblique observables measured at LEP in the context of $SO(5)/SO(4)$ composite Higgs models. It is shown how these relations can be used and must be modified when modeling the spectral functions through a low-energy effective description of the strong dynamics. The dispersion relation for the parameter $\epsilon_3$ is then used to estimate the contribution from spin-1 resonances at the 1-loop level. Finally, it is shown that the sign of the contribution to the $\hat S$ parameter from the lowest-lying spin-1 states is not necessarily positive definite, but depends on the energy scale at which the asymptotic behavior of current correlators is attained.

  8. Observations and models of simple nocturnal slope flows

    International Nuclear Information System (INIS)

    Doran, J.C.; Horst, J.W.

    1983-01-01

    Measurements of simple nocturnal slope winds were taken on Rattlesnake Mountain, a nearly ideal two-dimensional ridge. Tower and tethered balloon instrumentation allowed the determination of the wind and temperature characteristics of the katabatic layer as well as the ambient conditions. Two cases were chosen for study; these were marked by well-defined surface-based temperature inversions and a low-level maximum in the downslope wind component. The downslope development of the slope flow could be determined from the tower measurements, and showed a progressive strengthening of the katabatic layer. Hydraulic models developed by Manins and Sawford (1979a) and Briggs (1981) gave useful estimates of drainage layer depths, but were not otherwise applicable. A simple numerical model that relates the eddy diffusivity to the local turbulent kinetic energy was found to give good agreement with the observed wind and temperature profiles of the slope flows.

  9. Mismatch between observed and modeled trends in dissolved upper-ocean oxygen over the last 50 yr

    Directory of Open Access Journals (Sweden)

    L. Stramma

    2012-10-01

    Observations and model runs indicate trends in dissolved oxygen (DO) associated with current and ongoing global warming. However, a large-scale observation-to-model comparison has been missing and is presented here. This study presents a first global compilation of DO measurements covering the last 50 yr. It shows declining upper-ocean DO levels in many regions, especially the tropical oceans, whereas areas with increasing trends are found in the subtropics and in some subpolar regions. For the Atlantic Ocean south of 20° N, the DO history could even be extended back to about 70 yr, showing decreasing DO in the subtropical South Atlantic. The global mean DO trend between 50° S and 50° N at 300 dbar for the period 1960 to 2010 is −0.066 μmol kg−1 yr−1. Results of a numerical biogeochemical Earth system model reveal that the magnitude of the observed change is consistent with CO2-induced climate change. However, the pattern correlation between simulated and observed patterns of past DO change is negative, indicating that the model does not correctly reproduce the processes responsible for observed regional oxygen changes in the past 50 yr. A negative pattern correlation is also obtained for model configurations with particularly low and particularly high diapycnal mixing, for a configuration that assumes a CO2-induced enhancement of the C : N ratios of exported organic matter, and irrespective of whether climatological or realistic winds from reanalysis products are used to force the model. Depending on the model configuration, the 300 dbar DO trend between 50° S and 50° N is −0.027 to −0.047 μmol kg−1 yr−1 for climatological wind forcing, with a much larger range of −0.083 to +0.027 μmol kg−1 yr−1 for different initializations of sensitivity runs with reanalysis wind forcing. Although numerical models reproduce the overall sign and, to
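
The pattern correlation used in the comparison above is the Pearson correlation computed across grid cells of the observed and simulated trend maps. A minimal sketch with toy random fields (stand-ins, not the study's DO maps):

```python
import numpy as np

# Pattern correlation: Pearson correlation between two spatial fields,
# computed over the flattened grid cells.
def pattern_correlation(obs, sim):
    o = np.asarray(obs, dtype=float).ravel()
    s = np.asarray(sim, dtype=float).ravel()
    o = o - o.mean()
    s = s - s.mean()
    return float(o @ s / np.sqrt((o @ o) * (s @ s)))

rng = np.random.default_rng(1)
obs_trend = rng.normal(size=(4, 6))                      # toy observed trend map
sim_trend = -obs_trend + 0.3 * rng.normal(size=(4, 6))   # anti-correlated model map

r = pattern_correlation(obs_trend, sim_trend)
print(round(r, 2))
```

A negative value means the model tends to place oxygen gains where the observations show losses and vice versa, which can happen even when the global mean trend has the right sign.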

  10. Heliospheric modulation of cosmic rays: model and observation

    Directory of Open Access Journals (Sweden)

    Gerasimova S.K.

    2017-03-01

    This paper presents the basic model of cosmic ray modulation in the heliosphere, developed at the Yu.G. Shafer Institute of Cosmophysical Research and Aeronomy of the Siberian Branch of the Russian Academy of Sciences. The model has only one free modulation parameter: the ratio of the regular magnetic field to the turbulent one. It may also be applied to the description of cosmic ray intensity variations in a wide energy range, from 100 MeV to 100 GeV. Possible mechanisms of generation of the turbulent field are considered. The primary assumption about the electrical neutrality of the heliosphere appears to be wrong, and the zero potential needed to match the model with observations in the solar equatorial plane can be achieved if the frontal point of the heliosphere, around which the interstellar gas flows, lies near that plane. We have revealed that the abnormal rise of cosmic ray intensity at the end of solar cycle 23 is related to the residual modulation produced by the subsonic solar wind behind the front of a standing shock wave. The model is used to describe features of cosmic ray intensity variations in several solar activity cycles.

  11. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and the loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different to the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.
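
One way to sketch the search over the 2-D parameter space is to scan the two scaling parameters and score each pair by the distance between the modelled and observed magnitude distributions, for instance with a two-sample Kolmogorov-Smirnov statistic. The lognormal stand-in distributions and the parameter names `a` (DP2 scale) and `b` (DP1 scale) below are assumptions for illustration, not the paper's model.

```python
import numpy as np

def ks_statistic(x, y):
    # Two-sample Kolmogorov-Smirnov statistic: the maximum distance between
    # the two empirical CDFs, evaluated at every sample point.
    x, y = np.sort(x), np.sort(y)
    pts = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, pts, side="right") / x.size
    cdf_y = np.searchsorted(y, pts, side="right") / y.size
    return float(np.abs(cdf_x - cdf_y).max())

rng = np.random.default_rng(2)
observed = rng.lognormal(mean=5.0, sigma=0.6, size=500)   # stand-in bay magnitudes (nT)

# Scan a small grid: a scales the directly-driven DP2 contribution,
# b the loading-unloading DP1 contribution (both illustrative).
best = (np.inf, None, None)
for a in np.linspace(0.5, 2.0, 7):
    for b in np.linspace(0.5, 2.0, 7):
        model = a * rng.lognormal(4.5, 0.5, 500) + b * rng.lognormal(4.8, 0.5, 500)
        d = ks_statistic(observed, model)
        if d < best[0]:
            best = (d, a, b)

print(best)   # (smallest KS distance, best a, best b)
```

In the paper a formal significance test over the grid plays this role; the sketch just shows the mechanics of scoring parameter pairs by distributional agreement.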

  12. Land-Surface-Atmosphere Coupling in Observations and Models

    Directory of Open Access Journals (Sweden)

    Alan K Betts

    2009-07-01

    The diurnal cycle and the daily mean at the land surface result from the coupling of many physical processes. The framework of this review is largely conceptual, looking for relationships and information in the coupling of processes in models and observations. Starting from the surface energy balance, the role of the surface and cloud albedos in the shortwave and longwave fluxes is discussed. A longwave radiative scaling of the diurnal temperature range and the night-time boundary layer is summarized. Several aspects of the local surface energy partition are presented: the role of soil water availability and clouds; vector methods for understanding mixed-layer evolution; and the coupling between surface and boundary layer that determines the lifting condensation level. Moving to larger scales, evaporation-precipitation feedback in models is discussed, as is the coupling of column water vapor, clouds and precipitation to vertical motion and moisture convergence over the Amazon. The final topic is a comparison of the ratio of surface shortwave cloud forcing to the diabatic precipitation forcing of the atmosphere in ERA-40 with observations.

  13. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2015-10-01

    Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis, superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser rate (30%), in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  14. Effect of Initial Conditions on Reproducibility of Scientific Research

    Science.gov (United States)

    Djulbegovic, Benjamin; Hozo, Iztok

    2014-01-01

    Background: It is estimated that about half of currently published research cannot be reproduced. Many reasons have been offered as explanations for failure to reproduce scientific research findings, ranging from fraud to issues related to the design, conduct, analysis, or publishing of scientific research. We also postulate a sensitive dependency on initial conditions, by which small changes can result in large differences in the research findings when reproduction is attempted at later times. Methods: We employed a simple logistic regression equation to model the effect of covariates on the initial study findings. We then fed the output from the logistic equation into a logistic map function to model the stability of the results in repeated experiments over time. We illustrate the approach by modeling the effects of different factors on the choice of correct treatment. Results: We found that reproducibility of the study findings depended both on the initial values of all independent variables and on the rate of change in the baseline conditions, the latter being more important. When the rate of change in the baseline conditions between experiments lies between about 3.5 and about 4, no research findings could be reproduced. However, when the rate of change between the experiments is ≤2.5, the results become highly predictable between the experiments. Conclusions: Many results cannot be reproduced because of changes in the initial conditions between the experiments. Better control of the baseline conditions between experiments may help improve the reproducibility of scientific findings. PMID:25132705
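
The two-stage idea, a logistic regression supplying the initial finding and a logistic map x → r·x·(1−x) iterating it across repeated experiments, can be sketched directly. The covariates and weights below are invented; the qualitative behaviour (convergence for a rate of change r ≤ 2.5, chaos for r between about 3.5 and 4) is the standard logistic-map result the abstract invokes.

```python
import math

# Stage 1 (illustrative): a logistic regression turns covariates into the
# probability x0 of the initial finding. Weights and covariates are made up.
def initial_finding(covariates, weights, bias=0.0):
    z = bias + sum(w * c for w, c in zip(weights, covariates))
    return 1.0 / (1.0 + math.exp(-z))

# Stage 2: iterate the logistic map with rate-of-change parameter r, keeping
# the last few iterates as the "findings" of late repeated experiments.
def repeated_findings(x0, r, n_iter=200, keep=20):
    x = x0
    tail = []
    for i in range(n_iter):
        x = r * x * (1.0 - x)
        if i >= n_iter - keep:
            tail.append(x)
    return tail

x0 = initial_finding([0.8, -0.3], [1.2, 0.7])   # some initial study result

stable = repeated_findings(x0, r=2.5)    # r <= 2.5: converges, reproducible
chaotic = repeated_findings(x0, r=3.9)   # r in ~3.5-4: chaotic, irreproducible

spread = lambda seq: max(seq) - min(seq)
print(round(spread(stable), 6), round(spread(chaotic), 3))
```

For r = 2.5 the late iterates collapse onto a fixed point (spread essentially zero), while for r = 3.9 they wander chaotically, so a repeated "experiment" lands far from the original finding.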

  15. Observational tests for H II region models - A 'champagne party'

    Energy Technology Data Exchange (ETDEWEB)

    Alloin, D; Tenorio-Tagle, G

    1979-09-01

    Observations of several neighboring H II regions associated with a molecular cloud were performed in order to test the champagne model of H II region-molecular cloud interaction leading to the supersonic expansion of molecular cloud gas. Nine different positions in the Gum 61 nebula were observed using an image dissector scanner attached to a 3.6-m telescope, and it is found that the area corresponds to a low-excitation, high-density nebula, with electron densities ranging between 1400 and 2800/cu cm and larger along the boundary of the ionized gas. An observed increase in pressure and density located in an interior region of the nebula is interpreted in terms of an area between two rarefaction waves generated, together with a strong isothermal shock responsible for the champagne-like streaming, by a pressure discontinuity between the ionized molecular cloud in which star formation takes place and the intercloud gas. It is noted that a velocity field determination would provide the key to understanding the evolution of such a region.

  16. Evaluation of single- and dual-porosity models for reproducing the release of external and internal tracers from heterogeneous waste-rock piles.

    Science.gov (United States)

    Blackmore, S; Pedretti, D; Mayer, K U; Smith, L; Beckie, R D

    2018-05-30

    Accurate predictions of solute release from waste-rock piles (WRPs) are paramount for decision making in mining-related environmental processes. Tracers provide information that can be used to estimate effective transport parameters and understand mechanisms controlling the hydraulic and geochemical behavior of WRPs. It is shown that internal tracers (i.e. initially present) together with external (i.e. applied) tracers provide complementary and quantitative information to identify transport mechanisms. The analysis focuses on two experimental WRPs, Pile 4 and Pile 5 at the Antamina Mine site (Peru), where both an internal chloride tracer and an externally applied bromide tracer were monitored in discharge over three years. The results suggest that external tracers provide insight into transport associated with relatively fast flow regions that are activated during higher-rate recharge events. In contrast, internal tracers provide insight into mechanisms controlling solute release from lower-permeability zones within the piles. Rate-limited diffusive processes, which can be mimicked by nonlocal mass-transfer models, affect both internal and external tracers. The sensitivity of the mass-transfer parameters to heterogeneity is higher for external tracers than for internal tracers, as indicated by the different mean residence times characterizing the flow paths associated with each tracer. The joint use of internal and external tracers provides a more comprehensive understanding of the transport mechanisms in WRPs. In particular, the tracer tests support the notion that a multi-porosity conceptualization of WRPs is more adequate for capturing key mechanisms than a dual-porosity conceptualization.

  17. Ionospheric detection of tsunami earthquakes: observation, modeling and ideas for future early warning

    Science.gov (United States)

    Occhipinti, G.; Manta, F.; Rolland, L.; Watada, S.; Makela, J. J.; Hill, E.; Astafieva, E.; Lognonne, P. H.

    2017-12-01

    Detection of ionospheric anomalies following the Sumatra and Tohoku earthquakes (e.g., Occhipinti 2015) demonstrated that the ionosphere is sensitive to earthquake and tsunami propagation: ground and oceanic vertical displacement induces acoustic-gravity waves that propagate within the neutral atmosphere and are detectable in the ionosphere. Observations supported by modelling proved that ionospheric anomalies related to tsunamis are deterministic and reproducible by numerical modeling via the ocean/neutral-atmosphere/ionosphere coupling mechanism (Occhipinti et al., 2008). To show that the tsunami signature in the ionosphere is routinely detected, we present perturbations of total electron content (TEC) measured by GPS following tsunamigenic earthquakes from 2004 to 2011 (Rolland et al. 2010, Occhipinti et al., 2013), namely Sumatra (26 December, 2004 and 12 September, 2007), Chile (14 November, 2007), Samoa (29 September, 2009) and the recent Tohoku-Oki (11 March, 2011). Based on the observations close to the epicenter, mainly performed by GPS networks located in Sumatra, Chile and Japan, we highlight the TEC perturbation observed within the first 8 min after the seismic rupture. This perturbation contains information about the ground displacement, as well as the consequent sea surface displacement resulting in the tsunami. In addition to GNSS-TEC observations close to the epicenter, new and exciting far-field measurements performed by airglow imaging in Hawaii show the propagation of the internal gravity waves induced by the Tohoku tsunami (Occhipinti et al., 2011). This revolutionary imaging technique is today supported by two new observations of moderate tsunamis: Queen Charlotte (M: 7.7, 27 October, 2013) and Chile (M: 8.2, 16 September 2015). We finally detail our recent work (Manta et al., 2017) on the case of tsunami alert failure following the Mw7.8 Mentawai event (25 October, 2010), and its twin tsunami alert response following the Mw7

  18. Empirical STORM-E Model. I. Theoretical and Observational Basis

    Science.gov (United States)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
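
The linear impulse-response framework can be sketched as a geomagnetic index convolved with a decaying-exponential response kernel, with the amplitude of the storm-time enhancement recovered by least squares. The synthetic index, the 12-hour decay time and the amplitude below are illustrative assumptions, not the actual STORM-E coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(240)
ap = np.where((hours > 100) & (hours < 120), 80.0, 5.0)   # synthetic storm in the index

tau = 12.0                                  # assumed e-folding decay time (hours)
kernel = np.exp(-np.arange(72) / tau)
kernel /= kernel.sum()                      # normalized impulse response

driver = np.convolve(ap, kernel)[: hours.size]   # index filtered by the response
true_amp = 0.02                             # invented "enhancement per index unit"
ratio = 1.0 + true_amp * driver + rng.normal(0.0, 0.05, hours.size)

# Least-squares fit of the amplitude of the impulse-response term.
A = np.column_stack([np.ones_like(driver), driver])
coef, *_ = np.linalg.lstsq(A, ratio, rcond=None)
print(round(coef[1], 3))
```

The fitted amplitude recovers the invented value, illustrating how a storm-time to quiet-time ratio database can be regressed onto a filtered geomagnetic index.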

  19. Fracture initiation associated with chemical degradation: observation and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Byoungho Choi; Zhenwen Zhou; Chudnovsky, Alexander [Illinois Univ., Dept. of Civil and Materials Engineering (M/C 246), Chicago, IL (United States); Stivala, Salvatore S. [Stevens Inst. of Technology, Dept. of Chemistry and Chemical Biology, Hoboken, NJ (United States); Sehanobish, Kalyan; Bosnyak, Clive P. [Dow Chemical Co., Freeport, TX (United States)

    2005-01-01

    The fracture initiation in engineering thermoplastics resulting from chemical degradation is usually observed in the form of a microcrack network within a surface layer of degraded polymer exposed to the combined action of mechanical stresses and a chemically aggressive environment. Degradation of polymers is usually manifested in a reduction of molecular weight, an increase of crystallinity in semi-crystalline polymers, an increase of material density, a subtle increase in yield strength, and a dramatic reduction in toughness. The increase in material density, i.e., shrinkage of the degraded layer, is constrained by the adjacent unchanged material and results in a buildup of tensile stress within the degraded layer and compressive stress in the adjacent unchanged material, due to the increasing incompatibility between the two. These stresses add to preexisting manufacturing and service stresses. At a certain level of degradation, the combination of toughness reduction and increase of tensile stress results in fracture initiation. A quantitative model of the processes described above is presented in this work. For specificity, an internally pressurized plastic pipe that transports a fluid containing a chemically aggressive (oxidizing) agent is used as the model system for fracture initiation. Experimental observations of the dependence of material density and toughness on degradation, reported elsewhere, are employed in the model. An equation for the determination of a critical level of degradation corresponding to the onset of fracture is constructed. The critical level of degradation for fracture initiation depends on the rates of toughness deterioration and build-up of the degradation-related stresses, as well as on the manufacturing and service stresses. A method for evaluating the time interval prior to fracture initiation is also formulated. (Author)

  20. Wind Turbine Model and Observer in Takagi-Sugeno Model Structure

    International Nuclear Information System (INIS)

    Georg, Sören; Müller, Matthias; Schulte, Horst

    2014-01-01

    Based on a reduced-order, dynamic nonlinear wind turbine model in Takagi-Sugeno (TS) model structure, a TS state observer is designed as a disturbance observer to estimate the unknown effective wind speed. The TS observer model is an exact representation of the underlying nonlinear model, obtained by means of the sector-nonlinearity approach. The observer gain matrices are obtained by means of a linear matrix inequality (LMI) design approach for optimal fuzzy control, where weighting matrices for the individual system states and outputs are included. The observer is tested in simulations with the aero-elastic code FAST for the NREL 5 MW reference turbine, where it shows stable behaviour in turbulent wind simulations.

  1. Europlanet/IDIS: Combining Diverse Planetary Observations and Models

    Science.gov (United States)

    Schmidt, Walter; Capria, Maria Teresa; Chanteur, Gerard

    2013-04-01

    Planetary research involves a diversity of research fields, from astrophysics and plasma physics to atmospheric physics, climatology, spectroscopy and surface imaging. Data from all these disciplines are collected from various space-borne platforms or telescopes, supported by modelling teams and laboratory work. In order to interpret one set of data, supporting data from different disciplines and other missions are often needed, while the scientist does not always have the detailed expertise to access and utilize these observations. The Integrated and Distributed Information System (IDIS) [1], developed in the framework of the Europlanet-RI project, implements a Virtual Observatory approach ([2] and [3]), where different data sets, stored in archives around the world and in different formats, are accessed, re-formatted and combined to meet the user's requirements without the need to familiarize oneself with the different technical details. While observational astrophysical data from different observatories could already be accessed via Virtual Observatories, this concept is now extended to diverse planetary data and related model data sets, spectral databases, etc. A dedicated XML-based Europlanet Data Model (EPN-DM) [4] was developed, based on data models from the planetary science community and the Virtual Observatory approach. A dedicated editor simplifies the registration of new resources. As the EPN-DM is a super-set of existing data models, existing archives as well as new spectroscopic or chemical databases for the interpretation of atmospheric or surface observations, or even modeling facilities at research institutes in Europe or Russia, can be easily integrated and accessed via a Table Access Protocol (EPN-TAP) [5] adapted from the corresponding protocol of the International Virtual Observatory Alliance [6] (IVOA-TAP). EPN-TAP allows users to search catalogues, retrieve data and make them available through standard IVOA tools if the access to the archive

  2. Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)

    Science.gov (United States)

    Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.

    2009-12-01

    This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprising tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model, by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in North Dakota, in particular during the “Dust Bowl Drought” of the 1930s. This most famous drought of the 20th century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability in the power law function (e.g. up to one and a half orders of magnitude of variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of the
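
An area-frequency power law of the kind enumerated from the Landsat images, N(A > a) ∝ a^(−α), can be estimated from a list of lake areas with a log-log fit to the empirical exceedance curve. The sketch below uses synthetic Pareto-distributed areas with an invented exponent; it is not the study's data or calibration code.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha_true = 1.4                                      # invented power-law exponent
areas = (rng.pareto(alpha_true, 5000) + 1.0) * 0.01   # synthetic lake areas (km^2)

# Empirical exceedance curve N(A >= a) against a, on log-log axes.
a_sorted = np.sort(areas)
n_exceed = np.arange(a_sorted.size, 0, -1)

logA, logN = np.log10(a_sorted), np.log10(n_exceed)
slope, intercept = np.polyfit(logA, logN, 1)
print(round(-slope, 2))                               # estimate of alpha
```

Fitting the same curve under different hydrological conditions yields the family of power-law relationships against which a lake-complex model can be calibrated.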

  3. MODELING ATMOSPHERIC EMISSION FOR CMB GROUND-BASED OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Errard, J.; Borrill, J. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Ade, P. A. R. [School of Physics and Astronomy, Cardiff University, Cardiff CF10 3XQ (United Kingdom); Akiba, Y.; Chinone, Y. [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801 (Japan); Arnold, K.; Atlas, M.; Barron, D.; Elleflot, T. [Department of Physics, University of California, San Diego, CA 92093-0424 (United States); Baccigalupi, C.; Fabbian, G. [International School for Advanced Studies (SISSA), Trieste I-34014 (Italy); Boettger, D. [Department of Astronomy, Pontifica Universidad Catolica de Chile (Chile); Chapman, S. [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, NS, B3H 4R2 (Canada); Cukierman, A. [Department of Physics, University of California, Berkeley, CA 94720 (United States); Delabrouille, J. [AstroParticule et Cosmologie, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité (France); Dobbs, M.; Gilbert, A. [Physics Department, McGill University, Montreal, QC H3A 0G4 (Canada); Ducout, A.; Feeney, S. [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom); Feng, C. [Department of Physics and Astronomy, University of California, Irvine (United States); and others

    2015-08-10

    The atmosphere is one of the most important noise sources for ground-based cosmic microwave background (CMB) experiments. By increasing the optical loading on the detectors, it amplifies their effective noise, while its fluctuations introduce spatial and temporal correlations between detected signals. We present a physically motivated 3D model of the atmosphere's total intensity emission at millimeter and sub-millimeter wavelengths. We derive a new analytical estimate for the correlation between detectors' time-ordered data as a function of the instrument and survey design, as well as several atmospheric parameters such as wind, relative humidity, temperature and turbulence characteristics. Using an original numerical computation, we examine the effect of each physical parameter on the correlations in the time series of a given experiment. We then use a parametric-likelihood approach to validate the modeling and estimate atmospheric parameters from the POLARBEAR-I project's first-season data set. We derive a new 1.0% upper limit on the linear polarization fraction of atmospheric emission. We also compare our results to previous studies and weather station measurements. The proposed model can be used for realistic simulations of future ground-based CMB observations.

  4. Two-current nucleon observables in Skyrme model

    International Nuclear Information System (INIS)

    Chemtob, M.

    1987-01-01

    Three independent two-current nucleon observables are studied within the two-flavor Skyrme model for the πρω system. The effective Lagrangian is that of the gauged chiral symmetry approach, consistent with vector meson dominance, in the linear realization (for the vector mesons) of the global chiral symmetry. The first application deals with the nucleon electric polarizability and magnetic susceptibility. Both seagull and dispersive contributions appear, and we evaluate the latter in terms of sums over intermediate states. The results are compared with existing quark model results as well as with empirical determinations. The second application concerns the zero-point quantum correction to the skyrmion mass. We apply a chiral perturbation theory approach to evaluate the pion loop contribution to the nucleon mass. The comparison with the conventional Skyrme model result reveals an important sensitivity to the stabilization mechanism. The third application is to lepton-nucleon deep inelastic scattering in the Bjorken scaling limit. The structure tensor is calculated in terms of its representation as a commutator product of two currents. Numerical results are presented for the scaling function F₂(x). Essential use is made of the large-N_c (number of colors) approximation in all these applications. In the numerical computations we ignore the distortion effects, relative to the free plane wave limit, on the pionic fluctuations. (orig.)

  5. Observational constraints on successful model of quintessential Inflation

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Sami, M. [Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi 110025 (India); Saridakis, Emmanuel N. [Physics Division, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Starobinsky, Alexei A., E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: sami@iucaa.ernet.in, E-mail: Emmanuel_Saridakis@baylor.edu, E-mail: alstar@landau.ac.ru [L. D. Landau Institute for Theoretical Physics RAS, Moscow 119334 (Russian Federation)

    2017-06-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λφⁿ/M_Plⁿ), n > 1; the model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era, with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study the post-inflationary dynamics and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and the time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations of n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10⁻² (2.32 × 10⁻²), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σm_ν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.
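For reference, the standard slow-roll parameters implied by a potential of this form follow directly (a textbook derivation, not taken from the paper):

```latex
V(\phi) = M^4 \exp\!\left[-\lambda\left(\frac{\phi}{M_{\rm Pl}}\right)^{n}\right], \qquad n > 1,
```
```latex
\epsilon \equiv \frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^{2}
        = \frac{\lambda^2 n^2}{2}\left(\frac{\phi}{M_{\rm Pl}}\right)^{2n-2},
\qquad
\eta \equiv M_{\rm Pl}^2\,\frac{V''}{V}
     = \lambda^2 n^2 \left(\frac{\phi}{M_{\rm Pl}}\right)^{2n-2}
       - \lambda\, n(n-1)\left(\frac{\phi}{M_{\rm Pl}}\right)^{n-2}.
```

Since the exponents 2n − 2 and n − 2 are non-negative for n > 1, both parameters vanish as φ → 0, so slow roll holds at small field values and is violated as φ grows, consistent with an inflationary stage followed by a kinetic-dominated exit.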

  6. Observational constraints on successful model of quintessential Inflation

    International Nuclear Information System (INIS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Sami, M.; Saridakis, Emmanuel N.; Starobinsky, Alexei A.

    2017-01-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λφⁿ/M_Plⁿ), n > 1; the model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era, with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study the post-inflationary dynamics and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and the time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations of n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10⁻² (2.32 × 10⁻²), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σm_ν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.

  7. Observations

    DEFF Research Database (Denmark)

    Rossiter, John R.; Percy, Larry

    2013-01-01

    as requiring a new model of how advertising communicates and persuades, which, as the authors' textbooks explain, is sheer nonsense and contrary to the goal of integrated marketing. We provide in this article a translation of practitioners' jargon into more scientifically acceptable terminology, as well as a classification of the new advertising formats in terms of traditional analogs with mainstream media advertising.

  8. AOD trends during 2001-2010 from observations and model simulations

    Science.gov (United States)

    Pozzer, Andrea; de Meij, Alexander; Yoon, Jongmin; Astitha, Marina

    2016-04-01

    The trend of aerosol optical depth (AOD) between 2001 and 2010 is estimated globally and regionally from remotely sensed observations by the MODIS (Moderate Resolution Imaging Spectroradiometer), MISR (Multi-angle Imaging SpectroRadiometer) and SeaWiFS (Sea-viewing Wide Field-of-view Sensor) satellite sensors. The resulting trends have been compared to model results from the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model [1]. Although interannual variability is applied only to anthropogenic and biomass-burning emissions, the model is able to quantitatively reproduce the AOD trends as observed by MODIS, while some discrepancies are found when compared to MISR and SeaWiFS. An additional numerical simulation with the same model was performed, neglecting any temporal change in the emissions, i.e. with no interannual variability for any emission source. It is shown that decreasing AOD trends over the US and Europe are due to the decrease in (anthropogenic) emissions. By contrast, over the Sahara Desert and the Middle East region, meteorological/dynamical changes in the last decade play the major role in driving the AOD trends. Further, over Southeast Asia, meteorology and emission changes are equally important in defining AOD trends [2]. Finally, decomposing the regional AOD trends into individual aerosol components reveals that the soluble components are the dominant contributors to the total AOD, as their influence on the total AOD is enhanced by the aerosol water content. [1] Jöckel, P., Kerkweg, A., Pozzer, A., Sander, R., Tost, H., Riede, H., Baumgaertner, A., Gromov, S., and Kern, B.: Development cycle 2 of the Modular Earth Submodel System (MESSy2), Geosci. Model Dev., 3, 717-752, doi:10.5194/gmd-3-717-2010, 2010. [2] Pozzer, A., de Meij, A., Yoon, J., Tost, H., Georgoulias, A. K., and Astitha, M.: AOD trends during 2001-2010 from observations and model simulations, Atmos. Chem. Phys., 15, 5521-5535, doi:10.5194/acp-15-5521-2015, 2015.
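As a minimal illustration of the trend-estimation step, an annual-mean AOD trend can be obtained by ordinary least squares; the numbers below are synthetic stand-ins, not MODIS/MISR/SeaWiFS data:

```python
import numpy as np

def aod_trend(years, aod):
    """Least-squares linear trend (AOD units per year) of an annual-mean series."""
    slope, _intercept = np.polyfit(years, aod, 1)
    return slope

# Synthetic decade with an imposed decline of 0.005 AOD per year
years = np.arange(2001, 2011)
aod = 0.25 - 0.005 * (years - 2001)
trend = aod_trend(years, aod)
```

In practice the observed and simulated series would be compared region by region, and the no-emission-change simulation would isolate the meteorological contribution to the same fitted slope.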

  9. Radar observations and shape model of asteroid 16 Psyche

    Science.gov (United States)

    Shepard, Michael K.; Richardson, James; Taylor, Patrick A.; Rodriguez-Ford, Linda A.; Conrad, Al; de Pater, Imke; Adamkovics, Mate; de Kleer, Katherine; Males, Jared R.; Morzinski, Katie M.; Close, Laird M.; Kaasalainen, Mikko; Viikinkoski, Matti; Timerson, Bradley; Reddy, Vishnu; Magri, Christopher; Nolan, Michael C.; Howell, Ellen S.; Benner, Lance A. M.; Giorgini, Jon D.; Warner, Brian D.; Harris, Alan W.

    2017-01-01

    Using the S-band radar at Arecibo Observatory, we observed 16 Psyche, the largest M-class asteroid in the main belt. We obtained 18 radar imaging and 6 continuous wave runs in November and December 2015, and combined these with 16 continuous wave runs from 2005 and 6 recent adaptive-optics (AO) images (Drummond et al., 2016) to generate a three-dimensional shape model of Psyche. Our model is consistent with a previously published AO image (Hanus et al., 2013) and three multi-chord occultations. Our shape model has dimensions 279 × 232 × 189 km (±10%), Deff = 226 ± 23 km, and is 6% larger than, but within the uncertainties of, the most recently published size and shape model generated from the inversion of lightcurves (Hanus et al., 2013). Psyche is roughly ellipsoidal but displays a mass-deficit over a region spanning 90° of longitude. There is also evidence for two ∼50-70 km wide depressions near its south pole. Our size and published masses lead to an overall bulk density estimate of 4500 ± 1400 kg m⁻³. Psyche's mean radar albedo of 0.37 ± 0.09 is consistent with a near-surface regolith composed largely of iron-nickel and ∼40% porosity. Its radar reflectivity varies by a factor of 1.6 as the asteroid rotates, suggesting global variations in metal abundance or bulk density in the near surface. The variations in radar albedo appear to correlate with large- and small-scale shape features. Our size and Psyche's published absolute magnitude lead to an optical albedo of pv = 0.15 ± 0.03, and there is evidence for albedo variegations that correlate with shape features.
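The size and density figures quoted above follow from elementary ellipsoid geometry. This sketch reproduces them; the mass value is an assumed figure from the published literature, not a number from this abstract:

```python
import math

def eff_diameter_km(a, b, c):
    """Diameter of the sphere with the same volume as an ellipsoid whose
    full-axis diameters are a, b, c (km)."""
    return (a * b * c) ** (1.0 / 3.0)

def bulk_density(a, b, c, mass_kg):
    """Bulk density (kg/m^3) from ellipsoid diameters (km) and total mass (kg)."""
    volume_m3 = (math.pi / 6.0) * a * b * c * 1.0e9  # km^3 -> m^3
    return mass_kg / volume_m3

d_eff = eff_diameter_km(279, 232, 189)      # shape-model dimensions from the text
rho = bulk_density(279, 232, 189, 2.7e19)   # 2.7e19 kg: assumed published mass
```

Both derived values land within the abstract's quoted uncertainties (Deff = 226 ± 23 km, bulk density 4500 ± 1400 kg m⁻³), which is the consistency check the "size and published masses" sentence describes.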

  10. Asteroid 16 Psyche: Radar Observations and Shape Model

    Science.gov (United States)

    Shepard, Michael K.; Richardson, James E.; Taylor, Patrick A.; Rodriguez-Ford, Linda A.; Conrad, Al; de Pater, Imke; Adamkovics, Mate; de Kleer, Katherine R.; Males, Jared; Morzinski, Kathleen M.; Miller Close, Laird; Kaasalainen, Mikko; Viikinkoski, Matti; Timerson, Bradley; Reddy, Vishnu; Magri, Christopher; Nolan, Michael C.; Howell, Ellen S.; Warner, Brian D.; Harris, Alan W.

    2016-10-01

    We observed 16 Psyche, the largest M-class asteroid in the main belt, using the S-band radar at Arecibo Observatory. We obtained 18 radar imaging and 6 continuous wave runs in November and December 2015, and combined these with 16 continuous wave runs from 2005 and 6 recent adaptive-optics (AO) images to generate a three-dimensional shape model of Psyche. Our model is consistent with a previously published AO image [Hanus et al. Icarus 226, 1045-1057, 2013] and three multi-chord occultations. Our shape model has dimensions 279 × 232 × 189 km (±10%), Deff = 226 ± 23 km, and is 6% larger than, but within the uncertainties of, the most recently published size and shape model generated from the inversion of lightcurves [Hanus et al., 2013]. Psyche is roughly ellipsoidal but displays a mass-deficit over a region spanning 90° of longitude. There is also evidence for two ~50-70 km wide depressions near its south pole. Our size and published masses lead to an overall bulk density estimate of 4500 ± 1400 kg m⁻³. Psyche's mean radar albedo of 0.37 ± 0.09 is consistent with a near-surface regolith composed largely of iron-nickel and ~40% porosity. Its radar reflectivity varies by a factor of 1.6 as the asteroid rotates, suggesting global variations in metal abundance or bulk density in the near surface. The variations in radar albedo appear to correlate with large- and small-scale shape features. Our size and Psyche's published absolute magnitude lead to an optical albedo of pv = 0.15 ± 0.03, and there is evidence for albedo variegations that correlate with shape features.

  11. Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model

    Science.gov (United States)

    Jourdain, L.; Kulawik, S. S.; Worden, H. M.; Pickering, K. E.; Worden, J.; Thompson, A. M.

    2010-01-01

    Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and the Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone-enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that it reproduces the pattern of convective-event occurrence on a daily basis during the summer of 2006 over the USA, even though it does not represent the relative distribution of lightning intensities well. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/flash for lightning occurring in midlatitudes, which agrees better with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.
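A back-of-envelope check of how a per-flash NO yield maps onto a Tg N/yr source. The global flash rate (~44 flashes/s, from the OTD/LIS climatology) is an assumption brought in for illustration; the abstract itself quotes only the per-flash yields and the 6 Tg N/yr total:

```python
SECONDS_PER_YEAR = 3.156e7
N_MOLAR_MASS_G = 14.007  # each NO molecule carries one nitrogen atom

def annual_n_source_tg(moles_no_per_flash, flashes_per_second):
    """Convert a per-flash NO yield into an annual nitrogen source in Tg N/yr."""
    flashes_per_year = flashes_per_second * SECONDS_PER_YEAR
    grams_n = flashes_per_year * moles_no_per_flash * N_MOLAR_MASS_G
    return grams_n / 1e12  # g -> Tg

tg_n = annual_n_source_tg(260, 44.0)  # ~5 Tg N/yr, same order as the quoted 6
```

The result is of the same order as the quoted global source; the gap is expected, since the 260 mol/flash figure is the mean over the USA in summer rather than a global average.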

  12. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    International Nuclear Information System (INIS)

    Béthermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Magdis, Georgios; Hezaveh, Yashar; Le Borgne, Damien; Buat, Véronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-01-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  13. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    Science.gov (United States)

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses, and drastically constraining collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, but
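A rough illustration of what such a machine-readable dataset description enables; the element and attribute names below are invented for the sketch and are not the actual QSAR-ML schema:

```python
import xml.etree.ElementTree as ET

# Build a minimal QSAR-ML-style entry: a descriptor pinned to an ontology id,
# a software implementation with its version, and a computed value per structure.
ds = ET.Element("dataset")
ET.SubElement(ds, "descriptor",
              id="http://example.org/descriptor-ontology#XLogP",
              implementation="CDK", version="1.3.5")
mol = ET.SubElement(ds, "structure", id="mol001")
ET.SubElement(mol, "value", descriptor="XLogP").text = "2.17"

xml_text = ET.tostring(ds, encoding="unicode")
```

Because the descriptor element records both the ontology identifier and the exact implementation version, another group can regenerate the same values from the same setup, which is the reproducibility property the abstract argues for.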

  14. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses, and drastically constraining collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join

  15. Modelling of particular phenomena observed in PANDA with Gothic

    International Nuclear Information System (INIS)

    Bandurski, Th.; Putz, F.; Andreani, M.; Analytis, M.

    2000-01-01

    PANDA is a large-scale facility for investigating the long-term decay heat removal from the containment of a next-generation 'passive' Advanced Light Water Reactor (ALWR). The first test series was aimed at the investigation of the long-term LOCA response of the Passive Containment Cooling System (PCCS) for the General Electric (GE) Simplified Boiling Water Reactor (SBWR). More recently, the facility has been used in the framework of two European projects for investigating the performance of four passive cooling systems, i.e. the Building Condenser (BC) designed by Siemens for the SWR-1000 long-term containment cooling, the Passive Containment Cooling System for the European Simplified Boiling Water Reactor (ESBWR), the Containment Plate Condenser (CPC) and the Isolation Condenser (IC) for cooling of a BWR core. The PANDA tests have the dual objectives of improving confidence in the performance of the passive heat removal mechanisms underlying the design of the tested safety systems and extending the data base available for containment analysis code qualification. Among others, the containment analysis code Gothic was chosen for the analysis of particular phenomena observed during the PANDA tests. This paper presents selected safety-relevant phenomena observed in the PANDA tests and identified for the analyses, and possible approaches to their modeling with Gothic. (author)

  16. Evaluating climate model performance with various parameter sets using observations over the recent past

    Directory of Open Access Journals (Sweden)

    M. F. Loutre

    2011-05-01

    Full Text Available Many sources of uncertainty limit the accuracy of climate projections. Among them, we focus here on the parameter uncertainty, i.e. the imperfect knowledge of the values of many physical parameters in a climate model. Therefore, we use LOVECLIM, a global three-dimensional Earth system model of intermediate complexity, and vary several parameters within a range based on the expert judgement of model developers. Nine climatic parameter sets and three carbon cycle parameter sets are selected because they yield present-day climate simulations coherent with observations and they cover a wide range of climate responses to doubled atmospheric CO2 concentration and freshwater flux perturbation in the North Atlantic. Moreover, they also lead to a large range of atmospheric CO2 concentrations in response to prescribed emissions. Consequently, we have at our disposal 27 alternative versions of LOVECLIM (each corresponding to one parameter set) that provide very different responses to some climate forcings. The 27 model versions are then used to illustrate the range of responses provided over the recent past, to compare the time evolution of climate variables over the time interval for which they are available (the last few decades up to more than one century) and to identify the outliers and the "best" versions over that particular time span. For example, between 1979 and 2005, the simulated global annual mean surface temperature increase ranges from 0.24 °C to 0.64 °C, while the simulated increase in atmospheric CO2 concentration varies between 40 and 50 ppmv. Measurements over the same period indicate an increase in global annual mean surface temperature of 0.45 °C (Brohan et al., 2006) and an increase in atmospheric CO2 concentration of 44 ppmv (Enting et al., 1994; GLOBALVIEW-CO2, 2006). Only a few parameter sets yield simulations that reproduce the observed key variables of the climate system over the last
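The selection step described above amounts to screening ensemble members against the observed changes. A sketch with synthetic member values spanning the quoted simulated ranges; the acceptance tolerances are illustrative assumptions, not the paper's criteria:

```python
# Observed 1979-2005 changes quoted in the abstract
OBS_DT, OBS_DCO2 = 0.45, 44.0   # deg C warming, ppmv CO2 rise
TOL_DT, TOL_DCO2 = 0.10, 3.0    # illustrative acceptance tolerances

# Synthetic stand-ins: version -> (temperature change, CO2 change)
members = {
    "v01": (0.24, 40.2), "v02": (0.47, 44.9), "v03": (0.64, 49.8),
    "v04": (0.41, 43.1), "v05": (0.58, 46.0),
}

# Keep only the versions consistent with both observed changes
accepted = sorted(name for name, (dt, dco2) in members.items()
                  if abs(dt - OBS_DT) <= TOL_DT and abs(dco2 - OBS_DCO2) <= TOL_DCO2)
```

Only a subset of the stand-in versions passes both tests, mirroring the abstract's finding that only a few parameter sets reproduce the observed key variables.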

  17. Reproducible research: a minority opinion

    Science.gov (United States)

    Drummond, Chris

    2018-01-01

    Reproducible research, a growing movement within many scientific fields, including machine learning, would require the code, used to generate the experimental results, be published along with any paper. Probably the most compelling argument for this is that it is simply following good scientific practice, established over the years by the greats of science. The implication is that failure to follow such a practice is unscientific, not a label any machine learning researchers would like to carry. It is further claimed that misconduct is causing a growing crisis of confidence in science. That, without this practice being enforced, science would inevitably fall into disrepute. This viewpoint is becoming ubiquitous but here I offer a differing opinion. I argue that far from being central to science, what is being promulgated is a narrow interpretation of how science works. I contend that the consequences are somewhat overstated. I would also contend that the effort necessary to meet the movement's aims, and the general attitude it engenders would not serve well any of the research disciplines, including our own.

  18. Surface Soil Moisture Memory Estimated from Models and SMAP Observations

    Science.gov (United States)

    He, Q.; Mccoll, K. A.; Li, C.; Lu, H.; Akbar, R.; Pan, M.; Entekhabi, D.

    2017-12-01

    Soil moisture memory (SMM), loosely defined as the time taken by soil to forget an anomaly, has proved to be important in land-atmosphere interaction. There are many metrics for calculating the SMM timescale, for example a timescale based on the time-series autocorrelation, a timescale ignoring the soil moisture time series, and a timescale that only considers soil moisture increments. Recently, a new timescale based on the 'Water Cycle Fraction' (McColl et al., 2017), in which the impact of precipitation on soil moisture memory is considered, has been proposed but not yet fully evaluated globally. In this study, we compared the surface SMM derived from SMAP observations with that from land surface model simulations (i.e., the SMAP Nature Run (NR) provided by the Goddard Earth Observing System, version 5) (Reichle et al., 2014). Three timescale metrics were used to quantify the surface SMM: T0, based on the soil moisture time-series autocorrelation; deT0, based on the detrended soil moisture time-series autocorrelation; and tHalf, based on the Water Cycle Fraction. The comparisons indicate that: (1) there are large gaps between the T0 derived from SMAP and that from NR; (2) the gaps become small for the deT0 case, in which the seasonality of surface soil moisture was removed with a moving-average filter; (3) the tHalf estimated from SMAP is much closer to that from NR. The results demonstrate that surface SMM can vary dramatically among different metrics, while the memory derived from the land surface model differs from the one from SMAP observations. tHalf, which considers the impact of precipitation, may be a good choice for quantifying surface SMM and has high potential in studies of land-atmosphere interactions. References: McColl, K.A., S.H. Alemohammad, R. Akbar, A.G. Konings, S. Yueh, D. Entekhabi. The Global Distribution and Dynamics of Surface Soil Moisture, Nature Geoscience, 2017. Reichle, R., et al. 
The "SMAP_Nature_v03" Data
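The autocorrelation-based T0 metric mentioned above can be sketched as the first lag at which the sample autocorrelation falls below 1/e, a common e-folding convention; the synthetic AR(1) series below stands in for a soil moisture record and is not SMAP data:

```python
import numpy as np

def efolding_timescale(series, max_lag=50):
    """First lag (in time steps) at which the sample autocorrelation
    drops below 1/e -- a T0-style soil-moisture-memory metric."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    for lag in range(1, max_lag + 1):
        r = np.dot(x[:-lag], x[lag:]) / var
        if r < np.exp(-1.0):
            return lag
    return max_lag

# Synthetic AR(1) "soil moisture" series with phi = 0.8
# (theoretical e-folding time is -1/ln(0.8), about 4.5 steps)
rng = np.random.default_rng(1)
phi, n = 0.8, 20_000
sm = np.empty(n)
sm[0] = 0.0
for t in range(1, n):
    sm[t] = phi * sm[t - 1] + rng.standard_normal()
t0 = efolding_timescale(sm)
```

Applying the same estimator to raw and deseasonalized series would reproduce the T0 versus deT0 distinction drawn in the abstract.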

  19. Modeling MESSENGER Observations of Calcium in Mercury's Exosphere

    Science.gov (United States)

    Burger, Matthew Howard; Killen, Rosemary M.; McClintock, William E.; Vervack, Ronald J., Jr.; Merkel, Aimee W.; Sprague, Ann L.; Sarantos, Menelaos

    2012-01-01

    The Mercury Atmospheric and Surface Composition Spectrometer (MASCS) on the MESSENGER spacecraft has made the first high-spatial-resolution observations of exospheric calcium at Mercury. We use a Monte Carlo model of the exosphere to track the trajectories of calcium atoms ejected from the surface until they are photoionized, escape from the system, or stick to the surface. This model permits an exploration of exospheric source processes and interactions among neutral atoms, solar radiation, and the planetary surface. The MASCS data have suggested that a persistent, high-energy source of calcium that was enhanced in the dawn, equatorial region of Mercury was active during MESSENGER's three flybys of Mercury and during the first seven orbits for which MASCS obtained data. The total Ca source rate from the surface varied between 1.2 × 10²³ and 2.6 × 10²³ Ca atoms/s, if its temperature was 50,000 K. The origin of this high-energy, asymmetric source is unknown, although from this limited data set it does not appear to be consistent with micrometeoroid impact vaporization, ion sputtering, electron-stimulated desorption, or vaporization at dawn of material trapped on the cold nightside.

  20. Observation and modeling of 222Rn daughters in liquid nitrogen

    International Nuclear Information System (INIS)

    Frodyma, N.; Pelczar, K.; Wójcik, M.

    2014-01-01

    The results of alpha spectrometric measurements of the activity of ²²²Rn daughters dissolved in liquefied nitrogen are presented. A direct detection method for ionized alpha-emitters from the ²²²Rn decay chain (²¹⁴Po and ²¹⁸Po) in a cryogenic liquid in the presence of an external electric field is shown. Properties of the radioactive ions are derived from a proposed model of ion production and transport in the cryogenic liquid. The ionic lifetime was found to be of the order of 10 s in liquid nitrogen (4.0 purity class). The presence of both positive and negative ions was observed. - Highlights: • A direct detection method for alpha-emitters in a cryogenic liquid is shown. • We examine electrostatic drifting of the radioactive ions in liquid nitrogen. • The ions belong to the radon-222 decay chain; radon-222 is dissolved in the liquid. • A model of ion production and behaviour in the liquid is proposed. • The ion production significantly depends on the nuclear decay type (alpha or beta)

  1. Modeling the Ionosphere with GPS and Rotation Measure Observations

    Science.gov (United States)

    Malins, J. B.; Taylor, G. B.; White, S. M.; Dowell, J.

    2017-12-01

    Advances in digital processing have created new tools for examining the ionosphere. We have combined data from dual-frequency GPS receivers, digital ionosondes and observations from the Long Wavelength Array (LWA), a 256-dipole low-frequency radio telescope situated in central New Mexico, in order to examine ionospheric profiles. By studying polarized pulsars, the LWA is able to determine very accurately the Faraday rotation caused by the ionosphere. By combining these data with the International Geomagnetic Reference Field, the LWA can evaluate ionospheric profiles and how well they predict the actual Faraday rotation. Dual-frequency GPS measurements of total electron content, as well as measurements from digisonde data, were used to model the ionosphere and to predict the Faraday rotation to within 0.1 rad/m². Additionally, it was discovered that the predicted topside profile of the digisonde data did not accurately predict Faraday rotation measurements, suggesting a need to re-examine the methods for creating the predicted topside profile. I will discuss the methods used to measure rotation measure and ionospheric profiles, as well as possible corrections to the topside model.
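The connection between GPS TEC, the geomagnetic field, and the Faraday rotation the LWA measures can be sketched with the standard thin-shell approximation; the sample TEC and field values are illustrative, not LWA or GPS data:

```python
# Ionospheric rotation measure, thin-shell approximation in SI units:
# RM [rad/m^2] = 2.63e-13 * TEC [electrons/m^2] * B_parallel [tesla],
# where the physical constant is e^3 / (8 pi^2 eps0 m_e^2 c^3).
RM_CONST = 2.63e-13

def rotation_measure(tec_el_per_m2, b_parallel_tesla):
    """Ionospheric rotation measure (rad/m^2) for a given slant TEC and
    line-of-sight geomagnetic field at the pierce point."""
    return RM_CONST * tec_el_per_m2 * b_parallel_tesla

rm = rotation_measure(1.0e17, 5.0e-5)  # 10 TECU through a ~0.5 gauss field
```

The polarization angle rotates by RM·λ², so at LWA wavelengths of several metres even an RM of order 1 rad/m² produces tens of radians of rotation, which is why pulsar Faraday rotation is such a sensitive probe of the 0.1 rad/m² prediction accuracy quoted above.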

  2. Reproducibility in Research: Systems, Infrastructure, Culture

    Directory of Open Access Journals (Sweden)

    Tom Crick

    2017-11-01

    Full Text Available The reproduction and replication of research results has become a major issue for a number of scientific disciplines. In computer science and related computational disciplines such as systems biology, the challenges revolve closely around the ability to implement (and exploit) novel algorithms and models. Taking a new approach from the literature and applying it to a new codebase frequently requires local knowledge missing from the published manuscripts and transient project websites. Alongside this issue, the lack of open, transparent and fair benchmark sets presents another barrier to the verification and validation of claimed results. In this paper, we outline several recommendations to address these issues, driven by specific examples from a range of scientific domains. Based on these recommendations, we propose a high-level prototype of an open automated platform for scientific software development which effectively abstracts specific dependencies from the individual researcher and their workstation, allowing easy sharing and reproduction of results. This new e-infrastructure for reproducible computational science offers the potential to incentivise a culture change and drive the adoption of new techniques to improve the quality and efficiency, and thus reproducibility, of scientific exploration.

  3. Tidally modulated eruptions on Enceladus: Cassini ISS observations and models

    International Nuclear Information System (INIS)

    Nimmo, Francis; Porco, Carolyn; Mitchell, Colin

    2014-01-01

    We use images acquired by the Cassini Imaging Science Subsystem (ISS) to investigate the temporal variation of the brightness and height of the south polar plume of Enceladus. The plume's brightness peaks around the moon's apoapse, but with no systematic variation in scale height with either plume brightness or Enceladus' orbital position. We compare our results, both alone and supplemented with Cassini near-infrared observations, with predictions obtained from models in which tidal stresses are the principal control of the eruptive behavior. There are three main ways of explaining the observations: (1) the activity is controlled by right-lateral strike-slip motion; (2) the activity is driven by eccentricity tides with an apparent time delay of about 5 hr; (3) the activity is driven by eccentricity tides plus a 1:1 physical libration with an amplitude of about 0.8° (3.5 km). The second hypothesis might imply either a delayed eruptive response, or a dissipative, viscoelastic interior. The third hypothesis requires a libration amplitude an order of magnitude larger than predicted for a solid Enceladus. While we cannot currently exclude any of these hypotheses, the third, which is plausible for an Enceladus with a subsurface ocean, is testable by using repeat imaging of the moon's surface. A dissipative interior suggests that a regional background heat source should be detectable. The lack of a systematic variation in plume scale height, despite the large variations in plume brightness, is plausibly the result of supersonic flow; the details of the eruption process are yet to be understood.

  4. Odessa Tsunami of 27 June 2014: Observations and Numerical Modelling

    Science.gov (United States)

    Šepić, Jadranka; Rabinovich, Alexander B.; Sytov, Victor N.

    2018-04-01

    On 27 June 2014, a 1-2 m high wave struck the beaches of Odessa, the third largest Ukrainian city, and the neighbouring port town of Illichevsk (northwestern Black Sea). Throughout the day, prominent seiche oscillations were observed in several other ports of the Black Sea. Tsunamigenic synoptic conditions were found over the Black Sea, stretching from Romania in the west to the Crimean Peninsula in the east. Intense air pressure disturbances and convective thunderstorm clouds were associated with these conditions; right at the time of the event, a 1.5 hPa air pressure jump was recorded at Odessa, and a few hours earlier in Romania. We have utilized a barotropic ocean numerical model to test two hypotheses: (1) a tsunami-like wave was generated by an air pressure disturbance propagating directly over Odessa ("Experiment 1"); (2) a tsunami-like wave was generated by an air pressure disturbance propagating offshore, approximately 200 km to the south of Odessa, and along the shelf break ("Experiment 2"). Both experiments decisively confirm the meteorological origin of the tsunami-like waves on the coast of Odessa, and imply that the intensified long ocean waves in this region were generated via the Proudman resonance mechanism while propagating over the northwestern Black Sea shelf. The "Odessa tsunami" of 27 June 2014 was identified as a "beach meteotsunami", similar to events regularly observed on the beaches of Florida, USA, but different from the "harbour meteotsunamis" which occurred 1-3 days earlier in Ciutadella (Balearic Islands, Spain), Mazara del Vallo (Sicily, Italy) and Vela Luka (Croatia) in the Mediterranean Sea, even though all were associated with the same atmospheric system moving over the Mediterranean/Black Sea region on 23-27 June 2014.
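
    Proudman resonance, invoked above, occurs when the translation speed U of an atmospheric pressure disturbance matches the shallow-water long-wave speed c = sqrt(g·h). A minimal sketch (the 31 m/s disturbance speed below is an illustrative value, not a result from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def shallow_water_speed(depth_m: float) -> float:
    """Long-wave phase speed c = sqrt(g * h)."""
    return math.sqrt(G * depth_m)

def resonant_depth(disturbance_speed_ms: float) -> float:
    """Water depth at which a translating pressure disturbance satisfies
    Proudman resonance: U = sqrt(g * h)  =>  h = U^2 / g."""
    return disturbance_speed_ms ** 2 / G

# A disturbance translating at ~31 m/s resonates over ~98 m of water,
# a typical shelf depth.
print(round(resonant_depth(31.0), 1))  # 98.0
```

    This is why the wave amplification happens while the disturbance crosses the shelf: only there does the water depth bring the wave speed into step with the moving pressure forcing.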

  5. The vertical structure of the Saharan boundary layer: Observations and modelling

    Science.gov (United States)

    Garcia-Carreras, L.; Parker, D. J.; Marsham, J. H.; Rosenberg, P.; Marenco, F.; Mcquaid, J.

    2012-04-01

    The vertical structure of the Saharan atmospheric boundary layer (SABL) is investigated with the use of aircraft data from the Fennec observational campaign and high-resolution large-eddy model (LEM) simulations. The SABL is one of the deepest on Earth, and is crucial in controlling the vertical redistribution and long-range transport of dust in the Sahara. The SABL is typically made up of an actively growing convective region driven by high sensible heating at the surface, with a deep, near-neutrally stratified Saharan residual layer (SRL) above it, which is mostly well mixed in humidity and temperature and reaches the ~500 hPa level. These two layers are usually separated by a weak (≤1 K) temperature inversion, making the vertical structure very sensitive to the surface fluxes. LEM simulations initialized with radiosonde data from Bordj Bardji Mokhtar (BBM), southern Algeria, are used to improve our understanding of the turbulence structure of the SABL, its stratification, and any mixing or exchange between the different layers. The model can reproduce the typical SABL structure seen in the observations, and a tracer is used to illustrate the growth of the convective boundary layer into the residual layer above. The heat fluxes show a deep entrainment zone between the convective region and the SRL, potentially enhanced by the combination of a weak lid and a neutral layer above. The horizontal variability in the depth of the convective layer was also significant, even with homogeneous surface fluxes. Aircraft observations from a number of flights are used to validate the model results and to highlight the variability present in a more realistic setting, where conditions are rarely homogeneous in space. Stacked legs were flown to estimate the mean flux profile of the boundary layer, as well as the variations in the vertical structure of the SABL with heterogeneous atmospheric and surface conditions. Regular radiosondes from BBM put

  6. Local Scale Radiobrightness Modeling During the Intensive Observing Period-4 of the Cold Land Processes Experiment-1

    Science.gov (United States)

    Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.

    2004-12-01

    The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward-model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees, where the most detailed measurements were made of snow depth and of temperature, density, and grain size profiles. Results obtained for the 3rd Intensive Observing Period (IOP3; February 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to model the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March 2003. Unlike IOP3, IOP4 included both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.

  7. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in land-vegetation-atmosphere interactions and ecosystem dynamics. In regional-scale climate applications, land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effects of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the recharge of groundwater from losing streams. Through the analysis of observed soil moisture data from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that regional-scale soil moisture and surface temperature patterns are affected by rivers. This is demonstrated on the basis of simulations with a land surface model (the Community Land Model, CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. We therefore implement a simple parameterization of this process in CLM, showing that it can reproduce the soil moisture and surface temperature spatial variability related to the river distribution at the regional scale. The CLM with this new parameterization is then used to evaluate the impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  8. Observation- and model-based estimates of particulate dry nitrogen deposition to the oceans

    Directory of Open Access Journals (Sweden)

    A. R. Baker

    2017-07-01

    expected to be more robust than TM4, while TM4 gives access to speciated parameters (NO3− and NH4+) that are more relevant to the observed parameters and which are not available in ACCMIP. Dry deposition fluxes (CalDep) were calculated from the observed concentrations using estimates of dry deposition velocities. Model–observation ratios (RA,n), weighted by grid-cell area and number of observations, were used to assess the performance of the models. Comparison in the three study regions suggests that TM4 overestimates NO3− concentrations (RA,n = 1.4–2.9) and underestimates NH4+ concentrations (RA,n = 0.5–0.7), with spatial distributions in the tropical Atlantic and northern Indian Ocean not being reproduced by the model. In the case of NH4+ in the Indian Ocean, this discrepancy was probably due to seasonal biases in the sampling. Similar patterns were observed in the various comparisons of CalDep to ModDep (RA,n = 0.6–2.6 for NO3−, 0.6–3.1 for NH4+). Values of RA,n for NHx CalDep–ModDep comparisons were approximately double the corresponding values for NH4+ CalDep–ModDep comparisons, due to the significant fraction of gas-phase NH3 deposition incorporated in the TM4 and ACCMIP NHx model products. All of the comparisons suffered from the scarcity of observational data and the large uncertainty in the dry deposition velocities used to derive deposition fluxes from concentrations. These uncertainties have been a major limitation on estimates of the flux of material to the oceans for several decades. Recommendations are made for improvements in N deposition estimation through changes in observations, modelling and model–observation comparison procedures. Validation of modelled dry deposition requires effective comparisons to observable aerosol-phase species concentrations, and this cannot be achieved if model products only report dry deposition flux over the ocean.
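
    The flux calculation and skill metric described above reduce to F = C · v_d and a weighted model:observation ratio. The sketch below uses one plausible definition of RA,n (a weighted mean of per-cell ratios); the paper's exact weighting scheme may differ, and all numbers here are invented for illustration.

```python
def dry_flux(conc, v_d):
    """Dry deposition flux = concentration * dry deposition velocity."""
    return conc * v_d

def weighted_ratio(model, obs, weights):
    """Weighted model:observation ratio (one plausible form of RA,n):
    a weighted mean of per-cell model/obs ratios."""
    total = sum(weights)
    return sum(w * m / o for w, m, o in zip(weights, model, obs)) / total

# Hypothetical grid cells: modelled and "observed" NO3- deposition, with
# weights standing in for cell area x number of observations per cell.
model = [2.8, 1.5, 2.0]
obs   = [1.0, 1.5, 2.0]
print(round(weighted_ratio(model, obs, [1.0, 1.0, 2.0]), 2))  # 1.45
```

    A ratio near 1 indicates agreement; the 1.4–2.9 range quoted above for NO3− thus signals systematic model overestimation.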

  9. Eight Year Climatologies from Observational (AIRS) and Model (MERRA) Data

    Science.gov (United States)

    Hearty, Thomas; Savtchenko, Andrey; Won, Young-In; Theobalk, Mike; Vollmer, Bruce; Manning, Evan; Smith, Peter; Ostrenga, Dana; Leptoukh, Greg

    2010-01-01

    We examine climatologies derived from eight years of temperature, water vapor, cloud, and trace gas observations made by the Atmospheric Infrared Sounder (AIRS) instrument flying on the Aqua satellite, and compare them to similar climatologies constructed with data from a global assimilation model, the Modern Era Retrospective-Analysis for Research and Applications (MERRA). We use the AIRS climatologies to examine anomalies and trends in the AIRS data record. Since sampling can be an issue for infrared satellites in low Earth orbit, we also use the MERRA data to examine the AIRS sampling biases. By sampling the MERRA data at the AIRS space-time locations, both with and without the AIRS quality control, we estimate the sampling bias of the AIRS climatology and the atmospheric conditions under which AIRS has a lower sampling rate. While the AIRS temperature and water vapor sampling biases are small at low latitudes, they can be more than a few degrees in temperature or 10 percent in water vapor at higher latitudes. The largest sampling biases are over desert. The AIRS and MERRA data are available from the Goddard Earth Sciences Data and Information Services Center (GES DISC). The AIRS climatologies we used are available for analysis with the GIOVANNI data exploration tool (see http://disc.gsfc.nasa.gov).

  10. Observer model optimization of a spectral mammography system

    Science.gov (United States)

    Fredenberg, Erik; Åslund, Magnus; Cederström, Björn; Lundqvist, Mats; Danielsson, Mats

    2010-04-01

    Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. Contrast-enhanced spectral imaging has been thoroughly investigated, but unenhanced imaging may be more useful because it comes as a bonus to the conventional non-energy-resolved absorption image at screening; there is no additional radiation dose and no need for contrast medium. We have used a previously developed theoretical framework and system model that include quantum and anatomical noise to characterize the performance of a photon-counting spectral mammography system with two energy bins for unenhanced imaging. The theoretical framework was validated with synthesized images. Optimal combination of the energy-resolved images for detecting large unenhanced tumors corresponded closely, but not exactly, to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, deteriorated detectability. For small microcalcifications or tumors on uniform backgrounds, however, energy subtraction was suboptimal whereas energy weighting provided a minute improvement. The performance was largely independent of beam quality, detector energy resolution, and bin count fraction. It is clear that inclusion of anatomical noise and imaging task in spectral optimization may yield completely different results than an analysis based solely on quantum noise.
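
    The detectability optimization described above can be sketched for a two-bin system: an ideal observer forms a weighted sum of the bin images, with d′ = |wᵀΔμ| / sqrt(wᵀΣw), which is maximized by w ∝ Σ⁻¹Δμ. The signal and covariance values below are hypothetical, chosen so that correlated (anatomical-like) noise dominates; they are not taken from the system model in the paper.

```python
import math

def dprime(w, dmu, cov):
    """Detectability d' of a signal dmu for bin weights w under a 2x2
    noise covariance (quantum + anatomical)."""
    signal = w[0] * dmu[0] + w[1] * dmu[1]
    var = (w[0] ** 2 * cov[0][0] + 2 * w[0] * w[1] * cov[0][1]
           + w[1] ** 2 * cov[1][1])
    return abs(signal) / math.sqrt(var)

def optimal_weights(dmu, cov):
    """Ideal-observer weights w = cov^-1 * dmu (2x2 inverse written out)."""
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    det = a * d - b * c
    return [(d * dmu[0] - b * dmu[1]) / det,
            (-c * dmu[0] + a * dmu[1]) / det]

# Hypothetical bin signals and a covariance dominated by correlated
# (anatomical-like) noise.
dmu = [1.0, 0.6]
cov = [[1.0, 0.7], [0.7, 1.2]]
w_opt = optimal_weights(dmu, cov)
print(dprime(w_opt, dmu, cov) >= dprime([0.5, 0.5], dmu, cov))  # True
```

    In this hypothetical the optimal second weight comes out negative, i.e. the bins are effectively subtracted, echoing the finding above that the optimal combination lies close to energy subtraction when anatomical noise dominates.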

  11. Electroweak precision observables in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Heinemeyer, S.; Hollik, W.; Weiglein, G.

    2006-01-01

    The current status of electroweak precision observables in the Minimal Supersymmetric Standard Model (MSSM) is reviewed. We focus in particular on the W boson mass, M_W, the effective leptonic weak mixing angle, sin^2 θ_eff, the anomalous magnetic moment of the muon, (g-2)_μ, and the lightest CP-even MSSM Higgs boson mass, m_h. We summarize the current experimental situation and the status of the theoretical evaluations. An estimate of the current theoretical uncertainties from unknown higher-order corrections and from the experimental errors of the input parameters is given. We discuss future prospects for both the experimental accuracies and the precision of the theoretical predictions. Confronting the precision data with the theory predictions within the unconstrained MSSM and within specific SUSY-breaking scenarios, we analyse how well the data are described by the theory. The mSUGRA scenario with cosmological constraints yields a very good fit to the data, showing a clear preference for a relatively light mass scale of the SUSY particles. The constraints on the parameter space from the precision data are discussed, and it is shown that the prospective accuracy at the next generation of colliders will enhance the sensitivity of the precision tests very significantly.

  12. Mapping urban air quality in near real-time using observations from low-cost sensors and model information.

    Science.gov (United States)

    Schneider, Philipp; Castell, Nuria; Vogt, Matthias; Dauge, Franck R; Lahoz, William A; Bartonova, Alena

    2017-09-01

    The recent emergence of low-cost microsensors measuring various air pollutants has significant potential for carrying out high-resolution mapping of air quality in the urban environment. However, the data obtained by such sensors are generally less reliable than those from standard equipment, and they are subject to significant data gaps in both space and time. In order to overcome this issue, we present here a data fusion method based on geostatistics that allows for merging observations of air quality from a network of low-cost sensors with spatial information from an urban-scale air quality model. The performance of the methodology is evaluated for nitrogen dioxide in Oslo, Norway, using both simulated datasets and real-world measurements from a low-cost sensor network for January 2016. The results indicate that the method is capable of producing realistic hourly concentration fields of urban nitrogen dioxide that inherit the spatial patterns from the model and adjust the prior values using the information from the sensor network. The accuracy of the data fusion method depends on various factors, including the total number of observations, their spatial distribution, their uncertainty (both systematic biases and random errors), as well as the ability of the model to provide realistic spatial patterns of urban air pollution. A validation against official data from air quality monitoring stations equipped with reference instrumentation indicates that the data fusion method is capable of reproducing city-wide averaged official values with an R² of 0.89 and a root mean squared error of 14.3 μg m⁻³. It is further capable of reproducing the typical daily cycles of nitrogen dioxide. Overall, the results indicate that the method provides a robust way of extracting useful information from uncertain sensor data using only a time-invariant model dataset and the knowledge contained within an entire sensor network.
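
    A minimal sketch of the fusion idea: compute observation-minus-model residuals at the sensor locations, interpolate them across the model grid, and add them back to the model field. Plain inverse-distance weighting stands in here for the geostatistical (kriging-based) interpolation actually used in the paper; the grid, sensor positions and values are all made up.

```python
def fuse(model_grid, sensors, power=2.0):
    """Adjust a model concentration field with sensor observations by
    interpolating observation-minus-model residuals across the grid.
    Inverse-distance weighting is a simple stand-in for kriging here.
    model_grid: dict {(x, y): model value}; sensors: [((x, y), observed)]."""
    residuals = [(pos, obs - model_grid[pos]) for pos, obs in sensors]
    fused = {}
    for cell, m in model_grid.items():
        num = den = 0.0
        exact = None
        for pos, r in residuals:
            d2 = (cell[0] - pos[0]) ** 2 + (cell[1] - pos[1]) ** 2
            if d2 == 0:          # sensor sits in this cell: use its residual
                exact = r
                break
            w = 1.0 / d2 ** (power / 2)
            num += w * r
            den += w
        fused[cell] = m + (exact if exact is not None else num / den)
    return fused

# Made-up 3x3 grid: the model says 40 everywhere; one sensor reads high
# (50 at one corner) and one agrees with the model (40 at the other).
grid = {(x, y): 40.0 for x in range(3) for y in range(3)}
fused = fuse(grid, [((0, 0), 50.0), ((2, 2), 40.0)])
print(fused[(0, 0)], fused[(1, 1)], fused[(2, 2)])  # 50.0 45.0 40.0
```

    The fused field matches the sensors exactly at their locations and relaxes back toward the model's spatial pattern in between, which is the qualitative behaviour described above.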

  13. Shear wave elastography for breast masses is highly reproducible.

    Science.gov (United States)

    Cosgrove, David O; Berg, Wendie A; Doré, Caroline J; Skyba, Danny M; Henry, Jean-Pierre; Gay, Joel; Cohen-Bacrie, Claude

    2012-05-01

    To evaluate intra- and interobserver reproducibility of shear wave elastography (SWE) for breast masses. For intraobserver reproducibility, each observer obtained three consecutive SWE images of 758 masses that were visible on ultrasound; 144 (19%) were malignant. Weighted kappa was used to assess agreement on qualitative elastographic features; the reliability of quantitative measurements was assessed by intraclass correlation coefficients (ICC). For interobserver reproducibility, a blinded observer reviewed images and agreement on features was determined. Mean age was 50 years; mean mass size was 13 mm. Qualitatively, SWE images were at least reasonably similar for 666/758 (87.9%). Intraclass correlation for SWE diameter, area and perimeter was almost perfect (ICC ≥ 0.94). Intraobserver reliability for maximum and mean elasticity was almost perfect (ICC = 0.84 and 0.87) and was substantial for the ratio of mass-to-fat elasticity (ICC = 0.77). Interobserver agreement was moderate for SWE homogeneity (κ = 0.57), substantial for qualitative colour assessment of maximum elasticity (κ = 0.66), fair for SWE shape (κ = 0.40), fair for B-mode mass margins (κ = 0.38), and moderate for B-mode mass shape (κ = 0.58), orientation (κ = 0.53) and BI-RADS assessment (κ = 0.59). SWE is highly reproducible for assessing elastographic features of breast masses within and across observers. SWE interpretation is at least as consistent as that of BI-RADS ultrasound B-mode features. • Shear wave ultrasound elastography can measure the stiffness of breast tissue • It provides a qualitatively and quantitatively interpretable colour-coded map of tissue stiffness • Intraobserver reproducibility of SWE is almost perfect, while interobserver reproducibility proved to be moderate to substantial • The most reproducible SWE features between observers were SWE image homogeneity and maximum elasticity.
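
    Agreement on ordinal features, as reported above, is typically quantified with a weighted kappa. A self-contained sketch of a two-rater kappa with quadratic disagreement weights (the ratings below are invented, not study data):

```python
def quadratic_weighted_kappa(r1, r2, k):
    """Two-rater weighted kappa with quadratic disagreement weights for
    ordinal ratings in categories 0..k-1."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]           # joint rating frequencies
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2        # quadratic disagreement weight
            num += w * obs[i][j]                   # observed weighted disagreement
            den += w * p1[i] * p2[j]               # chance-expected disagreement
    return 1.0 - num / den

# Invented ratings: perfect agreement gives kappa = 1.0
print(quadratic_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))  # 1.0
```

    Values around 0.4-0.6, as for several features above, indicate fair-to-moderate agreement beyond chance on the conventional interpretation scale.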

  14. COCOA Code for Creating Mock Observations of Star Cluster Models

    OpenAIRE

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2017-01-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the C...

  15. Simulation of temperature extremes in the Tibetan Plateau from CMIP5 models and comparison with gridded observations

    Science.gov (United States)

    You, Qinglong; Jiang, Zhihong; Wang, Dai; Pepin, Nick; Kang, Shichang

    2017-09-01

    Understanding changes in temperature extremes in a warmer climate is of great importance for society and for ecosystem functioning, due to the potentially severe impacts of such extreme events. In this study, temperature extremes defined by the Expert Team on Climate Change Detection and Indices (ETCCDI) from CMIP5 models are evaluated by comparison with homogenized gridded observations at 0.5° resolution across the Tibetan Plateau (TP) for 1961-2005. Using statistical metrics, the models have been ranked in terms of their ability to reproduce patterns in extreme events similar to the observations. Four CMIP5 models perform well (BNU-ESM, HadGEM2-ES, CCSM4, CanESM2) and are used to create an optimal model ensemble (OME). Most temperature extreme indices in the OME are closer to the observations than in an ensemble using all models. Performance is best for threshold-based temperature indices, while extreme/absolute-value indices are slightly less well modelled. Thus the choice of models in the OME seems to matter most for threshold-based temperature extreme indices. There is no significant correlation between elevation and model bias in the extreme indices for either the optimal or the all-model ensemble. Furthermore, minimum temperature (Tmin) correlates significantly and positively with longwave radiation and cloud variables, whereas maximum temperature (Tmax) shows no such correlation with shortwave radiation and cloud variables. This suggests that cloud-radiation differences among the CMIP5 models influence Tmin to some extent, and thereby the temperature extremes based on Tmin.

  16. Model Consistent Pseudo-Observations of Precipitation and Their Use for Bias Correcting Regional Climate Models

    Directory of Open Access Journals (Sweden)

    Peter Berg

    2015-01-01

    Lack of suitable observational data makes bias correction of high space- and time-resolution regional climate models (RCMs) problematic. We present a method to construct pseudo-observational precipitation data by merging a large-scale-constrained RCM reanalysis downscaling simulation with coarse time- and space-resolution observations. The large-scale constraint synchronizes the inner-domain solution to the driving reanalysis model, such that the simulated weather is similar to observations on a monthly time scale. Monthly biases for each single month are corrected to the corresponding month of the observational data and applied to the finer temporal resolution of the RCM. A low-pass filter is applied to the correction factors to retain the small-spatial-scale information of the RCM. The method is applied to a 12.5 km RCM simulation and proven successful in producing a reliable pseudo-observational data set. Furthermore, the constructed data set is applied as reference in a quantile-mapping bias correction, and is proven skillful in retaining small-scale information of the RCM, while still correcting the large-scale spatial bias. The proposed method allows bias correction of high-resolution model simulations without changing the fine-scale spatial features, i.e., retaining the very information required by many impact models.
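
    Quantile-mapping bias correction, as applied above with the pseudo-observations as reference, maps each model value to the observation value at the same quantile of the respective empirical distributions. A minimal sketch (the five-point samples are invented for illustration):

```python
import bisect

def quantile_map(value, model_sorted, obs_sorted):
    """Empirical quantile mapping: locate the value's quantile in the
    (sorted) model sample and return the observation at the same quantile."""
    n = len(model_sorted)
    rank = min(bisect.bisect_left(model_sorted, value), n - 1)
    # map the quantile index into the (possibly different-length) obs sample
    j = round(rank * (len(obs_sorted) - 1) / (n - 1))
    return obs_sorted[j]

model = [0.0, 1.0, 2.0, 4.0, 8.0]  # invented model precipitation sample
obs   = [0.0, 0.5, 1.0, 2.0, 4.0]  # invented observed sample (drier)
print(quantile_map(4.0, model, obs))  # 2.0: model's 75th pct -> obs 75th pct
```

    Because the mapping acts on ranks rather than absolute values, it corrects the distribution of the model output while preserving the model's own spatial and temporal ordering, which is the property exploited above.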

  17. Observing the observer (I): meta-bayesian models of learning and decision-making.

    NARCIS (Netherlands)

    Daunizeau, J.; Ouden, H.E.M. den; Pessiglione, M.; Kiebel, S.J.; Stephan, K.E.; Friston, K.J.

    2010-01-01

    In this paper, we present a generic approach that can be used to infer how subjects make optimal decisions under uncertainty. This approach induces a distinction between a subject's perceptual model, which underlies the representation of a hidden "state of affairs" and a response model, which

  18. Polarimetry of Solar System Objects: Observations vs. Models

    Science.gov (United States)

    Yanamandra-Fisher, P. A.

    2014-04-01

    results of main belt comets, asteroids with ring systems, lunar studies, and the planned exploration of planetary satellites that may harbour sub-surface oceans, there is an increasing need to include polarimetry (linear, circular and differential) as an integral observing mode of instruments and facilities. For laboratory measurements, there is a need to identify simulants that mimic the polarimetric behaviour of solar system small bodies, and to measure their polarimetric behaviour as a function of the various physical processes they are subject to, including radiation-driven changes to their surfaces. Inclusion of polarimetric remote sensing and the development of spectropolarimeters for ground-based facilities and instruments on space missions is therefore needed, along with a corresponding maturation of vector radiative transfer models and related laboratory measurements.

  19. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended, on the contrary, to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, like preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  20. Visualization in hydrological and atmospheric modeling and observation

    Science.gov (United States)

    Helbig, C.; Rink, K.; Kolditz, O.

    2013-12-01

    In recent years, visualization of geoscientific and climate data has become increasingly important due to challenges such as climate change, flood prediction or the development of water management schemes for arid and semi-arid regions. Models for simulations based on such data often have a large number of heterogeneous input data sets, ranging from remote sensing data and geometric information (such as GPS data) to sensor data from specific observation sites. Data integration using such information is not straightforward, and a large number of potential problems may occur due to artifacts, inconsistencies between data sets, or errors based on incorrectly calibrated or stained measurement devices. Algorithms to automatically detect such problems are often numerically expensive or difficult to parameterize. In contrast, combined visualization of various data sets is often a surprisingly efficient means for an expert to detect artifacts or inconsistencies, as well as to discuss properties of the data. Therefore, the development of general visualization strategies for atmospheric or hydrological data will often support researchers during assessment and preprocessing of the data for model setup. When investigating specific phenomena, visualization is vital for assessing the progress of the ongoing simulation during runtime as well as for evaluating the plausibility of the results. We propose a number of such strategies based on established visualization methods that (i) are applicable to a large range of different types of data sets, (ii) are computationally inexpensive enough to allow application to time-dependent data, and (iii) can be easily parameterized based on the specific focus of the research. Examples include the highlighting of certain aspects of complex data sets using, for example, an application-dependent parameterization of glyphs, iso-surfaces or streamlines. In addition, we employ basic rendering techniques allowing affine transformations, changes in opacity as well

  1. A Model to Reproduce the Response of the Gaseous Fission Product Monitor (GFPM) in a CANDU® 6 Reactor (An Estimate of Tramp Uranium Mass in a Candu Core)

    Energy Technology Data Exchange (ETDEWEB)

    Mostofian, Sara; Boss, Charles [AECL Atomic Energy of Canada Limited, 2251 Speakman Drive, Mississauga Ontario L5K 1B2 (Canada)

    2008-07-01

    In a Canada Deuterium Uranium (Candu) reactor, the fuel bundles produce gaseous and volatile fission products that are contained within the fuel matrix and the welded zircaloy sheath. Sometimes a fuel sheath can develop a defect and release the fission products into the circulating coolant. To detect fuel defects, a Gaseous Fission Product Monitoring (GFPM) system is provided in Candu reactors. The GFPM is a gamma-ray spectrometer that measures fission products in the coolant and alerts the operator to the presence of defected fuel through an increase in measured fission product concentration. A background fission product concentration in the coolant also arises from tramp uranium. The sources of the tramp uranium are small quantities of uranium contamination on the surfaces of fuel bundles and traces of uranium on the pressure tubes, arising from the rare defected fuel element that released uranium into the core. This paper presents a dynamic model that reproduces the behaviour of a GFPM in a Candu 6 plant. The model predicts the fission product concentrations in the coolant from the chronic concentration of tramp uranium on the inner surface of the pressure tubes (PT) and the surface of the fuel bundles (FB), taking into account the on-power refuelling system. (authors)

  2. InSAR Observations and Finite Element Modeling of Crustal Deformation Around a Surging Glacier, Iceland

    Science.gov (United States)

    Spaans, K.; Auriac, A.; Sigmundsson, F.; Hooper, A. J.; Bjornsson, H.; Pálsson, F.; Pinel, V.; Feigl, K. L.

    2014-12-01

    Icelandic ice caps, covering ~11% of the country, include a number of surging outlet glaciers. A surge causes significant local crustal subsidence, because a large ice mass is transported to the ice edge within only a few months. In 1993-1995, a glacial surge occurred at four neighboring outlet glaciers in the southwestern part of Vatnajökull ice cap, the largest ice cap in Iceland. We estimated that ~16±1 km³ of ice was moved during this event, while the fronts of some of the outlet glaciers advanced by ~1 km. Surface deformation associated with this surge has been surveyed using Interferometric Synthetic Aperture Radar (InSAR) acquisitions from 1992-2002, providing high-resolution ground observations of the study area. The data show about 75 mm subsidence at the ice edge of the outlet glaciers following the transport of the large volume of ice during the surge (Fig. 1). The long time span covered by the InSAR images enabled us to remove ~12 mm/yr of uplift occurring in this area due to glacial isostatic adjustment from the retreat of Vatnajökull ice cap since the end of the Little Ice Age in Iceland. We then used finite element modeling to investigate the elastic Earth response to the surge, as well as to confirm that no significant viscoelastic deformation occurred as a consequence of the surge. A statistical approach based on Bayes' rule was used to compare the models to the observations and obtain an estimate of the Young's modulus (E) and Poisson's ratio (v) in Iceland. The best-fitting models are those using a one-kilometer-thick top layer with v=0.17 and E between 12.9-15.3 GPa, underlain by a layer with v=0.25 and E from 67.3 to 81.9 GPa. Results demonstrate that InSAR data and finite element models can be used successfully to reproduce crustal deformation induced by ice mass variations at Icelandic ice caps. Fig. 1: Interferograms spanning 1993 July 31 to 1995 June 19, showing the surge at Tungnaárjökull (Tu.), Skaftárjökull (Sk.) and S
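The Bayesian comparison described above can be sketched in a toy form. The snippet below replaces the finite element model with a simple Boussinesq point-load response for an elastic half-space and recovers Young's modulus from noisy synthetic subsidence on a grid of candidate values; the load magnitude, noise level, and "true" modulus are illustrative assumptions, not values from the study.

```python
import numpy as np

# Boussinesq surface deflection under a point load F on an elastic half-space:
# w(r) = F (1 - nu^2) / (pi * E * r); returned in mm for r in km, E in GPa.
def deflection(r_km, E_gpa, F=1e14, nu=0.25):
    r = r_km * 1e3          # m
    E = E_gpa * 1e9         # Pa
    return F * (1.0 - nu**2) / (np.pi * E * r) * 1e3

rng = np.random.default_rng(0)
r_obs = np.linspace(5, 50, 20)        # distances from the load (km)
true_E = 70.0                         # GPa (illustrative)
sigma = 2.0                           # observation noise (mm)
w_obs = deflection(r_obs, true_E) + rng.normal(0, sigma, r_obs.size)

# Grid-based Bayesian inversion with a flat prior on E
E_grid = np.linspace(30, 120, 901)
log_like = np.array([-0.5 * np.sum((w_obs - deflection(r_obs, E))**2) / sigma**2
                     for E in E_grid])
post = np.exp(log_like - log_like.max())
post /= post.sum()                    # normalized posterior on the grid

E_map = E_grid[np.argmax(post)]
print(f"MAP estimate of E: {E_map:.1f} GPa (true {true_E} GPa)")
```

The same grid-posterior pattern extends to the two-parameter (E, ν) case used in the study, at the cost of evaluating the forward model on a 2-D grid.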

  3. Near Source 2007 Peru Tsunami Runup Observations and Modeling

    Science.gov (United States)

    Borrero, J. C.; Fritz, H. M.; Kalligeris, N.; Broncano, P.; Ortega, E.

    2008-12-01

    On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights and massive inundation distances up to 2 km were measured. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. As with all near field tsunamis, the waves struck within minutes of the massive ground shaking. Spontaneous evacuations coordinated by the Peruvian Coast Guard minimized the fatalities and illustrate the importance of community-based education and awareness programs. The residents of the fishing village Lagunilla were unaware of the tsunami hazard after an earthquake and did not evacuate, which resulted in 3 fatalities. Despite the relatively benign tsunami effects at Pisco from this event, the tsunami hazard for this city (and its liquefied natural gas terminal) should not be underestimated. Between 1687 and 1868, the city of Pisco was destroyed 4 times by tsunami waves. Since then, two events (1974 and 2007) have resulted in partial inundation and moderate damage. The fact that potentially devastating tsunami runup heights were observed immediately south of the peninsula only serves to underscore this point.

  4. Reproducibility in the analysis of multigated radionuclide studies of left ventricular ejection fraction

    International Nuclear Information System (INIS)

    Gjorup, T.; Kelbaek, H.; Vestergaard, B.; Fogh, J.; Munck, O.; Jensen, A.M.

    1989-01-01

    The authors determined the reproducibility (the standard deviation [SD]) in the analysis of multigated radionuclide studies of left ventricular ejection fraction (LVEF). Radionuclide studies from a consecutive series of 38 patients suspected of ischemic heart disease were analyzed independently by four nuclear medicine physicians and four laboratory technicians. Each study was analyzed three times by each of the observers. Based on the analyses of the eight observers, the SD could be estimated by the use of a variance component model for LVEF determinations calculated as the average of the analyses of an arbitrary number of observers making an arbitrary number of analyses. This study presents the SDs for LVEF determinations based on the analyses of one to five observers making one to five analyses each. The SD of a LVEF determination decreased from 3.96% to 2.98% when an observer increased his number of analyses from one to five. A more pronounced decrease in the SD from 3.96% to 1.77% was obtained when the LVEF determinations were based on the average of a single analysis made by one to five observers. However, when dealing with the difference between LVEF determinations from two studies, the highest reproducibility was obtained if the LVEF determinations at both studies were based on the analyses made by the same observer. No significant difference was found in the reproducibility of analyses made by nuclear medicine physicians and laboratory technicians. Our study revealed that to increase the reproducibility of LVEF determinations, special efforts should be made to standardize the outlining of the end-systolic region of interest.
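The reported figures are consistent with a simple two-component variance model, in which the variance of an LVEF determination averaged over n observers making m analyses each is σ_obs²/n + σ_within²/(n·m). A short sketch, assuming that model and backing the component values out of the two single-observer SDs quoted above:

```python
import math

sd_1x1 = 3.96   # one observer, one analysis (%)
sd_1x5 = 2.98   # one observer, five analyses (%)

# Model: Var(mean of n observers x m analyses) = s_obs^2/n + s_within^2/(n*m).
# Solve the two single-observer equations for the components:
#   sd_1x1^2 = s_obs^2 + s_within^2
#   sd_1x5^2 = s_obs^2 + s_within^2 / 5
s_within2 = (sd_1x1**2 - sd_1x5**2) * 5 / 4
s_obs2 = sd_1x1**2 - s_within2

def sd(n_obs, m_analyses):
    """SD of an LVEF value averaged over n observers x m analyses each."""
    return math.sqrt(s_obs2 / n_obs + s_within2 / (n_obs * m_analyses))

print(f"sigma_obs = {math.sqrt(s_obs2):.2f}%, sigma_within = {math.sqrt(s_within2):.2f}%")
print(f"5 observers x 1 analysis each: SD = {sd(5, 1):.2f}%")  # abstract: 1.77%
```

Note that averaging over observers reduces both components, while extra analyses by one observer reduce only the within-observer term, which is exactly the asymmetry the abstract reports.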

  5. A comprehensive study on rotation reversal in KSTAR: experimental observations and modelling

    Science.gov (United States)

    Na, D. H.; Na, Yong-Su; Angioni, C.; Yang, S. M.; Kwon, J. M.; Jhang, Hogun; Camenen, Y.; Lee, S. G.; Shi, Y. J.; Ko, W. H.; Lee, J. A.; Hahm, T. S.; KSTAR Team

    2017-12-01

    Dedicated experiments have been performed in KSTAR Ohmic plasmas to investigate the detailed physics of the rotation reversal phenomena. Here we adopt the more general definition of rotation reversal, a large change of the intrinsic toroidal rotation gradient produced by minor changes in the control parameters (Camenen et al 2017 Plasma Phys. Control. Fusion 59 034001), which is commonly observed in KSTAR regardless of the operating conditions. The two main phenomenological features of the rotation reversal are the change of the normalized toroidal rotation gradient (u′) in the gradient region and the existence of an anchor point. For the KSTAR Ohmic plasma database, including the experimental results up to the 2016 campaign, both features were investigated. First, the observations show that the locations of the gradient region and the anchor point are dependent on q95. Second, a strong dependence of u′ on ν_eff is clearly observed in the gradient region, whereas the dependence on R/L_Ti, R/L_Te, and R/L_ne is unclear considering the usual variation of the normalized gradient lengths in KSTAR. The experimental observations were compared against several theoretical models. The rotation reversal might not occur due to the transition of the dominant turbulence from the trapped electron mode to the ion temperature gradient mode or to the neoclassical equilibrium effect in KSTAR. Instead, it seems that the profile shearing effects associated with a finite ballooning tilting reproduce well the experimental observations of both the gradient region and the anchor point; the difference seems to be related to the magnetic shear and the q value. Further analysis implies that the increase of u′ in the gradient region with the increase of the collisionality would occur when the reduction of the momentum diffusivity is comparatively larger than the reduction of the residual stress. It is supported by the perturbative

  6. Reproducing an extreme flood with uncertain post-event information

    Directory of Open Access Journals (Sweden)

    D. Fuentes-Andino

    2017-07-01

    Full Text Available Studies for the prevention and mitigation of floods require information on discharge and extent of inundation, commonly unavailable or uncertain, especially during extreme events. This study was initiated by the devastating flood in Tegucigalpa, the capital of Honduras, when Hurricane Mitch struck the city. In this study we hypothesized that it is possible to estimate, in a trustworthy way considering large data uncertainties, this extreme 1998 flood discharge and the extent of the inundations that followed from a combination of models and post-event measured data. Post-event data collected in 2000 and 2001 were used to estimate discharge peaks, times of peak, and high-water marks. These data were used in combination with rain data from two gauges to drive and constrain a combination of well-known modelling tools: TOPMODEL, Muskingum–Cunge–Todini routing, and the LISFLOOD-FP hydraulic model. Simulations were performed within the generalized likelihood uncertainty estimation (GLUE) uncertainty-analysis framework. The model combination predicted peak discharge, times of peaks, and more than 90 % of the observed high-water marks within the uncertainty bounds of the evaluation data. This allowed an inundation likelihood map to be produced. Observed high-water marks could not be reproduced at a few locations on the floodplain. Identification of these locations is useful to improve model set-up, model structure, or post-event data-estimation methods. Rainfall data were of central importance in simulating the times of peak, and results would be improved by a better spatial assessment of rainfall, e.g. from radar data or a denser rain-gauge network. Our study demonstrated that it was possible, considering the uncertainty in the post-event data, to reasonably reproduce the extreme Mitch flood in Tegucigalpa in spite of no hydrometric gauging during the event. The method proposed here can be part of a Bayesian framework in which more events
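The GLUE procedure can be illustrated in miniature: Monte Carlo sampling of a parameter, an informal likelihood (here the Nash-Sutcliffe efficiency), a subjective behavioral threshold, and uncertainty bounds taken over the behavioral sets. The rainfall series, the single-parameter linear-reservoir model, and all numerical choices below are illustrative assumptions, not the TOPMODEL/LISFLOOD-FP setup of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, 60)                 # synthetic daily rainfall (mm)

def linear_reservoir(rain, k):
    """Toy runoff model: a storage that drains at fraction k per time step."""
    q, s = np.empty_like(rain), 0.0
    for t, r in enumerate(rain):
        s += r
        q[t] = k * s
        s -= q[t]
    return q

# Pretend "observations": the model with k = 0.3 plus measurement noise
q_obs = linear_reservoir(rain, 0.3) + rng.normal(0, 0.3, rain.size)

# GLUE: Monte Carlo sampling + informal likelihood + behavioral threshold
k_samples = rng.uniform(0.05, 0.9, 5000)
nse = np.array([1 - np.sum((q_obs - linear_reservoir(rain, k))**2)
                    / np.sum((q_obs - q_obs.mean())**2) for k in k_samples])
behavioral = k_samples[nse > 0.8]              # acceptance threshold (subjective)

lo, hi = np.percentile(behavioral, [5, 95])
print(f"{behavioral.size} behavioral sets; 90% bounds for k: [{lo:.2f}, {hi:.2f}]")
```

In the real application each "parameter set" spans all three model components, and the behavioral simulations are weighted to produce the inundation likelihood map.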

  7. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  8. A model based approach in observing the activity of neuronal populations for the prediction of epileptic seizures

    International Nuclear Information System (INIS)

    Chong, M.S.; Nesic, D.; Kuhlmann, L.; Postoyan, R.; Varsavsky, A.; Cook, M.

    2010-01-01

    Full text: Epilepsy is a common neurological disease that affects 0.5-1 % of the world's population. In cases where known treatments cannot achieve complete recovery, seizure prediction is essential so that preventive measures can be undertaken to prevent resultant injury. The electroencephalogram (EEG) is a widely used diagnostic tool for epilepsy. However, the EEG does not provide a detailed view of the underlying seizure-causing neuronal mechanisms. Knowing the dynamics of the neuronal population is useful because tracking the evolution of the neuronal mechanisms will allow us to track the brain's progression from interictal to ictal state. Wendling and colleagues proposed a parameterised mathematical model that represents the activity of interconnected neuronal populations. By modifying the parameters, this model is able to reproduce signals that are very similar to the real EEG, depicting commonly observed patterns during interictal and ictal periods. The transition from non-seizure to seizure activity, as seen in the EEG, is hypothesised to be due to the impairment of inhibition. Using Wendling's model, we designed a deterministic nonlinear estimator to recover the average membrane potential of the neuronal populations from a single-channel EEG signal, for any fixed and known parameter values. Our nonlinear estimator is analytically proven to asymptotically converge to the true state of the model and illustrated in simulations. We were able to computationally observe the dynamics of the three neuronal populations described in the model: excitatory, fast and slow inhibitory populations. This forms a first step towards the prediction of epileptic seizures. (author)
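The estimator idea can be demonstrated on a minimal linear stand-in rather than Wendling's neural mass model: a Luenberger observer reconstructs both states of a two-state system from a single measured output, with the estimation error converging to zero as the abstract describes for the nonlinear case. All matrices below are illustrative choices, not parameters from the model.

```python
import numpy as np

# Two-state linear system x' = A x with a single measurement y = C x.
# The observer xh' = A xh + L (y - C xh) drives the estimation error
# e = x - xh to zero whenever (A - L C) is a stable (Hurwitz) matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[4.0],
              [6.0]])          # gain chosen so that A - L C is Hurwitz

dt, steps = 0.001, 20000
x = np.array([1.0, -1.0])       # true (hidden) state
xh = np.zeros(2)                # observer starts with no knowledge
for _ in range(steps):
    y = C @ x                                            # measured output only
    x = x + dt * (A @ x)                                 # plant (Euler step)
    xh = xh + dt * (A @ xh + (L @ (y - C @ xh)).ravel()) # observer correction

err = np.linalg.norm(x - xh)
print(f"estimation error after {steps * dt:.0f} s: {err:.2e}")
```

The observer only ever sees y, yet recovers the unmeasured second state; the nonlinear estimator in the study plays the same role for the unmeasured membrane potentials.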

  9. Impacts of bromine and iodine chemistry on tropospheric OH and HO2: comparing observations with box and global model perspectives

    Science.gov (United States)

    Stone, Daniel; Sherwen, Tomás; Evans, Mathew J.; Vaughan, Stewart; Ingham, Trevor; Whalley, Lisa K.; Edwards, Peter M.; Read, Katie A.; Lee, James D.; Moller, Sarah J.; Carpenter, Lucy J.; Lewis, Alastair C.; Heard, Dwayne E.

    2018-03-01

    The chemistry of the halogen species bromine and iodine has a range of impacts on tropospheric composition, and can affect oxidising capacity in a number of ways. However, recent studies disagree on the overall sign of the impacts of halogens on the oxidising capacity of the troposphere. We present simulations of OH and HO2 radicals for comparison with observations made in the remote tropical ocean boundary layer during the Seasonal Oxidant Study at the Cape Verde Atmospheric Observatory in 2009. We use both a constrained box model, using detailed chemistry derived from the Master Chemical Mechanism (v3.2), and the three-dimensional global chemistry transport model GEOS-Chem. Both model approaches reproduce the diurnal trends in OH and HO2. Absolute observed concentrations are well reproduced by the box model but are overpredicted by the global model, potentially owing to incomplete consideration of oceanic sourced radical sinks. The two models, however, differ in the impacts of halogen chemistry. In the box model, halogen chemistry acts to increase OH concentrations (by 9.8 % at midday at the Cape Verde Atmospheric Observatory), while the global model exhibits a small increase in OH at the Cape Verde Atmospheric Observatory (by 0.6 % at midday) but overall shows a decrease in the global annual mass-weighted mean OH of 4.5 %. These differences reflect the variety of timescales through which the halogens impact the chemical system. On short timescales, photolysis of HOBr and HOI, produced by reactions of HO2 with BrO and IO, respectively, increases the OH concentration. On longer timescales, halogen-catalysed ozone destruction cycles lead to lower primary production of OH radicals through ozone photolysis, and thus to lower OH concentrations. The global model includes more of the longer timescale responses than the constrained box model, and overall the global impact of the longer timescale response (reduced primary production due to lower O3 concentrations

  10. An observational and modeling study of the regional impacts of climate variability

    Science.gov (United States)

    Horton, Radley M.

    Climate variability has large impacts on humans and their agricultural systems. Farmers are at the center of this agricultural network, but it is often agricultural planners---regional planners, extension agents, commodity groups and cooperatives---that translate climate information for users. Global climate models (GCMs) are a leading tool for understanding and predicting climate and climate change. Armed with climate projections and forecasts, agricultural planners adapt their decision-making to optimize outcomes. This thesis explores what GCMs can, and cannot, tell us about climate variability and change at regional scales. The question is important, since high-quality regional climate projections could assist farmers and regional planners in key management decisions, contributing to better agricultural outcomes. To answer these questions, climate variability and its regional impacts are explored in observations and models for the current and future climate. The goals are to identify impacts of observed variability, assess model simulation of variability, and explore how climate variability and its impacts may change under enhanced greenhouse warming. Chapter One explores how well Goddard Institute for Space Studies (GISS) atmospheric models, forced by historical sea surface temperatures (SST), simulate climatology and large-scale features during the exceptionally strong 1997-1999 El Nino Southern Oscillation (ENSO) cycle. Reasonable performance in this 'proof of concept' test is considered a minimum requirement for further study of variability in models. All model versions produce appropriate local changes with ENSO, indicating that with correct ocean temperatures these versions are capable of simulating the large-scale effects of ENSO around the globe. A high vertical resolution model (VHR) provides the best simulation. Evidence is also presented that SST anomalies outside the tropical Pacific may play a key role in generating remote teleconnections even

  11. Modelling dust polarization observations of molecular clouds through MHD simulations

    Science.gov (United States)

    King, Patrick K.; Fissel, Laura M.; Chen, Che-Yu; Li, Zhi-Yun

    2018-03-01

    The BLASTPol observations of Vela C have provided the most detailed characterization of the polarization fraction p and dispersion in polarization angles S for a molecular cloud. We compare the observed distributions of p and S with those obtained in synthetic observations of simulations of molecular clouds, assuming homogeneous grain alignment. We find that the orientation of the mean magnetic field relative to the observer has a significant effect on the p and S distributions. These distributions for Vela C are most consistent with synthetic observations where the mean magnetic field is close to the line of sight. Our results point to apparent magnetic disorder in the Vela C molecular cloud, although it can be due to either an inclination effect (i.e. observing close to the mean field direction) or significant field tangling from strong turbulence/low magnetization. The joint correlations of p with column density and of S with column density for the synthetic observations generally agree poorly with the Vela C joint correlations, suggesting that understanding these correlations requires a more sophisticated treatment of grain alignment physics.
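For reference, p and S are derived from the Stokes parameters I, Q and U. The sketch below computes p = sqrt(Q² + U²)/I and a simplified per-pixel angle dispersion on synthetic maps; the BLASTPol definition of S averages angle differences over an annulus of fixed radius, whereas for brevity only four fixed-offset neighbours are used here, and the input maps are invented for illustration.

```python
import numpy as np

def pol_fraction(I, Q, U):
    """Polarization fraction p = sqrt(Q^2 + U^2) / I."""
    return np.hypot(Q, U) / I

def angle_dispersion(Q, U, delta=1):
    """Dispersion S: rms difference between the polarization angle at each
    pixel and at pixels offset by delta (simplified from the annulus average
    used in the BLASTPol analysis)."""
    psi = 0.5 * np.arctan2(U, Q)                 # polarization angle
    S2, n = np.zeros_like(psi), 0
    for dx, dy in [(delta, 0), (-delta, 0), (0, delta), (0, -delta)]:
        dpsi = psi - np.roll(np.roll(psi, dx, axis=0), dy, axis=1)
        # wrap angle differences into [-pi/2, pi/2), since psi is mod pi
        dpsi = (dpsi + np.pi / 2) % np.pi - np.pi / 2
        S2 += dpsi**2
        n += 1
    return np.sqrt(S2 / n)

rng = np.random.default_rng(2)
I = np.ones((32, 32))                            # synthetic Stokes maps
Q = 0.1 + 0.02 * rng.standard_normal((32, 32))
U = 0.02 * rng.standard_normal((32, 32))

p, S = pol_fraction(I, Q, U), angle_dispersion(Q, U)
print(f"mean p = {p.mean():.3f}, mean S = {np.degrees(S.mean()):.1f} deg")
```

A mean field close to the line of sight suppresses Q and U relative to I, lowering p and raising S, which is the signature the synthetic observations are matched against.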

  12. Systemic thioridazine in combination with dicloxacillin against early aortic graft infections caused by Staphylococcus aureus in a porcine model: In vivo results do not reproduce the in vitro synergistic activity.

    Directory of Open Access Journals (Sweden)

    Michael Stenger

    Full Text Available Conservative treatment solutions against aortic prosthetic vascular graft infection (APVGI) for inoperable patients are limited. The combination of antibiotics with antibacterial helper compounds, such as the neuroleptic drug thioridazine (TDZ), should be explored. To investigate the efficacy of conservative systemic treatment with dicloxacillin (DCX) in combination with TDZ (DCX+TDZ), compared to DCX alone, against early APVGI caused by methicillin-sensitive Staphylococcus aureus (MSSA) in a porcine model. The synergism of DCX+TDZ against MSSA was initially assessed in vitro by viability assay. Thereafter, thirty-two pigs had polyester grafts implanted in the infrarenal aorta, followed by inoculation with 10⁶ CFU of MSSA, and were randomly administered oral systemic treatment with either (1) DCX or (2) DCX+TDZ. Treatment was initiated one week postoperatively and continued for a further 21 days. Weight, temperature, and blood samples were collected at predefined intervals. By termination, bacterial quantities from the graft surface, graft material, and perigraft tissue were obtained. Despite in vitro synergism, the porcine experiment revealed no statistical differences for bacteriological endpoints between the two treatment groups, and neither of the treatments eradicated the APVGI. Accordingly, the mixed model analyses of weight, temperature, and blood samples revealed no statistical differences. Conservative systemic treatment with DCX+TDZ did not reproduce the in vitro results against APVGI caused by MSSA in this porcine model. However, unexpected severe adverse effects related to the planned dose of TDZ required a considerable reduction to the administered dose of TDZ, which may have compromised the results.

  13. Observational properties of models of semidetached close binaries. Pt. 2

    International Nuclear Information System (INIS)

    Giannone, P.; Giannuzzi, M.A.; Pucillo, M.

    1975-01-01

    Binaries of Cases A and B with intermediate and small masses have been studied. Synthetic light curves are shown to be affected mainly by the assumption concerning the shape of the components. The comparison between synthetic light curves and observed data can give further information on the reliability of the hypotheses assumed in the computations of binary star evolution. The calculated properties lead to useful indications about the evolutionary stages of observed binaries. The detection of systems evolving according to Case A appears to be favoured in comparison with that of systems of Case B. Systems with undersize subgiants prove comparatively difficult to observe. (orig./BJ)

  14. Ecosystem function in complex mountain terrain: Combining models and long-term observations to advance process-based understanding

    Science.gov (United States)

    Wieder, William R.; Knowles, John F.; Blanken, Peter D.; Swenson, Sean C.; Suding, Katharine N.

    2017-04-01

    Abiotic factors structure plant community composition and ecosystem function across many different spatial scales. Often, such variation is considered at regional or global scales, but here we ask whether ecosystem-scale simulations can be used to better understand landscape-level variation that might be particularly important in complex terrain, such as high-elevation mountains. We performed ecosystem-scale simulations by using the Community Land Model (CLM) version 4.5 to better understand how the increased length of growing seasons may impact carbon, water, and energy fluxes in an alpine tundra landscape. The model was forced with meteorological data and validated with observations from the Niwot Ridge Long Term Ecological Research Program site. Our results demonstrate that CLM is capable of reproducing the observed carbon, water, and energy fluxes for discrete vegetation patches across this heterogeneous ecosystem. We subsequently accelerated snowmelt and increased spring and summer air temperatures in order to simulate potential effects of climate change in this region. We found that vegetation communities that were characterized by different snow accumulation dynamics showed divergent biogeochemical responses to a longer growing season. Contrary to expectations, wet meadow ecosystems showed the strongest decreases in plant productivity under extended summer scenarios because of disruptions in hydrologic connectivity. These findings illustrate how Earth system models such as CLM can be used to generate testable hypotheses about the shifting nature of energy, water, and nutrient limitations across space and through time in heterogeneous landscapes; these hypotheses may ultimately guide further experimental work and model development.
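Validating a land model against tower observations, as done for CLM at Niwot Ridge, typically reduces to a handful of point metrics per flux. A generic sketch with synthetic flux series; the diurnal cycle, noise levels, and the "model" bias below are invented for illustration:

```python
import numpy as np

def skill(sim, obs):
    """Standard point-validation metrics: mean bias, RMSE, Pearson r."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs)**2))
    r = np.corrcoef(sim, obs)[0, 1]
    return bias, rmse, r

# Synthetic half-hourly latent-heat flux (W m^-2) with a diurnal cycle
rng = np.random.default_rng(3)
t = np.arange(0, 5, 1 / 48)                      # five days, half-hourly
obs = np.maximum(0, 200 * np.sin(2 * np.pi * t)) + rng.normal(0, 10, t.size)
sim = 0.9 * obs + 5 + rng.normal(0, 10, t.size)  # a slightly biased "model"

bias, rmse, r = skill(sim, obs)
print(f"bias = {bias:.1f} W m^-2, RMSE = {rmse:.1f} W m^-2, r = {r:.2f}")
```

Computed separately for each vegetation patch, metrics like these are what support the claim that CLM reproduces the observed carbon, water, and energy fluxes across the heterogeneous tundra landscape.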

  15. Szekeres Swiss-cheese model and supernova observations

    International Nuclear Information System (INIS)

    Bolejko, Krzysztof; Celerier, Marie-Noëlle

    2010-01-01

    We use different particular classes of axially symmetric Szekeres Swiss-cheese models for the study of the apparent dimming of the supernovae of type Ia. We compare the results with those obtained in the corresponding Lemaitre-Tolman Swiss-cheese models. Although the quantitative picture is different the qualitative results are comparable, i.e., one cannot fully explain the dimming of the supernovae using small-scale (∼50 Mpc) inhomogeneities. To fit successfully the data we need structures of order of 500 Mpc size or larger. However, this result might be an artifact due to the use of axial light rays in axially symmetric models. Anyhow, this work is a first step in trying to use Szekeres Swiss-cheese models in cosmology and it will be followed by the study of more physical models with still less symmetry.

  16. Reproducibility of surface roughness in reaming

    DEFF Research Database (Denmark)

    Müller, Pavel; De Chiffre, Leonardo

    An investigation on the reproducibility of surface roughness in reaming was performed to document the applicability of this approach for testing cutting fluids. Austenitic stainless steel was used as a workpiece material and HSS reamers as cutting tools. Reproducibility of the results was evaluat...

  17. WRF-Chem model simulations of a dust outbreak over the central Mediterranean and comparison with multi-sensor desert dust observations

    Science.gov (United States)

    Rizza, Umberto; Barnaba, Francesca; Marcello Miglietta, Mario; Mangia, Cristina; Di Liberto, Luca; Dionisi, Davide; Costabile, Francesca; Grasso, Fabio; Gobbi, Gian Paolo

    2017-01-01

    In this study, the Weather Research and Forecasting model with online coupled chemistry (WRF-Chem) is applied to simulate an intense Saharan dust outbreak event that took place over the Mediterranean in May 2014. Comparison of a simulation using a physics-based desert dust emission scheme with a numerical experiment using a simplified (minimal) emission scheme is included to highlight the advantages of the former. The model was found to reproduce well the synoptic meteorological conditions driving the dust outbreak: an omega-like pressure configuration associated with a cyclogenesis in the Atlantic coasts of Spain. The model performances in reproducing the atmospheric desert dust load were evaluated using a multi-platform observational dataset of aerosol and desert dust properties, including optical properties from satellite and ground-based sun photometers and lidars, plus in situ particulate matter mass concentration (PM) data. This comparison allowed us to investigate the model ability in reproducing both the horizontal and the vertical displacement of the dust plume, as well as its evolution in time. The comparison with satellite (MODIS-Terra) and sun photometers (AERONET) showed that the model is able to reproduce well the horizontal field of the aerosol optical depth (AOD) and its evolution in time (temporal correlation coefficient with AERONET of 0.85). On the vertical scale, the comparison with lidar data at a single site (Rome, Italy) confirms that the desert dust advection occurs in several, superimposed "pulses" as simulated by the model. Cross-analysis of the modeled AOD and desert dust emission fluxes further allowed for the source regions of the observed plumes to be inferred. The vertical displacement of the modeled dust plume was in rather good agreement with the lidar soundings, with correlation coefficients among aerosol extinction profiles up to 1 and mean discrepancy of about 50 %. The model-measurement comparison for PM10 and PM2.5 showed a

  18. On the dependence of the OH* Meinel emission altitude on vibrational level: SCIAMACHY observations and model simulations

    Directory of Open Access Journals (Sweden)

    J. P. Burrows

    2012-09-01

    Full Text Available Measurements of the OH Meinel emissions in the terrestrial nightglow are one of the standard ground-based techniques to retrieve upper mesospheric temperatures. It is often assumed that the emission peak altitudes are not strongly dependent on the vibrational level, although this assumption is not based on convincing experimental evidence. In this study we use Envisat/SCIAMACHY (Scanning Imaging Absorption spectroMeter for Atmospheric CHartographY) observations in the near-IR spectral range to retrieve vertical volume emission rate profiles of the OH(3-1), OH(6-2) and OH(8-3) Meinel bands in order to investigate whether systematic differences in emission peak altitudes can be observed between the different OH Meinel bands. The results indicate that the emission peak altitudes are different for the different vibrational levels, with bands originating from higher vibrational levels having higher emission peak altitudes. It is shown that this finding is consistent with the majority of the previously published results. The SCIAMACHY observations yield differences in emission peak altitudes of up to about 4 km between the OH(3-1) and the OH(8-3) band. The observations are complemented by model simulations of the fractional population of the different vibrational levels and of the vibrational level dependence of the emission peak altitude. The model simulations reproduce the observed vibrational level dependence of the emission peak altitude well – both qualitatively and quantitatively – if quenching by atomic oxygen as well as multi-quantum collisional relaxation by O2 is considered. If a linear relationship between emission peak altitude and vibrational level is assumed, then a peak altitude difference of roughly 0.5 km per vibrational level is inferred from both the SCIAMACHY observations and the model simulations.
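The quoted ~0.5 km per vibrational level corresponds to a simple linear fit of peak altitude against the upper vibrational level v. A sketch with assumed peak altitudes (not SCIAMACHY retrievals), chosen only to be consistent with that reported slope:

```python
import numpy as np

# Upper vibrational levels of the three bands and illustrative peak
# altitudes (km) -- assumed values, not measured SCIAMACHY profiles
v = np.array([3, 6, 8])
z = np.array([86.5, 88.0, 89.0])

# Least-squares straight line z = slope * v + intercept
slope, intercept = np.polyfit(v, z, 1)
print(f"peak altitude increases by {slope:.2f} km per vibrational level")
```

With real retrievals the same fit would be weighted by the altitude uncertainty of each band's emission rate profile.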

  19. Reproducibility principles, problems, practices, and prospects

    CERN Document Server

    Maasen, Sabine

    2016-01-01

    Featuring peer-reviewed contributions from noted experts in their fields of research, Reproducibility: Principles, Problems, Practices, and Prospects presents state-of-the-art approaches to reproducibility, the gold standard of sound science, from multi- and interdisciplinary perspectives. Including comprehensive coverage for implementing and reflecting the norm of reproducibility in various pertinent fields of research, the book focuses on how the reproducibility of results is applied, how it may be limited, and how such limitations can be understood or even controlled in the natural sciences, computational sciences, life sciences, social sciences, and studies of science and technology. The book presents many chapters devoted to a variety of methods and techniques, as well as their epistemic and ontological underpinnings, which have been developed to safeguard reproducible research and curtail deficits and failures. The book also investigates the political, historical, and social practices that underlie repro...

  20. 3D Modeling of CMEs observed with STEREO

    Science.gov (United States)

    Bosman, E.; Bothmer, V.

    2012-04-01

    From January 2007 until end of 2010, 565 typical large-scale coronal mass ejections (CMEs) have been identified in the SECCHI/COR2 synoptic movies of the STEREO Mission. A subset comprising 114 CME events, selected based on the CME's brightness appearance in the SECCHI/COR2 images, has been modeled through the Graduated Cylindrical Shell (GCS) Model developed by Thernisien et al. (2006). This study presents an overview of the GCS forward-modeling results and an interpretation of the CME characteristics in relationship to their solar source region properties and solar cycle appearances.

  1. Observations and models of the decimetric radio emission from Jupiter

    International Nuclear Information System (INIS)

    Pater, I. de.

    1980-01-01

    The high energy electron distribution as a function of energy, pitch angle and spatial coordinates in Jupiter's inner magnetosphere was derived from a comparison of radio data and model calculations of Jupiter's synchrotron radiation. (Auth.)

  2. A sliding mode observer for hemodynamic characterization under modeling uncertainties

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2014-01-01

    This paper addresses the case of physiological states reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial

  3. A Computational Model for Observation in Quantum Mechanics.

    Science.gov (United States)

    1987-03-16

    [Table-of-contents excerpt recovered from the report: Interferometer experiment; 2.3 The EPR Paradox experiment; 3 The Computational Model, an Overview; 4 Implementation; 4.4 Code for the EPR paradox experiment; 4.5 Code for the double slit interferometer experiment; 5 Conclusions.] [...] particle run counter to fact. The EPR paradox experiment (see section 2.3) is hard to resolve with this class of models, collectively called hidden

  4. Understanding Transient Forcing with Plasma Instability Model, Ionospheric Propagation Model and GNSS Observations

    Science.gov (United States)

    Deshpande, K.; Zettergren, M. D.; Datta-Barua, S.

    2017-12-01

    Fluctuations in the Global Navigation Satellite Systems (GNSS) signals observed as amplitude and phase scintillations are produced by plasma density structures in the ionosphere. Phase scintillation events in particular occur due to structures at Fresnel scales, typically about 250 meters at ionospheric heights and GNSS frequency. Likely processes contributing to small-scale density structuring in auroral and polar regions include ionospheric gradient-drift instability (GDI) and Kelvin-Helmholtz instability (KHI), which result, generally, from magnetosphere-ionosphere interactions (e.g. reconnection) associated with cusp and auroral zone regions. Scintillation signals, ostensibly from either GDI or KHI, are frequently observed in the high latitude ionosphere and are potentially useful diagnostics of how energy from the transient forcing in the cusp or polar cap region cascades, via instabilities, to small scales. However, extracting quantitative details of instabilities leading to scintillation using GNSS data drastically benefits from both a model of the irregularities and a model of GNSS signal propagation through irregular media. This work uses a physics-based model of the generation of plasma density irregularities (GEMINI - Geospace Environment Model of Ion-Neutral Interactions) coupled to an ionospheric radio wave propagation model (SIGMA - Satellite-beacon Ionospheric-scintillation Global Model of the upper Atmosphere) to explore the cascade of density structures from medium to small (sub-kilometer) scales. Specifically, GEMINI-SIGMA is used to simulate expected scintillation from different instabilities during various stages of evolution to determine features of the scintillation that may be useful to studying ionospheric density structures. Furthermore we relate the instabilities producing GNSS scintillations to the transient space and time-dependent magnetospheric phenomena and further predict characteristics of scintillation in different geophysical
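
    The "about 250 meters at ionospheric heights and GNSS frequency" quoted above follows from the standard Fresnel-scale estimate r_F = sqrt(λ·z) (one common convention; some authors include a factor of 2 under the root). A quick check, assuming a GPS L1 wavelength of ~0.19 m and an irregularity-layer height of ~350 km:

    ```python
    import math

    # Fresnel-scale estimate r_F = sqrt(lambda * z) for GNSS scintillation.
    # Wavelength and layer height below are nominal assumed values.
    WAVELENGTH_L1_M = 0.19    # GPS L1 (~1575.42 MHz) wavelength in metres
    IONO_ALTITUDE_M = 350e3   # assumed height of the irregularity layer

    def fresnel_scale(wavelength_m, distance_m):
        """First Fresnel-zone radius for a wave passing through a thin phase screen."""
        return math.sqrt(wavelength_m * distance_m)

    r_f = fresnel_scale(WAVELENGTH_L1_M, IONO_ALTITUDE_M)
    print(f"Fresnel scale: {r_f:.0f} m")  # roughly 250 m, as quoted above
    ```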

  5. Asteroseismic observations and modelling of 70 Ophiuchi AB

    Energy Technology Data Exchange (ETDEWEB)

    Eggenberger, P; Miglio, A [Institut d' Astrophysique et de Geophysique de l' Universite de Liege, 17 Allee du 6 Aout, B-4000 Liege (Belgium); Carrier, F [Institute of Astronomy, University of Leuven, Celestijnenlaan 200 D, B-3001 Leuven (Belgium); Fernandes, J [Observatorio Astronomico da Universidade de Coimbra e Departamento de Matematica, FCTUC (Portugal); Santos, N C [Centro de AstrofIsica, Universidade do Porto, Rua das Estrelas, P-4150-762 Porto (Portugal)], E-mail: eggenberger@astro.ulg.ac.be

    2008-10-15

    The analysis of solar-like oscillations for stars belonging to a binary system provides an opportunity to probe the internal stellar structure and to test our knowledge of stellar physics. We present asteroseismic observations of 70 Oph A performed with the HARPS spectrograph together with a comprehensive theoretical calibration of the 70 Ophiuchi system.

  6. Intelligent Cognitive Radio Models for Enhancing Future Radio Astronomy Observations

    Directory of Open Access Journals (Sweden)

    Ayodele Abiola Periola

    2016-01-01

    Full Text Available Radio astronomy organisations desire to optimise terrestrial radio astronomy observations by mitigating interference and enhancing angular resolution. Ground telescopes (GTs) experience interference from intersatellite links (ISLs). Astronomy source radio signals received by GTs are analysed at the high performance computing (HPC) infrastructure. Furthermore, observation limitation conditions prevent GTs from conducting radio astronomy observations all the time, thereby causing low HPC utilisation. This paper proposes mechanisms that protect GTs from ISL interference without permanently preventing ISL data transmission, and that enhance angular resolution. The ISL transmits data by taking advantage of similarities in the sequence of observed astronomy sources to increase ISL connection duration. In addition, the paper proposes a mechanism that enhances angular resolution by using reconfigurable earth stations. Furthermore, the paper presents the opportunistic computing scheme (OCS) to enhance HPC utilisation. OCS enables the underutilised HPC to be used to train learning algorithms of a cognitive base station. The performances of the three mechanisms are evaluated. Simulations show that the proposed mechanisms protect GTs from ISL interference, enhance angular resolution, and improve HPC utilisation.

  7. NACP Regional: Original Observation Data and Biosphere and Inverse Model Outputs

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains the originally-submitted observation measurement data, terrestrial biosphere model output data, and inverse model simulations that various...

  9. Citizen observations contributing to flood modelling: opportunities and challenges

    Directory of Open Access Journals (Sweden)

    T. H. Assumpção

    2018-02-01

    Full Text Available Citizen contributions to science have been successfully implemented in many fields, and water resources is one of them. Through citizens, it is possible to collect data and obtain a more integrated decision-making process. Specifically, data scarcity has always been an issue in flood modelling, which has been addressed in the last decades by remote sensing and is already being discussed in the citizen science context. With this in mind, this article aims to review the literature on the topic and analyse the opportunities and challenges that lie ahead. The literature on monitoring, mapping and modelling was evaluated according to the flood-related variable citizens contributed to. Pros and cons of the collection/analysis methods were summarised. Then, pertinent publications were mapped into the flood modelling cycle, considering how citizen data properties (spatial and temporal coverage, uncertainty and volume) are related to its integration into modelling. It was clear that the number of studies in the area is rising. There are positive experiences reported in collection and analysis methods, for instance with velocity and land cover, and also when modelling is concerned, for example by using social media mining. However, matching the data properties necessary for each part of the modelling cycle with citizen-generated data is still challenging. Nevertheless, the concept that citizen contributions can be used for simulation and forecasting is proved and further work lies in continuing to develop and improve not only methods for collection and analysis, but certainly for integration into models as well. Finally, in view of recent automated sensors and satellite technologies, it is through studies such as the ones analysed in this article that the value of citizen contributions, complementing such technologies, is demonstrated.

  10. Citizen observations contributing to flood modelling: opportunities and challenges

    Science.gov (United States)

    Assumpção, Thaine H.; Popescu, Ioana; Jonoski, Andreja; Solomatine, Dimitri P.

    2018-02-01

    Citizen contributions to science have been successfully implemented in many fields, and water resources is one of them. Through citizens, it is possible to collect data and obtain a more integrated decision-making process. Specifically, data scarcity has always been an issue in flood modelling, which has been addressed in the last decades by remote sensing and is already being discussed in the citizen science context. With this in mind, this article aims to review the literature on the topic and analyse the opportunities and challenges that lie ahead. The literature on monitoring, mapping and modelling was evaluated according to the flood-related variable citizens contributed to. Pros and cons of the collection/analysis methods were summarised. Then, pertinent publications were mapped into the flood modelling cycle, considering how citizen data properties (spatial and temporal coverage, uncertainty and volume) are related to its integration into modelling. It was clear that the number of studies in the area is rising. There are positive experiences reported in collection and analysis methods, for instance with velocity and land cover, and also when modelling is concerned, for example by using social media mining. However, matching the data properties necessary for each part of the modelling cycle with citizen-generated data is still challenging. Nevertheless, the concept that citizen contributions can be used for simulation and forecasting is proved and further work lies in continuing to develop and improve not only methods for collection and analysis, but certainly for integration into models as well. Finally, in view of recent automated sensors and satellite technologies, it is through studies such as the ones analysed in this article that the value of citizen contributions, complementing such technologies, is demonstrated.

  11. Modeling and observational constraints on the sulfur cycle in the marine troposphere: a focus on reactive halogens and multiphase chemistry

    Science.gov (United States)

    Chen, Q.; Breider, T.; Schmidt, J.; Sherwen, T.; Evans, M. J.; Xie, Z.; Quinn, P.; Bates, T. S.; Alexander, B.

    2017-12-01

    The radiative forcing from marine boundary layer clouds is still highly uncertain, which partly stems from our poor understanding of cloud condensation nuclei (CCN) formation. The oxidation of dimethyl sulfide (DMS) and subsequent chemical evolution of its products (e.g. DMSO) are key processes in CCN formation, but are generally very simplified in large-scale models. Recent research has pointed out the importance of reactive halogens (e.g. BrO and Cl) and multiphase chemistry in the tropospheric sulfur cycle. In this study, we implement a series of sulfur oxidation mechanisms into the GEOS-Chem global chemical transport model, involving both gas-phase and multiphase oxidation of DMS, DMSO, MSIA and MSA, to improve our understanding of the sulfur cycle in the marine troposphere. DMS observations from six locations around the globe and MSA/nssSO42- ratio observations from two ship cruises covering a wide range of latitudes and longitudes are used to assess the model. Preliminary results reveal the important role of BrO for DMS oxidation at high latitudes (up to 50% over Southern Ocean). Oxidation of DMS by Cl radicals is small in the model (within 10% in the marine troposphere), probably due to an underrepresentation of Cl sources. Multiphase chemistry (e.g. oxidation by OH and O3 in cloud droplets) is not important for DMS oxidation but is critical for DMSO oxidation and MSA production and removal. In our model, about half of the DMSO is oxidized in clouds, leading to the formation of MSIA, which is further oxidized to form MSA. Overall, with the addition of reactive halogens and multiphase chemistry, the model is able to better reproduce observations of seasonal variations of DMS and MSA/nssSO42- ratios.

  12. Observations & modeling of solar-wind/magnetospheric interactions

    Science.gov (United States)

    Hoilijoki, Sanni; Von Alfthan, Sebastian; Pfau-Kempf, Yann; Palmroth, Minna; Ganse, Urs

    2016-07-01

    The majority of the global magnetospheric dynamics is driven by magnetic reconnection, indicating the need to understand and predict reconnection processes and their global consequences. So far, global magnetospheric dynamics has been simulated using mainly magnetohydrodynamic (MHD) models, which are approximate but fast enough to be executed in real time or near-real time. Due to their fast computation times, MHD models are currently the only possible frameworks for space weather predictions. However, in MHD models reconnection is not treated kinetically. In this presentation we will compare the results from global kinetic (hybrid-Vlasov) and global MHD simulations. Both simulations are compared with in-situ measurements. We will show that the kinetic processes at the bow shock, in the magnetosheath and at the magnetopause affect global dynamics even during steady solar wind conditions. Foreshock processes cause an asymmetry in the magnetosheath plasma, indicating that the plasma entering the magnetosphere is not symmetrical on different sides of the magnetosphere. Behind the bow shock in the magnetosheath kinetic wave modes appear. Some of these waves propagate to the magnetopause and have an effect on the magnetopause reconnection. Therefore we find that kinetic phenomena have a significant role in the interaction between the solar wind and the magnetosphere. While kinetic models cannot be executed in real time currently, they could be used to extract heuristics to be added in the faster MHD models.

  13. Inter- and intra-laboratory study to determine the reproducibility of toxicogenomics datasets.

    Science.gov (United States)

    Scott, D J; Devonshire, A S; Adeleye, Y A; Schutte, M E; Rodrigues, M R; Wilkes, T M; Sacco, M G; Gribaldo, L; Fabbri, M; Coecke, S; Whelan, M; Skinner, N; Bennett, A; White, A; Foy, C A

    2011-11-28

    The application of toxicogenomics as a predictive tool for chemical risk assessment has been under evaluation by the toxicology community for more than a decade. However, it predominantly remains a tool for investigative research rather than for regulatory risk assessment. In this study, we assessed whether the current generation of microarray technology in combination with an in vitro experimental design was capable of generating robust, reproducible data of sufficient quality to show promise as a tool for regulatory risk assessment. To this end, we designed a prospective collaborative study to determine the level of inter- and intra-laboratory reproducibility between three independent laboratories. All test centres (TCs) adopted the same protocols for all aspects of the toxicogenomic experiment including cell culture, chemical exposure, RNA extraction, microarray data generation and analysis. As a case study, the genotoxic carcinogen benzo[a]pyrene (B[a]P) and the human hepatoma cell line HepG2 were used to generate three comparable toxicogenomic data sets. High levels of technical reproducibility were demonstrated using a widely employed gene expression microarray platform. While differences at the global transcriptome level were observed between the TCs, a common subset of B[a]P responsive genes (n=400 gene probes) was identified at all TCs which included many genes previously reported in the literature as B[a]P responsive. These data show promise that the current generation of microarray technology, in combination with a standard in vitro experimental design, can produce robust data that can be generated reproducibly in independent laboratories. Future work will need to determine whether such reproducible in vitro model(s) can be predictive for a range of toxic chemicals with different mechanisms of action and thus be considered as part of future testing regimes for regulatory risk assessment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Observing and modeling nonlinear dynamics in an internal combustion engine

    International Nuclear Information System (INIS)

    Daw, C.S.; Kennel, M.B.; Finney, C.E.; Connolly, F.T.

    1998-01-01

    We propose a low-dimensional, physically motivated, nonlinear map as a model for cyclic combustion variation in spark-ignited internal combustion engines. A key feature is the interaction between stochastic, small-scale fluctuations in engine parameters and nonlinear deterministic coupling between successive engine cycles. Residual cylinder gas from each cycle alters the in-cylinder fuel-air ratio and thus the combustion efficiency in succeeding cycles. The model's simplicity allows rapid simulation of thousands of engine cycles, permitting statistical studies of cyclic-variation patterns and providing physical insight into this technologically important phenomenon. Using symbol statistics to characterize the noisy dynamics, we find good quantitative matches between our model and experimental time-series measurements. copyright 1998 The American Physical Society
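
    The symbol-statistics technique mentioned above can be sketched generically: iterate a noisy one-dimensional return map, partition the state into discrete symbols, and tabulate the frequencies of short symbol sequences. The map below is a stand-in (a noisy logistic map), not the authors' engine model:

    ```python
    import random

    # Generic noisy one-dimensional return map plus binary symbolization,
    # illustrating the kind of symbol statistics described in the abstract.
    # The logistic map here is a stand-in, NOT the authors' engine model.
    def noisy_map(x, r=3.7, noise=0.01, rng=random):
        x_next = r * x * (1.0 - x) + rng.gauss(0.0, noise)
        return min(max(x_next, 0.0), 1.0)   # keep the state in [0, 1]

    def symbolize(series, threshold=0.5):
        """Binary partition: 1 if above threshold, else 0."""
        return [1 if x > threshold else 0 for x in series]

    def word_histogram(symbols, word_len=3):
        """Frequencies of consecutive symbol words (the 'symbol statistics')."""
        hist = {}
        for i in range(len(symbols) - word_len + 1):
            word = tuple(symbols[i:i + word_len])
            hist[word] = hist.get(word, 0) + 1
        return hist

    rng = random.Random(0)
    x, series = 0.3, []
    for _ in range(5000):
        x = noisy_map(x, rng=rng)
        series.append(x)

    hist = word_histogram(symbolize(series))
    # Deterministic structure shows up as strongly unequal word frequencies.
    print(sorted(hist.items(), key=lambda kv: -kv[1])[:3])
    ```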

  15. The cosmological Janus model: comparison with observational data

    Science.gov (United States)

    Petit, Jean-Pierre; Dagostini, Gilles

    2017-01-01

    In 2014 we presented a model based on a system of two coupled field equations to describe two populations of particles, one of positive mass and the other of negative mass. The analysis of this system in the Newtonian approximation shows that masses of the same sign attract according to Newton's law, while masses of opposite signs repel according to an anti-Newton law. This eliminates the runaway phenomenon. The time-dependent exact solution of this system is used to build the bolometric magnitude versus redshift distribution. Comparing the prediction of our model, which requires adjustment of a single parameter, with the data from 740 supernovae highlighting the acceleration of the universe gives an excellent agreement. The comparison is then made with the multi-parametric ΛCDM model.

  16. Investigation of the 2006 Alexandrium fundyense Bloom in the Gulf of Maine: In situ Observations and Numerical Modeling.

    Science.gov (United States)

    Li, Yizhen; He, Ruoying; McGillicuddy, Dennis J; Anderson, Donald M; Keafer, Bruce A

    2009-09-30

    In situ observations and a coupled bio-physical model were used to study the germination, initiation, and development of the Gulf of Maine (GOM) Alexandrium fundyense bloom in 2006. Hydrographic measurements and comparisons with GOM climatology indicate that 2006 was a year with normal coastal water temperature, salinity, current and river runoff conditions. A. fundyense cyst abundance in bottom sediments preceding the 2006 bloom was at a moderate level compared to other recent annual cyst survey data. We used the coupled bio-physical model to hindcast coastal circulation and A. fundyense cell concentrations. Field data including water temperature, salinity, velocity time series and surface A. fundyense cell concentration maps were applied to gauge the model's fidelity. The coupled model is capable of reproducing the hydrodynamics and the temporal and spatial distributions of A. fundyense cell concentration reasonably well. Model hindcast solutions were further used to diagnose physical and biological factors controlling the bloom dynamics. Surface wind fields modulated the bloom's horizontal and vertical distribution. The initial cyst distribution was found to be the dominant factor affecting the severity and the interannual variability of the A. fundyense bloom. Initial cyst abundance for the 2006 bloom was about 50% of that prior to the 2005 bloom. As a result, the time-averaged gulf-wide cell concentration in 2006 was also only about 60% of that in 2005. In addition, weaker alongshore currents and episodic upwelling-favorable winds in 2006 reduced the spatial extent of the bloom as compared with 2005.

  17. Occurrence of blowing snow events at an alpine site over a 10-year period: Observations and modelling

    Science.gov (United States)

    Vionnet, V.; Guyomarc'h, G.; Naaim Bouvet, F.; Martin, E.; Durand, Y.; Bellot, H.; Bel, C.; Puglièse, P.

    2013-05-01

    Blowing snow events control the evolution of the snow pack in mountainous areas and cause inhomogeneous snow distribution. The goal of this study is to identify the main features of blowing snow events at an alpine site and assess the ability of the detailed snowpack model Crocus to reproduce the occurrence of these events in a 1D configuration. We created a database of blowing snow events observed over 10 years at our experimental site. Occurrences of blowing snow events were divided into cases with and without concurrent falling snow. Overall, snow transport is observed during 10.5% of the time in winter and occurs with concurrent falling snow 37.3% of the time. Wind speed and snow age control the frequency of occurrence. Model results illustrate the necessity of taking the wind-dependence of falling snow grain characteristics into account to simulate periods of snow transport and mass fluxes satisfactorily during those periods. The high rate of false alarms produced by the model is investigated in detail for winter 2010/2011 using measurements from snow particle counters.

  18. Impact of deep convection in the tropical tropopause layer in West Africa: in-situ observations and mesoscale modelling

    Directory of Open Access Journals (Sweden)

    F. Fierli

    2011-01-01

    Full Text Available We present the analysis of the impact of convection on the composition of the tropical tropopause layer region (TTL) in West Africa during the AMMA-SCOUT campaign. Geophysica M55 aircraft observations of water vapor, ozone, aerosol and CO2 during August 2006 show perturbed values at altitudes ranging from 14 km to 17 km (above the main convective outflow), and satellite data indicate that air detrainment is likely to have originated from convective clouds east of the flights. Simulations of the BOLAM mesoscale model, nudged with infrared radiance temperatures, are used to estimate the convective impact in the upper troposphere and to assess the fraction of air processed by convection. The analysis shows that BOLAM correctly reproduces the location and the vertical structure of the convective outflow. Model-aided analysis indicates that convection can influence the composition of the upper troposphere above the level of main outflow for an event of deep convection close to the observation site. Model analysis also shows that deep convection occurring over the entire Sahelian transect (up to 2000 km east of the measurement area) has a non-negligible role in determining TTL composition.

  19. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
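
    The EM approach above can be sketched in its simplest form: a two-component Gaussian mixture fitted to radial velocities alone. This is a minimal illustration under that one-dimensional assumption; the study's model also folds in spatial coordinates, which would make each component density multivariate:

    ```python
    import math
    import random

    # Minimal EM sketch: two-component 1-D Gaussian mixture over radial
    # velocities (the actual model also uses spatial coordinates).
    def gauss_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def em_two_gaussians(data, iters=100):
        mu1, mu2 = min(data), max(data)           # crude initialization
        s1 = s2 = (max(data) - min(data)) / 4 or 1.0
        w1 = 0.5
        for _ in range(iters):
            # E-step: responsibility of component 1 for each point
            r = []
            for x in data:
                p1 = w1 * gauss_pdf(x, mu1, s1)
                p2 = (1 - w1) * gauss_pdf(x, mu2, s2)
                r.append(p1 / (p1 + p2))
            # M-step: re-estimate weight, means, and standard deviations
            n1 = sum(r)
            n2 = len(data) - n1
            w1 = n1 / len(data)
            mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
            mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
            s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
            s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
        return (mu1, s1), (mu2, s2), w1

    # Synthetic radial velocities from two overlapping "clusters" (km/s)
    rng = random.Random(1)
    data = [rng.gauss(-20, 3) for _ in range(200)] + [rng.gauss(10, 3) for _ in range(200)]
    c1, c2, w = em_two_gaussians(data)
    print(c1, c2, w)
    ```

    In practice a library implementation (e.g. a multivariate Gaussian mixture with EM) would be used rather than this hand-rolled loop, but the E-step/M-step structure is the same.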

  20. Evaluation of 11 terrestrial carbon–nitrogen cycle models against observations from two temperate Free-Air CO2 Enrichment studies

    Science.gov (United States)

    Zaehle, Sönke; Medlyn, Belinda E; De Kauwe, Martin G; Walker, Anthony P; Dietze, Michael C; Hickler, Thomas; Luo, Yiqi; Wang, Ying-Ping; El-Masri, Bassil; Thornton, Peter; Jain, Atul; Wang, Shusen; Warlind, David; Weng, Ensheng; Parton, William; Iversen, Colleen M; Gallet-Budynek, Anne; McCarthy, Heather; Finzi, Adrien; Hanson, Paul J; Prentice, I Colin; Oren, Ram; Norby, Richard J

    2014-01-01

    We analysed the responses of 11 ecosystem models to elevated atmospheric [CO2] (eCO2) at two temperate forest ecosystems (Duke and Oak Ridge National Laboratory (ORNL) Free-Air CO2 Enrichment (FACE) experiments) to test alternative representations of carbon (C)–nitrogen (N) cycle processes. We decomposed the model responses into component processes affecting the response to eCO2 and confronted these with observations from the FACE experiments. Most of the models reproduced the observed initial enhancement of net primary production (NPP) at both sites, but none was able to simulate both the sustained 10-yr enhancement at Duke and the declining response at ORNL: models generally showed signs of progressive N limitation as a result of lower than observed plant N uptake. Nonetheless, many models showed qualitative agreement with observed component processes. The results suggest that improved representation of above-ground–below-ground interactions and better constraints on plant stoichiometry are important for a predictive understanding of eCO2 effects. Improved accuracy of soil organic matter inventories is pivotal to reduce uncertainty in the observed C–N budgets. The two FACE experiments are insufficient to fully constrain terrestrial responses to eCO2, given the complexity of factors leading to the observed diverging trends, and the consequential inability of the models to explain these trends. Nevertheless, the ecosystem models were able to capture important features of the experiments, lending some support to their projections. PMID:24467623

  1. Reproducibility of neuroimaging analyses across operating systems.

    Science.gov (United States)

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
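
    The Dice coefficients reported above measure the overlap between two segmentations of the same brain produced on different operating systems. A minimal sketch of the metric, on toy binary label maps (the masks here are made-up data):

    ```python
    # Dice coefficient between two segmentations, as used above to compare
    # subcortical classifications across operating systems (toy masks).
    def dice(a, b):
        """Dice similarity of two binary masks given as equal-length 0/1 lists."""
        inter = sum(x and y for x, y in zip(a, b))
        total = sum(a) + sum(b)
        return 2.0 * inter / total if total else 1.0

    mask_os1 = [1, 1, 1, 0, 0, 1, 0, 1]   # toy segmentation from OS build A
    mask_os2 = [1, 1, 0, 0, 0, 1, 1, 1]   # toy segmentation from OS build B
    print(f"Dice = {dice(mask_os1, mask_os2):.2f}")  # -> Dice = 0.80
    ```

    A value of 1.0 means identical masks; the paper's observation that some cross-OS comparisons drop to 0.59 indicates substantial divergence from accumulated floating-point differences.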

  2. Learning Reproducibility with a Yearly Networking Contest

    KAUST Repository

    Canini, Marco

    2017-08-10

    Better reproducibility of networking research results is currently a major goal that the academic community is striving towards. This position paper makes the case that improving the extent and pervasiveness of reproducible research can be greatly fostered by organizing a yearly international contest. We argue that holding a contest undertaken by a plurality of students will have benefits that are two-fold. First, it will promote hands-on learning of skills that are helpful in producing artifacts at the replicable-research level. Second, it will advance the best practices regarding environments, testbeds, and tools that will aid the tasks of reproducibility evaluation committees by and large.

  3. The Economics of Reproducibility in Preclinical Research.

    Directory of Open Access Journals (Sweden)

    Leonard P Freedman

    2015-06-01

    Full Text Available Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B) per year spent on preclinical research that is not reproducible, in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.
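
    The US$28B figure is essentially the product of total annual US preclinical spend and the irreproducibility rate. A back-of-envelope check, where the total-spend number is an assumption of the right order for illustration (it is not stated in this abstract):

    ```python
    # Back-of-envelope reconstruction of the US$28B/year figure quoted above.
    # The total-spend value is an assumed illustrative input, not taken from
    # this abstract.
    US_PRECLINICAL_SPEND = 56.4e9   # assumed annual US preclinical research spend
    IRREPRODUCIBILITY_RATE = 0.50   # cumulative prevalence of irreproducible work

    waste = US_PRECLINICAL_SPEND * IRREPRODUCIBILITY_RATE
    print(f"about US${waste / 1e9:.0f}B per year")  # -> about US$28B per year
    ```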

  4. Thou Shalt Be Reproducible! A Technology Perspective

    Directory of Open Access Journals (Sweden)

    Patrick Mair

    2016-07-01

Full Text Available This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies.

  5. Multiband Lightcurve of Tabby’s Star: Observations and Modeling

    Science.gov (United States)

    Yin, Yao; Wilcox, Alejandro; Boyajian, Tabetha S.

    2018-06-01

    Since March 2017, The Thacher Observatory in California has been monitoring changes in brightness of KIC 8462852 (Tabby's Star), an F-type main sequence star whose irregular dimming behavior was first discovered by Tabetha Boyajian by examining Kepler data. We obtained over 20k observations over 135 nights in 2017 in 4 photometric bands, and detected 4 dip events greater than 1%. The relative magnitude of each dip compared across our 4 different photometric bands provides critical information regarding the nature of the obscuring material, and we present a preliminary analysis of these events. The Thacher Observatory is continuing its monitoring of Tabby’s Star in 2018.
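The diagnostic power of multiband photometry comes from comparing a dip's fractional depth across bands: wavelength-dependent depths point to extinction by small dust grains, while equal ("gray") depths point to larger occulting bodies. A sketch with invented in-dip fluxes (normalized to 1.0 outside the dip; these are not the measured values):

```python
# Sketch: fractional dip depths across photometric bands. Wavelength-
# dependent depths suggest extinction by small dust grains; equal
# ("gray") depths suggest larger occulting bodies. The in-dip fluxes
# below are invented, normalized to 1.0 outside the dip.
in_dip_flux = {"B": 0.978, "V": 0.983, "r": 0.986, "i": 0.989}
depths = {band: 1.0 - f for band, f in in_dip_flux.items()}
ratio_B_i = depths["B"] / depths["i"]
print(ratio_B_i)  # > 1: deeper in the blue, i.e. not gray extinction
```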

  6. Aircraft-based Observations and Modeling of Wintertime Submicron Aerosol Composition over the Northeastern U.S.

    Science.gov (United States)

    Shah, V.; Jaegle, L.; Schroder, J. C.; Campuzano-Jost, P.; Jimenez, J. L.; Guo, H.; Sullivan, A.; Weber, R. J.; Green, J. R.; Fiddler, M.; Bililign, S.; Lopez-Hilfiker, F.; Lee, B. H.; Thornton, J. A.

    2017-12-01

    Submicron aerosol particles (PM1) remain a major air pollution concern in the urban areas of northeastern U.S. While SO2 and NOx emission controls have been effective at reducing summertime PM1 concentrations, this has not been the case for wintertime sulfate and nitrate concentrations, suggesting a nonlinear response during winter. During winter, organic aerosol (OA) is also an important contributor to PM1 mass despite low biogenic emissions, suggesting the presence of important urban sources. We use aircraft-based observations collected during the Wintertime INvestigation of Transport, Emissions and Reactivity (WINTER) campaign (Feb-March 2015), together with the GEOS-Chem chemical transport model, to investigate the sources and chemical processes governing wintertime PM1 over the northeastern U.S. The mean observed concentration of PM1 between the surface and 1 km was 4 μg m-3, about 30% of which was composed of sulfate, 20% nitrate, 10% ammonium, and 40% OA. The model reproduces the observed sulfate, nitrate and ammonium concentrations after updates to HNO3 production and loss, SO2 oxidation, and NH3 emissions. We find that 65% of the sulfate formation occurs in the aqueous phase, and 55% of nitrate formation through N2O5 hydrolysis, highlighting the importance of multiphase and heterogeneous processes during winter. Aqueous-phase sulfate production and the gas-particle partitioning of nitrate and ammonium are affected by atmospheric acidity, which in turn depends on the concentration of these species. We examine these couplings with GEOS-Chem, and assess the response of wintertime PM1 concentrations to further emission reductions based on the U.S. EPA projections for the year 2023. For OA, we find that the standard GEOS-Chem simulation underestimates the observed concentrations, but a simple parameterization developed from previous summer field campaigns is able to reproduce the observations and the contribution of primary and secondary OA. We find that

  7. Metric versus observable operator representation, higher spin models

    Science.gov (United States)

    Fring, Andreas; Frith, Thomas

    2018-02-01

We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure of employing the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately, we argue that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equations for Hermitian and non-Hermitian spin 1/2, 1 and 3/2 models with time-independent and time-dependent metrics, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduced to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
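For orientation, the two relations discussed above are commonly written (with ħ = 1; the authors' exact conventions may differ slightly) as:

```latex
% Time-dependent Dyson relation, linking the Hermitian Hamiltonian h(t)
% to the non-Hermitian H(t) via the Dyson map \eta(t):
h(t) = \eta(t)\, H(t)\, \eta^{-1}(t) + i\,\partial_t \eta(t)\, \eta^{-1}(t)

% Time-dependent quasi-Hermiticity relation for the metric
% \rho(t) = \eta^\dagger(t)\,\eta(t):
H^\dagger(t)\,\rho(t) - \rho(t)\,H(t) = i\,\partial_t \rho(t)
```

In this scheme the decoupled equations for the components of the Dyson map typically terminate in an equation of Ermakov-Pinney form, $\ddot{\sigma} + \omega^{2}(t)\,\sigma = c/\sigma^{3}$.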

  8. Observation of the Meissner effect in a lattice Higgs model

    Science.gov (United States)

    Damgaard, Poul H.; Heller, Urs M.

    1988-01-01

    The lattice-regularized U(1) Higgs model in an external electromagnetic field is studied by Monte Carlo techniques. In the Coulomb phase, magnetic flux can flow through uniformly. The Higgs phase splits into a region where magnetic flux can penetrate only in the form of vortices and a region where the magnetic flux is completely expelled, the relativistic analog of the Meissner effect in superconductivity. Evidence is presented for symmetry restoration in strong external fields.

  9. Uniform relativistic universe models with pressure. Part 2. Observational tests

    International Nuclear Information System (INIS)

    Krempec, J.; Krygier, B.

    1977-01-01

The magnitude-redshift and angular diameter-redshift relations are discussed for uniform (homogeneous and isotropic) relativistic Universe models with pressure. The inclusion of pressure in the energy-momentum tensor gives larger values of the deceleration parameter q. An increase of the deceleration parameter leads to the brightening of objects as well as to slightly larger angular diameters. (author)

  10. Capturing Characteristics of Atmospheric Refractivity Using Observations and Modeling Approaches

    Science.gov (United States)

    2015-06-01

times during TW13 where a rawinsonde was suspended from a kite or a tethered balloon, depending on the wind speed, launched from a ridged hull...appended to the bottom of a balloon sounding or a COAMPS model profile in some fashion in order to provide a complete profile for use in propagation...rawinsonde” (“radar wind-sonde”). The instrument package is suspended from a buoyant balloon which is released from the surface and often reaches heights

  11. Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program

    Science.gov (United States)

    2017-05-09

Professor Chad Higgins, Oregon State University, Corvallis, Oregon (Host: University of Utah); Dr. Stefano Serafin, University of Vienna, Austria... Chris Hocul, ARL White Sands Missile Range). • NCAR 4DWX model output has been analyzed by the University of Virginia group, which has been... Higgins, and H., Parlange, M.B., 2013: Similarity scaling over a steep alpine slope, Boundary-Layer Meteor., 147(3), 401-419. Pu, Z., H. Zhang, and J. A

  12. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
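The MCMC-Bayesian inversion step can be sketched with a toy forward model standing in for CLM4. Everything below — the single drainage-like parameter, its uniform prior range, and the observational error level — is invented for illustration; the real study calibrates several runoff and vadose-zone parameters against flux and runoff observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for CLM4: runoff as a linear function
# of a single hypothetical drainage parameter q_drai (assumed).
def forward(q_drai, precip):
    return q_drai * precip

precip = rng.uniform(1.0, 5.0, 50)
obs = forward(0.3, precip) + rng.normal(0.0, 0.05, 50)  # synthetic "observations"

def log_post(q, sigma=0.05):
    if not (0.0 < q < 1.0):            # uniform prior on (0, 1)
        return -np.inf
    r = obs - forward(q, precip)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior.
samples, q = [], 0.5
lp = log_post(q)
for _ in range(5000):
    q_new = q + rng.normal(0.0, 0.02)
    lp_new = log_post(q_new)
    if np.log(rng.uniform()) < lp_new - lp:
        q, lp = q_new, lp_new
    samples.append(q)

post = np.array(samples[1000:])        # discard burn-in
print(post.mean(), post.std())         # posterior mean near the true 0.3
```

The posterior spread (here `post.std()`) is the analogue of the "predictive intervals" in the abstract: it narrows as more informative observations are assimilated.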

  13. Aligning observed and modelled behaviour based on workflow decomposition

    Science.gov (United States)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

When business processes are mostly supported by information systems, both the availability of event logs generated by these systems and the requirement for appropriate process models increase. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, this paper proposes a new process mining technique based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and conformance checking between event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on the state equation method of PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
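The state-equation method mentioned above rests on the Petri-net marking equation m' = m0 + C·x, where C is the incidence matrix and x is the firing-count (Parikh) vector of a trace. A minimal sketch on an invented two-transition net:

```python
import numpy as np

# Petri-net state equation used in alignment-based conformance checking:
#   m_final = m_init + C @ x
# where C is the incidence matrix and x counts transition firings
# (the Parikh vector). Toy net: p0 -(t0)-> p1 -(t1)-> p2.
C = np.array([[-1,  0],    # p0: consumed by t0
              [ 1, -1],    # p1: produced by t0, consumed by t1
              [ 0,  1]])   # p2: produced by t1
m_init  = np.array([1, 0, 0])
m_final = np.array([0, 0, 1])

# Firing-count vector for an observed trace <t0, t1>:
x = np.array([1, 1])
reachable_marking = m_init + C @ x
print(np.array_equal(reachable_marking, m_final))  # True
```

In alignment computations this linear relation gives a fast necessary condition for a trace to be replayable on the model, which is why it speeds up conformance checking on decomposed nets.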

  14. Committed warming inferred from observations and an energy balance model

    Science.gov (United States)

    Pincus, R.; Mauritsen, T.

    2017-12-01

Due to the lifetime of CO2 and the thermal inertia of the ocean, the Earth's climate is not equilibrated with anthropogenic forcing. As a result, even if fossil fuel emissions were to suddenly cease, some level of committed warming is expected due to past emissions. Here, we provide an observation-based quantification of this committed warming using the instrumental record of global-mean warming, recently improved estimates of Earth's energy imbalance, and estimates of radiative forcing from the fifth IPCC assessment report. Compared to pre-industrial levels, we find a committed warming of 1.5K [0.9-3.6, 5-95 percentile] at equilibrium, and of 1.3K [0.9-2.3] within this century. However, when assuming that ocean carbon uptake cancels remnant greenhouse gas-induced warming on centennial timescales, committed warming is reduced to 1.1K [0.7-1.8]. Conservatively, there is a 32% risk that committed warming already exceeds the 1.5K target set in Paris, and that this target will likely be crossed prior to 2053. Regular updates of these observationally constrained committed-warming estimates, though simplistic, can provide transparent guidance as uncertainty regarding transient climate sensitivity inevitably narrows and understanding of the limitations of the framework advances.
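The energy-balance arithmetic behind such estimates can be sketched as follows. The inputs below are illustrative central values of roughly AR5-era magnitude, not the paper's numbers:

```python
# Standard energy-balance estimate of committed warming:
#   lambda = (F - N) / T_obs   (climate feedback parameter)
#   T_eq   = F / lambda        (equilibrium response to today's forcing)
# All three inputs are assumed illustrative values.
F = 2.3      # anthropogenic radiative forcing, W m^-2 (assumed)
N = 0.7      # Earth's energy imbalance, W m^-2 (assumed)
T_obs = 1.0  # observed warming since pre-industrial, K (assumed)

lam = (F - N) / T_obs          # feedback parameter, W m^-2 K^-1
T_committed = F / lam          # equilibrium warming at constant forcing
print(round(T_committed, 2))   # K; exceeds T_obs because N > 0
```

The gap between `T_committed` and `T_obs` is exactly the "committed" part: warming already paid for by the energy imbalance but not yet realized.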

  15. Reproducibility of Quantitative Structural and Physiological MRI Measurements

    Science.gov (United States)

    2017-08-09

project.org/) and SPSS (IBM Corp., Armonk, NY) for data analysis. Mean and confidence intervals for each measure are found in Tables 1–7. To assess...visits, and was calculated using a two-way mixed model in SPSS. MCV and MRD values closer to 0 are considered to be the most reproducible, and ICC

  16. A Combined Observational and Modeling Approach to Study Modern Dust Transport from the Patagonia Desert to East Antarctica

    Science.gov (United States)

    Gasso, S.; Stein, A.; Marino, F.; Castellano, E.; Udisti, R.; Ceratto, J.

    2010-01-01

The understanding of present atmospheric transport processes from Southern Hemisphere (SH) landmasses to Antarctica can improve the interpretation of stratigraphic data in Antarctic ice cores. In addition, long-range transport can deliver key nutrients normally not available to marine ecosystems in the Southern Ocean and may trigger or enhance primary productivity. However, there is a dearth of observationally based studies of dust transport in the SH. This work aims to improve current understanding of dust transport in the SH by presenting a characterization of two dust events originating in the Patagonia desert (southern end of South America). The approach is based on a combined and complementary use of satellite retrievals (the MISR, MODIS, GLAS, POLDER and OMI detectors), transport model simulations (HYSPLIT), surface observations near the sources, and aerosol measurements in Antarctica (Neumayer and Concordia sites). Satellite imagery and visibility observations confirm dust emission from a stretch of dry lakes along the coast of the Tierra del Fuego (TdF) island (approx. 54 deg S) and from the shores of the Colihue Huapi lake in Central Patagonia (approx. 46 deg S) in February 2005. Model simulations initialized by these observations reproduce the timing of an observed increase in dust concentration at the Concordia Station and some of the observed increases in atmospheric aerosol absorption (here used as a dust proxy) at the Neumayer station. The TdF sources were the largest contributors of dust at both sites. The transit times from TdF to the Neumayer and Concordia sites are 6-7 and 9-10 days, respectively. Lidar observations and model outputs coincide in placing most of the dust cloud in the boundary layer and suggest significant deposition over the ocean immediately downwind. Boundary-layer dust was detected as far as 1800 km from the source and approx. 800 km north of the South Georgia Island over the central sub-Antarctic Atlantic Ocean.
Although the analysis suggests the

  17. The 2010 Saturn's Great White Spot: Observations and models

    Science.gov (United States)

    Sanchez-Lavega, A.

    2011-12-01

On December 5, 2010, a major storm erupted in Saturn's northern hemisphere at a planetographic latitude of 37.7 deg [1]. These phenomena are known as "Great White Spots" (GWS), and they have been observed once per Saturn year since the first case confidently reported in 1876. The last event occurred at Saturn's Equator in 1990 [2]. A GWS differs from similar smaller-scale storms in that it generates a planetary-scale disturbance that spreads zonally, spanning the whole latitude band. We report on the evolution and motions of the 2010 GWS and its associated disturbance during the months following the outbreak, based mainly on high-quality images obtained in the visual range submitted to the International Outer Planet Watch PVOL database [3], with the 1m telescope at Pic-du-Midi Observatory and the 2.2 m telescope at Calar Alto Observatory. The GWS "head source" was extinguished by June 2011, implying that it survived about 6 months. Since this source is assumed to be produced by water moist convection, a reservoir of water vapor must exist at a depth of 10 bar, together with a disturbance producing the necessary convergence to trigger the ascending motions. The high temporal sampling and coverage allowed us to study the dynamics of the GWS in detail, and the multi-wavelength observations provide information on its cloud-top structure. We present non-linear simulations, using the EPIC code, of the evolution of the potential vorticity generated by a continuous Gaussian heat source extending from 10 bar to about 1 bar, which compare extraordinarily well with the observed cloud field evolution. Acknowledgements: This work has been funded by Spanish MICIIN AYA2009-10701 with FEDER support and Grupos Gobierno Vasco IT-464-07. The presentation is made on behalf of the team listed in Reference [1]. [1] Sánchez-Lavega A., et al., Nature, 475, 71-74 (2011). [2] Sánchez-Lavega A., et al., Nature, 353, 397-401 (1991). [3] Hueso R., et al., Planet. Space Sci., 58, 1152-1159 (2010).

  18. An Exospheric Temperature Model Based On CHAMP Observations and TIEGCM Simulations

    Science.gov (United States)

    Ruan, Haibing; Lei, Jiuhou; Dou, Xiankang; Liu, Siqing; Aa, Ercha

    2018-02-01

In this work, thermospheric densities from the accelerometer measurements on board the CHAMP satellite during 2002-2009 and simulations from the National Center for Atmospheric Research Thermosphere Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM) are employed to develop an empirical exospheric temperature model (ETM). The two-dimensional basis functions of the ETM are first obtained from a principal component analysis of the TIEGCM simulations. Based on the exospheric temperatures derived from CHAMP thermospheric densities, a global distribution of exospheric temperatures is reconstructed. Each basis-function amplitude is then parameterized as a function of solar-geophysical and seasonal conditions. Thus, the ETM can be utilized to model the thermospheric temperature and mass density under a specified condition. Our results show that the averaged standard deviation of the ETM is generally less than 10%, compared with approximately 30% for the MSIS model. In addition, the ETM reproduces global thermospheric structures, including the equatorial thermosphere anomaly.

  19. Modelling the widths of fission observables in GEF

    Directory of Open Access Journals (Sweden)

    Schmidt K.-H.

    2013-03-01

Full Text Available The widths of the mass distributions of the different fission channels are traced back to the probability distributions of the corresponding quantum oscillators, which are coupled to the heat bath formed by the intrinsic degrees of freedom of the fissioning system under the influence of pairing correlations and shell effects. Following conclusions from the stochastic calculations of Adeev and Pashkevich, an early freezing due to dynamical effects is assumed. It is shown that the mass width of the fission channels in low-energy fission is strongly influenced by the zero-point motion of the corresponding quantum oscillator. The observed variation of the mass widths of the asymmetric fission channels with excitation energy is attributed to the energy-dependent properties of the heat bath and not to the population of excited states of the corresponding quantum oscillator.
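For context, the variance of a collective coordinate of a quantum oscillator (frequency $\omega$, stiffness $C$) coupled to a heat bath of temperature $T$ is usually written as

```latex
\sigma_q^{2} \;=\; \frac{\hbar\omega}{2C}\,\coth\!\left(\frac{\hbar\omega}{2T}\right)
```

which tends to the zero-point value $\hbar\omega/2C$ as $T \to 0$, consistent with the abstract's emphasis on zero-point motion at low excitation energy. (This is the textbook form; the exact expression used in GEF may differ in detail.)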

  20. Analysis and Modeling of Jovian Radio Emissions Observed by Galileo

    Science.gov (United States)

    Menietti, J. D.

    2003-01-01

    Our studies of Jovian radio emission have resulted in the publication of five papers in refereed journals, with three additional papers in progress. The topics of these papers include the study of narrow-band kilometric radio emission; the apparent control of radio emission by Callisto; quasi-periodic radio emission; hectometric attenuation lanes and their relationship to Io volcanic activity; and modeling of HOM attenuation lanes using ray tracing. A further study of the control of radio emission by Jovian satellites is currently in progress. Abstracts of each of these papers are contained in the Appendix. A list of the publication titles are also included.

  1. Conical Refraction: new observations and a dual cone model.

    Science.gov (United States)

    Sokolovskii, G S; Carnegie, D J; Kalkandjiev, T K; Rafailov, E U

    2013-05-06

We propose a paraxial dual-cone model of conical refraction involving the interference of two cones of light behind the exit face of the crystal. The supporting experiment is based on beam-selecting elements that break the conically refracted beam into two separate hollow cones which are symmetrical with one another. The shape of these cones of light is the product of a 'competition' between the divergence caused by the conical refraction and the convergence due to focusing by the lens. The developed mathematical description of conical refraction demonstrates excellent agreement with experiment.

  2. Archiving Reproducible Research with R and Dataverse

    DEFF Research Database (Denmark)

    Leeper, Thomas

    2014-01-01

    Reproducible research and data archiving are increasingly important issues in research involving statistical analyses of quantitative data. This article introduces the dvn package, which allows R users to publicly archive datasets, analysis files, codebooks, and associated metadata in Dataverse...

  3. Relevant principal factors affecting the reproducibility of insect primary culture.

    Science.gov (United States)

    Ogata, Norichika; Iwabuchi, Kikuo

    2017-06-01

    The primary culture of insect cells often suffers from problems with poor reproducibility in the quality of the final cell preparations. The cellular composition of the explants (cell number and cell types), surgical methods (surgical duration and surgical isolation), and physiological and genetic differences between donors may be critical factors affecting the reproducibility of culture. However, little is known about where biological variation (interindividual differences between donors) ends and technical variation (variance in replication of culture conditions) begins. In this study, we cultured larval fat bodies from the Japanese rhinoceros beetle, Allomyrina dichotoma, and evaluated, using linear mixed models, the effect of interindividual variation between donors on the reproducibility of the culture. We also performed transcriptome analysis of the hemocyte-like cells mainly seen in the cultures using RNA sequencing and ultrastructural analyses of hemocytes using a transmission electron microscope, revealing that the cultured cells have many characteristics of insect hemocytes.

  4. Augmenting an observation network to facilitate flow and transport model discrimination.

    Science.gov (United States)

    Improving understanding of subsurface conditions includes performance comparison for competing models, independently developed or obtained via model abstraction. The model comparison and discrimination can be improved if additional observations will be included. The objective of this work was to i...

  5. Atmospheric methane variability at the Peterhof station (Russia): ground-based observations and modeling

    Science.gov (United States)

    Makarova, Maria; Kirner, Oliver; Poberovskii, Anatoliy; Imhasin, Humud; Timofeyev, Yuriy; Virolainen, Yana; Makarov, Boris

    2014-05-01

The Peterhof station (59.88 N, 29.83 E, 20 m asl) for atmospheric monitoring was founded by Saint Petersburg State University, Russia. FTIR (Fourier transform IR) observations of the methane total column have been carried out with a Bruker IFS125 HR spectrometer since 2009. The study presents a joint analysis of experimental data and EMAC (ECHAM/MESSy Atmospheric Chemistry) model simulations for Peterhof over the period 2009-2012. It was shown that the CH4 total columns (TC) and column-averaged dry-air mole fractions (MF) obtained from observations are higher than the model results, by 1.3% and 0.3%, respectively. The correlation coefficients between the FTIR and EMAC data are statistically significant (at 95% confidence) and equal to 0.82 ± 0.08 and 0.4 ± 0.1 for the TC and MF of CH4, respectively. The high correlation for TCs shows that EMAC adequately reproduces CH4 variability due to meteorological processes in the atmosphere. On the other hand, the relatively low correlation coefficient for the CH4 MF probably indicates insufficiently precise knowledge of the sources and sinks of atmospheric methane. The amplitudes of the mean annual cycle of the CH4 TC for the experimental and model datasets (2009-2012) are 2.1% and 1.5%, respectively. The corresponding amplitudes for the MF are smaller: 1.1% for FTIR and 0.6% for EMAC. The difference between the FTIR and EMAC annual variations has a pronounced seasonality with a maximum in September-November. It could be attributed to the underestimation of natural methane sources in the emission inventory used for the EMAC simulations or to the relatively coarse horizontal grid of the model (2.8°x2.8°).
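The kind of comparison described here — mean relative difference and correlation between an observed and a modelled time series — can be sketched with synthetic data. The baseline column, seasonal amplitudes, and noise levels below are loose stand-ins for illustration, not the measured values:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(365)
season = np.sin(2.0 * np.pi * (t - 240) / 365)

# Synthetic daily CH4 total columns, molecules cm^-2 (assumed values):
# "FTIR" with a 2.1% seasonal amplitude, "model" ~1.4% lower with a
# smaller 1.5% amplitude, each with its own measurement/model noise.
ftir = 3.60e19 * (1 + 0.021 * season + 0.003 * rng.normal(size=365))
emac = 3.55e19 * (1 + 0.015 * season + 0.003 * rng.normal(size=365))

rel_diff = 100.0 * np.mean((ftir - emac) / ftir)  # mean bias, percent
r = np.corrcoef(ftir, emac)[0, 1]                 # correlation coefficient
print(round(rel_diff, 2), round(r, 2))
```

A shared seasonal cycle drives the correlation even when the model is biased low, mirroring the abstract's finding that a positive mean difference can coexist with a high TC correlation.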