WorldWideScience

Sample records for models reproduce observed

  1. Can Computational Sediment Transport Models Reproduce the Observed Variability of Channel Networks in Modern Deltas?

    Science.gov (United States)

    Nesvold, E.; Mukerji, T.

    2017-12-01

    River deltas display complex channel networks that can be characterized through the framework of graph theory, as shown by Tejedor et al. (2015). Deltaic patterns may also be useful in a Bayesian approach to uncertainty quantification of the subsurface, but this requires a prior distribution of the networks of ancient deltas. By considering subaerial deltas, one can at least obtain a snapshot in time of the channel network spectrum across deltas. In this study, the directed graph structure is semi-automatically extracted from satellite imagery using techniques from statistical processing and machine learning. Once the network is labeled with vertices and edges, spatial trends and width and sinuosity distributions can also be found easily. Since imagery is inherently 2D, computational sediment transport models can serve as a link between 2D network structure and 3D depositional elements; the numerous empirical rules and parameters built into such models make it necessary to validate the output with field data. For this purpose we have used a set of 110 modern deltas, with average water discharge ranging from 10 to 200,000 m3/s, as a benchmark for natural variability. Both graph-theoretic and more general distributions are established. A key question is whether it is possible to reproduce this deltaic network spectrum with computational models. Delft3D was used to solve the shallow water equations coupled with sediment transport. The experimental setup was relatively simple: incoming channelized flow onto a tilted plane, with varying wave and tidal energy, sediment types and grain size distributions, river discharge, and a few other input parameters. Each realization was run until a delta had fully developed: between 50 and 500 years (with a morphology acceleration factor). It is shown that input parameters should not be sampled independently from the natural ranges, since this may result in deltaic output that falls well outside the natural spectrum. Since we are
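    The abstract above describes extracting a directed channel-network graph from imagery and tabulating width and sinuosity distributions. The sketch below is purely illustrative (not the authors' code): it builds a small hypothetical delta network with networkx and computes a few descriptors of the kind such a benchmark might include; all node names and attribute values are invented.

```python
# Minimal sketch (illustrative only): summarizing a delta channel network as a
# directed graph and tabulating simple statistics, in the spirit of the
# graph-theoretic characterization described above. Toy values throughout.
import networkx as nx
import numpy as np

G = nx.DiGraph()
# Each edge is a channel reach: (upstream node, downstream node) with
# hypothetical attributes for mean width (m) and sinuosity (-).
edges = [
    ("apex", "a", 250.0, 1.05),
    ("a", "b", 140.0, 1.12),
    ("a", "c", 90.0, 1.30),
    ("b", "outlet1", 120.0, 1.08),
    ("c", "outlet2", 60.0, 1.45),
    ("b", "c", 40.0, 1.20),   # a connecting channel between branches
]
for u, v, width, sinuosity in edges:
    G.add_edge(u, v, width=width, sinuosity=sinuosity)

# Simple network descriptors of the kind one might compare across deltas.
n_bifurcations = sum(1 for n in G.nodes if G.out_degree(n) > 1)
n_confluences = sum(1 for n in G.nodes if G.in_degree(n) > 1)
widths = np.array([d["width"] for _, _, d in G.edges(data=True)])
sinuosities = np.array([d["sinuosity"] for _, _, d in G.edges(data=True)])

print("channels:", G.number_of_edges(),
      "bifurcations:", n_bifurcations,
      "confluences:", n_confluences)
print("width quartiles (m):", np.percentile(widths, [25, 50, 75]))
print("mean sinuosity:", sinuosities.mean().round(2))
```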

  2. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2018-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. the tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclonic clouds and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models of phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by CFMIP2 models. The effects of CVS modes on relative cloud radiative forcing (RSCRF/RLCRF) (RSCRF being calculated at the surface while RLCRF at the top of the atmosphere) are studied using a principal component regression method. Results show that CFMIP2 models tend to overestimate (underestimate or simulate with the opposite sign) the RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) per unit global mean OPC compared to observations. These RE biases may be attributed to two factors: one is the underestimation (overestimation) of low/middle clouds (high clouds) (equivalently, stronger (weaker) REs per unit low/middle (high) cloud) in the simulated global mean cloud profiles; the other is eigenvector biases in the CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid to the improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which
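    As a rough illustration of the PC analysis step described above, the following sketch computes leading vertical-structure modes and their (standardized) PCs from a synthetic grid-point-by-level matrix of cloud-fraction profiles; the data and dimensions are placeholders, not GOCCP values.

```python
# Minimal sketch (illustrative only): leading modes of cloud vertical
# structure via principal component analysis of a (grid point x level)
# matrix of cloud-fraction profiles. Synthetic data stand in for GOCCP.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_levels = 500, 40          # hypothetical grid points and height levels
profiles = rng.random((n_points, n_levels))

anom = profiles - profiles.mean(axis=0)          # remove the mean profile
# SVD of the anomaly matrix gives the modes (eigenvectors) and PCs directly.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained_var = s**2 / np.sum(s**2)

modes = vt[:3]                    # first three vertical-structure modes
pcs = u[:, :3] * s[:3]            # corresponding (unstandardized) PCs
std_pcs = pcs / pcs.std(axis=0)   # standardized PCs, as compared in the study

print("variance explained by first 3 modes:", explained_var[:3].round(3))
print("mode shapes:", modes.shape, "PC matrix:", pcs.shape)
```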

  3. Dynamic contrast-enhanced computed tomography in metastatic nasopharyngeal carcinoma: reproducibility analysis and observer variability of the distributed parameter model.

    Science.gov (United States)

    Ng, Quan-Sing; Thng, Choon Hua; Lim, Wan Teck; Hartono, Septian; Thian, Yee Liang; Lee, Puor Sherng; Tan, Daniel Shao-Weng; Tan, Eng Huat; Koh, Tong San

    2012-01-01

    To determine the reproducibility and observer variability of distributed parameter analysis of dynamic contrast-enhanced computed tomography (DCE-CT) data in metastatic nasopharyngeal carcinoma, and to compare two approaches to region-of-interest (ROI) analysis. Following ethical approval and informed consent, 17 patients with nasopharyngeal carcinoma underwent paired DCE-CT examinations on a 64-detector scanner, measuring tumor blood flow (F, mL/100 mL/min), permeability surface area product (PS, mL/100 mL/min), fractional intravascular blood volume (v1, mL/100 mL), and fractional extracellular-extravascular volume (v2, mL/100 mL). Tumor parameters were derived by fitting (i) the ROI-averaged concentration-time curve, and (ii) the median value of parameters from voxel-level concentration-time curves. Measurement reproducibility and inter- and intraobserver variability were estimated using Bland-Altman statistics. Mean F, PS, v1, and v2 were 44.9, 20.4, 7.1, and 34.1 for ROI analysis, and 49.0, 18.7, 6.7, and 34.0 for voxel analysis, respectively. Within-subject coefficients of variation were 38.8%, 49.5%, 54.2%, and 35.9% for ROI analysis, and 15.0%, 35.1%, 33.0%, and 21.0% for voxel analysis, respectively. Repeatability coefficients were 48.2, 28.0, 10.7, and 33.9 for ROI analysis, and 20.3, 18.2, 6.1 and 19.8 for voxel analysis, respectively. Intra- and interobserver correlation coefficients ranged from 0.94 to 0.97 and 0.90 to 0.95 for voxel analysis, and 0.73 to 0.87 and 0.72 to 0.94 for ROI analysis, respectively. Measurements of F and v2 appear more reproducible than PS and v1. Voxel-level analysis improves both reproducibility and observer variability compared with ROI-averaged analysis and may retain information about tumor spatial heterogeneity.
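    The following sketch illustrates Bland-Altman style reproducibility statistics of the kind reported above (bias, limits of agreement, within-subject coefficient of variation, repeatability coefficient) for paired measurements of a single parameter; the numbers are synthetic placeholders, not study data.

```python
# Minimal sketch (not the study's code): Bland-Altman style reproducibility
# statistics for paired DCE-CT measurements of one parameter (e.g. F).
import numpy as np

scan1 = np.array([44.0, 52.3, 38.9, 61.2, 47.5, 55.0])   # first examination
scan2 = np.array([41.5, 49.8, 43.0, 58.7, 50.1, 52.2])   # repeat examination

diff = scan2 - scan1
mean_pair = (scan1 + scan2) / 2.0

bias = diff.mean()
limits = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Within-subject SD from paired replicates, then the derived summary metrics:
# within-subject coefficient of variation and repeatability coefficient
# (2.77 x within-subject SD).
within_sd = np.sqrt(np.mean(diff**2) / 2.0)
wcv = 100.0 * within_sd / mean_pair.mean()
repeatability = 2.77 * within_sd

print(f"bias={bias:.2f}, 95% limits of agreement={limits[0]:.2f}..{limits[1]:.2f}")
print(f"within-subject CV={wcv:.1f}%, repeatability coefficient={repeatability:.2f}")
```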

  4. The NANOGrav Observing Program: Automation and Reproducibility

    Science.gov (United States)

    Brazier, Adam; Cordes, James; Demorest, Paul; Dolch, Timothy; Ferdman, Robert; Garver-Daniels, Nathaniel; Hawkins, Steven; Lam, Michael Timothy; Lazio, T. Joseph W.

    2018-01-01

    The NANOGrav Observing Program is a decades-long search for gravitational waves using pulsar timing which relies, for its sensitivity, on large data sets from observations of many pulsars. These are constructed through an intensive, long-term observing campaign. The nature of the program requires automation in the transfer and archiving of the large volume of raw telescope data, the calibration of those data, and making these resulting data products—required for diagnostic and data exploration purposes—available to NANOGrav members. Reproducibility of results is a key goal in this project, and essential to its success; it requires treating the software itself as a data product of the research, while ensuring easy access by, and collaboration between, members of NANOGrav, the International Pulsar Timing Array consortium (of which NANOGrav is a key member), as well as the wider astronomy community and the public.

  5. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    Science.gov (United States)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and for industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data from various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between model results and measurements validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
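    A minimal sketch of the distortion-matrix idea described above, assuming a standard least-squares formulation: a real-valued 2x2 matrix is estimated that maps modelled horizontal electric-field time series onto measured ones. The time series and matrix values are synthetic, not observatory data.

```python
# Minimal sketch (assumed least-squares formulation, not the authors' code):
# estimate a real-valued, time-independent 2x2 distortion matrix D with
# E_meas ≈ D @ E_model at one site, from storm-time series.
import numpy as np

rng = np.random.default_rng(1)
n = 2000                                  # time samples during a storm
e_model = rng.normal(size=(n, 2))         # modelled (Ex, Ey) at the site
d_true = np.array([[1.3, -0.4],
                   [0.2,  0.8]])          # "true" galvanic distortion (toy)
e_meas = e_model @ d_true.T + 0.05 * rng.normal(size=(n, 2))

# Least-squares estimate: solve e_model @ D^T = e_meas.
d_est_t, *_ = np.linalg.lstsq(e_model, e_meas, rcond=None)
d_est = d_est_t.T

print("estimated distortion matrix:\n", d_est.round(3))
```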

  6. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators, (2) shared computational resources, (3) declarative model descriptors, ontologies and standardized annotations, and (4) model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  7. Dysplastic naevus: histological criteria and their inter-observer reproducibility.

    Science.gov (United States)

    Hastrup, N; Clemmensen, O J; Spaun, E; Søndergaard, K

    1994-06-01

    Forty melanocytic lesions were examined in a pilot study, which was followed by a final series of 100 consecutive melanocytic lesions, in order to evaluate the inter-observer reproducibility of the histological criteria proposed for the dysplastic naevus. The specimens were examined in a blind fashion by four observers. Analysis by kappa statistics showed poor reproducibility of nuclear features, while reproducibility of architectural features was acceptable, improving in the final series. Consequently, we cannot apply the combined criteria of cytological and architectural features with any confidence in the diagnosis of dysplastic naevus, and, until further studies have documented that architectural criteria alone will suffice in the diagnosis of dysplastic naevus, we, as pathologists, shall avoid this term.
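    For illustration, the sketch below computes a chance-corrected (Cohen's) kappa for two observers scoring a binary histological criterion; the ratings are invented, and a four-observer study such as the one above would report pairwise kappas or a multi-rater statistic.

```python
# Minimal sketch (illustrative only): chance-corrected agreement between two
# observers scoring a binary histological criterion, in the spirit of the
# kappa analysis described above. Synthetic ratings.
import numpy as np

obs_a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])   # observer A (1 = feature present)
obs_b = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])   # observer B

p_observed = np.mean(obs_a == obs_b)
# Expected agreement by chance, from each observer's marginal frequencies.
p_chance = (np.mean(obs_a == 1) * np.mean(obs_b == 1)
            + np.mean(obs_a == 0) * np.mean(obs_b == 0))
kappa = (p_observed - p_chance) / (1.0 - p_chance)

print(f"observed agreement={p_observed:.2f}, kappa={kappa:.2f}")
```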

  8. Modeling reproducibility of porescale multiphase flow experiments

    Science.gov (United States)

    Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.

    2017-12-01

    Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, with both constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  9. A reproducible canine model of esophageal varices.

    Science.gov (United States)

    Jensen, D M; Machicado, G A; Tapia, J I; Kauffman, G; Franco, P; Beilin, D

    1983-03-01

    One of the most promising nonoperative techniques for control of variceal hemorrhage is sclerosis via the fiberoptic endoscope. Many questions remain, however, about sclerosing agents, guidelines for effective use, and limitations of endoscopic techniques. A reproducible large animal model of esophageal varices would facilitate the critical evaluation of techniques for variceal hemostasis or sclerosis. Our purpose was to develop a large animal model of esophageal varices. Studies in pigs and dogs are described which led to the development of a reproducible canine model of esophageal varices. For the final model, mongrel dogs had laparotomy, side-to-side portacaval shunt, inferior vena cava ligation, placement of an ameroid constrictor around the portal vein, and liver biopsy. The mean (+/- SE) pre- and postshunt portal pressure increased significantly from 12 +/- 0.4 to 23 +/- 1 cm saline. Weekly endoscopies were performed to grade the varix size. Two-thirds of animals developed medium or large sized esophageal varices after the first operation. Three to six weeks later, a second laparotomy with complete ligation of the portal vein and liver biopsy were performed in animals with varices (one-third of the animals). All dogs developed esophageal varices and abdominal wall collateral veins of variable size 3-6 wk after the first operation. After the second operation, the varices became larger. Shunting of blood through esophageal varices via splenic and gastric veins was demonstrated by angiography. Sequential liver biopsies were normal. There was no morbidity or mortality. Ascites, encephalopathy, or spontaneous variceal bleeding did not occur. We have documented the lack of size change and the persistence of medium to large esophageal varices and abdominal collateral veins in all animals followed for more than 6 mo. Variceal bleeding could be induced by venipuncture for testing endoscopic hemostatic and sclerosis methods. We suggest other potential uses of this

  10. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  11. Can global chemistry-climate models reproduce air quality extremes?

    Science.gov (United States)

    Schnell, J.; Prather, M. J.; Holmes, C. D.

    2013-12-01

    We identify and characterize extreme ozone pollution episodes over the USA and EU through a novel analysis of ten years (2000-2010) of surface ozone measurements. An optimal interpolation scheme is developed to create grid-cell averaged values of surface ozone that can be compared with gridded model simulations. In addition, it allows a comparison of two non-coincident observational networks in the EU. The scheme incorporates techniques borrowed from inverse distance weighting and Kriging. It uses all representative observational site data while still recognizing the heterogeneity of surface ozone. Individual, grid-cell-level events are identified as exceedances of a historical percentile (the 10 worst days in a year, i.e. the 97.3rd percentile). A clustering algorithm is then used to construct the ozone episodes from the individual events. We then test the skill of the high-resolution (100 km), two-year (2005-2006) hindcast from the UCI global chemistry transport model in reproducing the events/episodes identified in the observations, using the same identification criteria. Although the UCI CTM has substantial biases in surface ozone, we find that it has considerable skill in reproducing both individual grid-cell-level extreme events and their connectedness in space and time, with an overall skill of 24% (32%) for the US (EU). The grid-cell-level extreme ozone events in both the observations and the UCI CTM are found to occur mostly (~75%) in coherent, multi-day, connected episodes covering areas greater than 1000 x 1000 square km. In addition, the UCI CTM has greater skill in reproducing these larger episodes. We conclude that even at relatively coarse resolution, global chemistry-climate models can be used to project major synoptic pollution episodes driven by large-scale climate and chemistry changes even with their known biases.
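    The sketch below illustrates, with invented numbers, two ingredients described above: an inverse-distance-weighted estimate of a grid-cell ozone value from scattered sites, and flagging grid-cell extreme events as exceedances of a high historical percentile. It is not the authors' interpolation scheme, which also draws on Kriging.

```python
# Minimal sketch (illustrative only): IDW estimate of a grid-cell average from
# scattered sites, plus percentile-based extreme-event flagging. Toy data.
import numpy as np

rng = np.random.default_rng(2)
site_xy = rng.uniform(0, 100, size=(30, 2))       # site coordinates (km)
site_o3 = rng.uniform(20, 90, size=(30,))         # daily max ozone (ppb)

def idw(target_xy, xy, values, power=2.0):
    """Inverse-distance-weighted estimate at one target location."""
    d = np.linalg.norm(xy - target_xy, axis=1)
    if np.any(d < 1e-6):                 # a site sits on the target point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

cell_center = np.array([50.0, 50.0])
cell_value = idw(cell_center, site_xy, site_o3)

# With a multi-year daily series per grid cell, an "event" is a day above the
# historical percentile corresponding to roughly the 10 worst days per year.
daily_series = rng.uniform(20, 90, size=(10 * 365,))    # synthetic 10-yr record
threshold = np.percentile(daily_series, 97.3)
event_days = np.nonzero(daily_series > threshold)[0]

print(f"cell estimate={cell_value:.1f} ppb, threshold={threshold:.1f} ppb, "
      f"events={event_days.size}")
```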

  12. From alginate impressions to digital virtual models: accuracy and reproducibility.

    Science.gov (United States)

    Dalstra, Michel; Melsen, Birte

    2009-03-01

    To compare the accuracy and reproducibility of measurements performed on digital virtual models with those taken on plaster casts from models poured immediately after the impression was taken, the 'gold standard', and from plaster models poured following a 3-5 day shipping procedure of the alginate impression. Direct comparison of two measuring techniques. The study was conducted at the Department of Orthodontics, School of Dentistry, University of Aarhus, Denmark in 2006/2007. Twelve randomly selected orthodontic graduate students with informed consent. Three sets of alginate impressions were taken from the participants within 1 hour. Plaster models were poured immediately from two of the sets, while the third set was kept in transit in the mail for 3-5 days. Upon return a plaster model was poured as well. Finally digital models were made from the plaster models. A number of measurements were performed on the plaster casts with a digital calliper and on the corresponding digital models using the virtual measuring tool of the accompanying software. Afterwards these measurements were compared statistically. No statistical differences were found between the three sets of plaster models. The intra- and inter-observer variability are smaller for the measurements performed on the digital models. Sending alginate impressions by mail does not affect the quality and accuracy of plaster casts poured from them afterwards. Virtual measurements performed on digital models display less variability than the corresponding measurements performed with a calliper on the actual models.

  13. SNe Ia: Can Chandrasekhar mass explosions reproduce the observed zoo?

    Energy Technology Data Exchange (ETDEWEB)

    Baron, E. [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 400 W. Brooks, Rm 100, Norman, OK 73072-2061 (United States); Hamburger Sternwarte, Gojenbergsweg 112, 21029 Hamburg (Germany)

    2014-08-15

    The question of the nature of the progenitor of Type Ia supernovae (SNe Ia) is important both for our detailed understanding of stellar evolution and for their use as cosmological probes of the dark energy. Many of the basic features of SNe Ia can be understood directly from the nuclear physics, a fact which Gerry would have appreciated. We present an overview of the current observational and theoretical situation and show that it is not incompatible with most SNe Ia being the result of thermonuclear explosions near the Chandrasekhar mass.

  14. Can a coupled meteorology–chemistry model reproduce the ...

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21-year simulated results with observation-derived records from 1990 to 2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-Terra and MODIS-Aqua as well as long-term historical records from 11 AERONET sites were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both the top of atmosphere (TOA) and surface as well as surface SWR data derived from seven SURFRAD sites were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period of 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and northern Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA a

  15. The substorm cycle as reproduced by global MHD models

    Science.gov (United States)

    Gordeev, E.; Sergeev, V.; Tsyganenko, N.; Kuznetsova, M.; Rastätter, L.; Raeder, J.; Tóth, G.; Lyon, J.; Merkin, V.; Wiltberger, M.

    2017-01-01

    Recently, Gordeev et al. (2015) suggested a method to test global MHD models against statistical empirical data. They showed that four community-available global MHD models supported by the Community Coordinated Modeling Center (CCMC) produce a reasonable agreement with reality for those key parameters (the magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale equilibria in the outer magnetosphere. Based on the same set of simulation runs, here we investigate how the models reproduce the global loading-unloading cycle. We found that, in terms of global magnetic flux transport, the three examined CCMC models display systematically different responses to an idealized 2 h north then 2 h south interplanetary magnetic field (IMF) Bz variation. The LFM model shows a depressed return convection and high loading rate during the growth phase as well as enhanced return convection and high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their observed empirical values during isolated substorms. Two other models exhibit drastically different behavior. In the BATS-R-US model the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the Open GGCM a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. We also demonstrate a potential technical problem in the publicly available simulations, which is related to postprocessing interpolation and could affect the accuracy of magnetic field tracing and of other related procedures.

  16. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  17. Reproducibility and Transparency in Ocean-Climate Modeling

    Science.gov (United States)

    Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.

    2015-12-01

    Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and variety of supercomputing technology. We have made progress on this problem in the context of a large project - the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets we provide the tools for generating these from the sources, rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g. reading from or branching on uninitialized memory. To expose these we routinely run system tests, using a memory debugger, multiple compilers and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model we provide (version controlled) digital notebooks that illustrate and record analysis of output. This has the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
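    One concrete piece of the practice described above is checksumming experiment output so that a version-controlled manifest records when solutions change. The sketch below shows that idea only; it is not the MOM6/SIS2 tooling, and the directory name is hypothetical.

```python
# Minimal sketch (assumption: the checksumming idea only, not the project's
# actual tooling): record SHA-256 digests of experiment outputs so a
# version-controlled manifest documents when solutions change.
import hashlib
import json
from pathlib import Path

def checksum(path, chunk=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def write_manifest(output_dir, manifest_path="checksums.json"):
    """Checksum every file under output_dir and write a JSON manifest."""
    entries = {str(p): checksum(p)
               for p in sorted(Path(output_dir).rglob("*")) if p.is_file()}
    Path(manifest_path).write_text(json.dumps(entries, indent=2))
    return entries

# Usage (hypothetical directory): commit checksums.json alongside the
# experiment configuration; a changed digest flags a changed solution.
# write_manifest("experiment_01/output")
```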

  18. Paleomagnetic analysis of curved thrust belts reproduced by physical models

    Science.gov (United States)

    Costa, Elisabetta; Speranza, Fabio

    2003-12-01

    This paper presents a new methodology for studying the evolution of curved mountain belts by means of paleomagnetic analyses performed on analogue models. Eleven models were designed aimed at reproducing various tectonic settings in thin-skinned tectonics. Our models analyze in particular those features reported in the literature as possible causes for peculiar rotational patterns in the outermost as well as in the more internal fronts. In all the models the sedimentary cover was reproduced by frictional low-cohesion materials (sand and glass micro-beads), which detached either on frictional or on viscous layers. These latter were reproduced in the models by silicone. The sand forming the models has been previously mixed with magnetite-dominated powder. Before deformation, the models were magnetized by means of two permanent magnets generating within each model a quasi-linear magnetic field of intensity variable between 20 and 100 mT. After deformation, the models were cut into closely spaced vertical sections and sampled by means of 1×1-cm Plexiglas cylinders at several locations along curved fronts. Care was taken to collect paleomagnetic samples only within virtually undeformed thrust sheets, avoiding zones affected by pervasive shear. Afterwards, the natural remanent magnetization of these samples was measured, and alternating field demagnetization was used to isolate the principal components. The characteristic components of magnetization isolated were used to estimate the vertical-axis rotations occurring during model deformation. We find that indenters pushing into deforming belts from behind form non-rotational curved outer fronts. The more internal fronts show oroclinal-type rotations of a smaller magnitude than that expected for a perfect orocline. Lateral symmetrical obstacles in the foreland colliding with forward propagating belts produce non-rotational outer curved fronts as well, whereas in between and inside the obstacles a perfect orocline forms

  19. A reproducible oral microcosm biofilm model for testing dental materials.

    Science.gov (United States)

    Rudney, J D; Chen, R; Lenton, P; Li, J; Li, Y; Jones, R S; Reilly, C; Fok, A S; Aparicio, C

    2012-12-01

    Most studies of biofilm effects on dental materials use single-species biofilms, or consortia. Microcosm biofilms grown directly from saliva or plaque are much more diverse, but difficult to characterize. We used the Human Oral Microbial Identification Microarray (HOMIM) to validate a reproducible oral microcosm model. Saliva and dental plaque were collected from adults and children. Hydroxyapatite and dental composite discs were inoculated with either saliva or plaque, and microcosm biofilms were grown in a CDC biofilm reactor. In later experiments, the reactor was pulsed with sucrose. DNA from inoculums and microcosms was analysed by HOMIM for 272 species. Microcosms included about 60% of species from the original inoculum. Biofilms grown on hydroxyapatite and composites were extremely similar. Sucrose pulsing decreased diversity and pH, but increased the abundance of Streptococcus and Veillonella. Biofilms from the same donor, grown at different times, clustered together. This model produced reproducible microcosm biofilms that were representative of the oral microbiota. Sucrose induced changes associated with dental caries. This is the first use of HOMIM to validate an oral microcosm model that can be used to study the effects of complex biofilms on dental materials. © 2012 The Society for Applied Microbiology.

  20. Feasibility and observer reproducibility of speckle tracking echocardiography in congenital heart disease patients.

    Science.gov (United States)

    Mokhles, Palwasha; van den Bosch, Annemien E; Vletter-McGhie, Jackie S; Van Domburg, Ron T; Ruys, Titia P E; Kauer, Floris; Geleijnse, Marcel L; Roos-Hesselink, Jolien W

    2013-09-01

    The twisting motion of the heart has an important role in the function of the left ventricle. Speckle tracking echocardiography is able to quantify left ventricular (LV) rotation and twist. So far this new technique has not been used in congenital heart disease patients. The aim of our study was to investigate the feasibility and the intra- and inter-observer reproducibility of LV rotation parameters in adult patients with congenital heart disease. The study population consisted of 66 consecutive patients seen in the outpatient clinic (67% male, mean age 31 ± 7.7 years, NYHA class 1 ± 0.3) with a variety of congenital heart disease. First, feasibility was assessed in all patients. Intra- and inter-observer reproducibility was assessed for the patients in which speckle tracking echocardiography was feasible. Adequate image quality, for performing speckle echocardiography, was found in 80% of patients. The bias for the intra-observer reproducibility of the LV twist was 0.0°, with 95% limits of agreement of -2.5° and 2.5° and for interobserver reproducibility the bias was 0.0°, with 95% limits of agreement of -3.0° and 3.0°. Intra- and inter-observer measurements showed a strong correlation (0.86 and 0.79, respectively). Also a good repeatability was seen. The mean time to complete full analysis per subject for the first and second measurement was 9 and 5 minutes, respectively. Speckle tracking echocardiography is feasible in 80% of adult patients with congenital heart disease and shows excellent intra- and inter-observer reproducibility. © 2013, Wiley Periodicals, Inc.

  21. Evaluation of Oceanic Surface Observation for Reproducing the Upper Ocean Structure in ECHAM5/MPI-OM

    Science.gov (United States)

    Luo, Hao; Zheng, Fei; Zhu, Jiang

    2017-12-01

    Better constraints of initial conditions from data assimilation are necessary for climate simulations and predictions, and they are particularly important for the ocean due to its long climate memory; as such, ocean data assimilation (ODA) is regarded as an effective tool for seasonal to decadal predictions. In this work, an ODA system is established for a coupled climate model (ECHAM5/MPI-OM), which can assimilate all available oceanic observations using an ensemble optimal interpolation approach. To validate and isolate the performance of different surface observations in reproducing air-sea climate variations in the model, a set of observing system simulation experiments (OSSEs) was performed over 150 model years. Generally, assimilating sea surface temperature, sea surface salinity, and sea surface height (SSH) can reasonably reproduce the climate variability and vertical structure of the upper ocean, and assimilating SSH achieves the best results compared to the true states. For the El Niño-Southern Oscillation (ENSO), assimilating different surface observations captures true aspects of ENSO well, but assimilating SSH can further enhance the accuracy of ENSO-related feedback processes in the coupled model, leading to a more reasonable ENSO evolution and air-sea interaction over the tropical Pacific. For ocean heat content, there are still limitations in reproducing the long time-scale variability in the North Atlantic, even if SSH has been taken into consideration. These results demonstrate the effectiveness of assimilating surface observations in capturing the interannual signal and, to some extent, the decadal signal but still highlight the necessity of assimilating profile data to reproduce specific decadal variability.
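    As a rough illustration of the assimilation approach named above, the sketch below performs a single ensemble optimal interpolation (EnOI) update, in which a static ensemble of model anomalies supplies the background covariance used to spread a few surface observations into the state; all sizes and values are synthetic, and this is not the ECHAM5/MPI-OM system.

```python
# Minimal sketch (illustrative only): one ensemble optimal interpolation (EnOI)
# analysis step with a static ensemble. Toy dimensions and data.
import numpy as np

rng = np.random.default_rng(3)
n_state, n_ens, n_obs = 200, 30, 10

x_b = rng.normal(size=n_state)                    # background state
ens_anom = rng.normal(size=(n_state, n_ens))      # static ensemble anomalies
ens_anom -= ens_anom.mean(axis=1, keepdims=True)

H = np.zeros((n_obs, n_state))                    # observe 10 "surface" points
H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
y = H @ x_b + rng.normal(scale=0.3, size=n_obs)   # synthetic observations
R = 0.3**2 * np.eye(n_obs)                        # observation error covariance

alpha = 0.5                                       # static-ensemble scaling factor
B = alpha * (ens_anom @ ens_anom.T) / (n_ens - 1) # background covariance
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # Kalman gain
x_a = x_b + K @ (y - H @ x_b)                     # analysis state

print("mean absolute increment:", np.abs(x_a - x_b).mean().round(3))
```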

  22. A reproducible brain tumour model established from human glioblastoma biopsies

    Directory of Open Access Journals (Sweden)

    Li Xingang

    2009-12-01

    Full Text Available Abstract Background Establishing clinically relevant animal models of glioblastoma multiforme (GBM) remains a challenge, and many commonly used cell line-based models do not recapitulate the invasive growth patterns of patient GBMs. Previously, we have reported the formation of highly invasive tumour xenografts in nude rats from human GBMs. However, implementing tumour models based on primary tissue requires that these models can be sufficiently standardised with consistently high take rates. Methods In this work, we collected data on growth kinetics from a material of 29 biopsies xenografted in nude rats, and characterised this model with an emphasis on neuropathological and radiological features. Results The tumour take rate for xenografted GBM biopsies was 96% and remained close to 100% at subsequent passages in vivo, whereas only one of four lower grade tumours engrafted. Average time from transplantation to the onset of symptoms was 125 days ± 11.5 SEM. Histologically, the primary xenografts recapitulated the invasive features of the parent tumours while endothelial cell proliferations and necrosis were mostly absent. After 4-5 in vivo passages, the tumours became more vascular with necrotic areas, but also appeared more circumscribed. MRI typically revealed changes related to tumour growth, several months prior to the onset of symptoms. Conclusions In vivo passaging of patient GBM biopsies produced tumours representative of the patient tumours, with high take rates and a reproducible disease course. The model provides combinations of angiogenic and invasive phenotypes and represents a good alternative to in vitro propagated cell lines for dissecting mechanisms of brain tumour progression.

  23. Development of a Consistent and Reproducible Porcine Scald Burn Model

    Science.gov (United States)

    Kempf, Margit; Kimble, Roy; Cuttle, Leila

    2016-01-01

    There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns for a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations; 50°C for 1, 2, 5 and 10 minutes and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1, 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10 minute duration burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5 second duration burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153

  24. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    osteosarcoma model was shown to be feasible: the take rate was high, surgical mortality was negligible and the procedure was simple to perform and easily reproduced. It may be a useful tool in the investigation of antiangiogenic and anticancer therapeutics. Ultrasound was found to be a highly accurate tool for tumor diagnosis, localization and measurement and may be recommended for monitoring tumor growth in this model.

  25. Reproducible Earth observation analytics: challenges, ideas, and a study case on containerized land use change detection

    Science.gov (United States)

    Appel, Marius; Nüst, Daniel; Pebesma, Edzer

    2017-04-01

    Geoscientific analyses of Earth observation data typically involve a long path from data acquisition to scientific results and conclusions. Before starting the actual processing, scenes must be downloaded from the providers' platforms and the computing infrastructure needs to be prepared. The computing environment often requires specialized software, which in turn might have many dependencies. The software is often highly customized and provided without commercial support, which leads to rather ad-hoc systems and irreproducible results. To let other scientists reproduce the analyses, the full workspace including data, code, the computing environment, and documentation must be bundled and shared. Technologies such as virtualization or containerization allow for the creation of identical computing environments with relatively little effort. Challenges, however, arise when the volume of the data is too large, when computations are done in a cluster environment, or when complex software components such as databases are used. We discuss these challenges for the example of scalable land use change detection on Landsat imagery. We present a reproducible implementation that runs R and the scalable data management and analytical system SciDB within a Docker container. Thanks to an explicit container recipe (the Dockerfile), this enables all-in-one reproduction, including the installation of software components, the ingestion of the data, and the execution of the analysis in a well-defined environment. We furthermore discuss how the implementation could be transferred to multi-container environments in order to support reproducibility on large cluster environments.

  26. An inter-observer Ki67 reproducibility study applying two different assessment methods

    DEFF Research Database (Denmark)

    Laenkholm, Anne-Vibeke; Grabau, Dorthe; Møller Talman, Maj-Lis

    2018-01-01

    INTRODUCTION: In 2011, the St. Gallen Consensus Conference introduced the use of pathology to define the intrinsic breast cancer subtypes by application of the immunohistochemical (IHC) surrogate markers ER, PR, HER2 and Ki67, with a specified Ki67 cutoff (>14%) for the luminal B-like definition. Reports concerning impaired reproducibility of Ki67 estimation and threshold inconsistency led to the initiation of this quality assurance study (2013-2015). The aim of the study was to investigate inter-observer variation for Ki67 estimation in malignant breast tumors by two different quantification methods. ... 0.84 (95% CI: 0.80-0.87) by the count method. CONCLUSION: Although the study in general showed moderate to good inter-observer agreement according to both ICC and Light's kappa, major discrepancies were still identified, especially in the mid-range of observations. Consequently, for now Ki67

  27. How Modeling Standards, Software, and Initiatives Support Reproducibility in Systems Biology and Systems Medicine.

    Science.gov (United States)

    Waltemath, Dagmar; Wolkenhauer, Olaf

    2016-10-01

    Only reproducible results are of significance to science. The lack of suitable standards and appropriate support of standards in software tools has led to numerous publications with irreproducible results. Our objectives are to identify the key challenges of reproducible research and to highlight existing solutions. In this paper, we summarize problems concerning reproducibility in systems biology and systems medicine. We focus on initiatives, standards, and software tools that aim to improve the reproducibility of simulation studies. The long-term success of systems biology and systems medicine depends on trustworthy models and simulations. This requires openness to ensure reusability and transparency to enable reproducibility of results in these fields.

  28. NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-08-01

    Full Text Available Software maintenance is an essential step in the software development life cycle. Nowadays, software companies spend approximately 45% of the total cost on maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes it is difficult to reproduce the reported bug with the information present in a bug report, and thus this bug is marked with the resolution non-reproducible (NR). When NR bugs are reconsidered, a few of them might get fixed (NR-to-fix), leaving the others with the same resolution (NR). To analyse the behaviour of developers towards NR-to-fix and NR bugs, a sentiment analysis of NR bug report textual contents has been conducted. The sentiment analysis of bug reports shows that NR bugs' sentiments incline towards more negativity than those of reproducible bugs. Also, there is a noticeable opinion drift found in the sentiments of NR-to-fix bug reports. Observations drawn from this analysis were an inspiration to develop a model that can judge the fixability of NR bugs. Thus a framework, NRFixer, which predicts the probability of NR bug fixation, is proposed. NRFixer was evaluated in two dimensions. The first dimension considers meta-fields of bug reports (model-1) and the other dimension additionally incorporates the sentiments of developers (model-2) for prediction. Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest). The bug reports of the Firefox and Eclipse projects were used to test NRFixer. In the Firefox and Eclipse projects, the J48 and naive Bayes classifiers achieve the best prediction accuracy, respectively. It was observed that the inclusion of sentiments in the prediction model shows a rise in prediction accuracy ranging from 2 to 5% for the various classifiers.
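    For illustration, the sketch below trains one of the classifier types mentioned above (naive Bayes) to predict whether a non-reproducible bug is later fixed from a few meta-fields plus a sentiment score; the feature set and data are hypothetical placeholders, not NRFixer itself.

```python
# Minimal sketch (illustrative only, not NRFixer): a naive Bayes classifier on
# bug-report meta-fields plus a sentiment score. All features/data are toys.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n = 400
# Columns: number of comments, number of CC'ed developers, severity (ordinal),
# mean sentiment score of the report text (negative .. positive).
X = np.column_stack([
    rng.poisson(5, n),
    rng.poisson(3, n),
    rng.integers(1, 6, n),
    rng.normal(-0.2, 0.5, n),
])
y = rng.integers(0, 2, n)        # 1 = NR bug later fixed, 0 = stays NR

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```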

  29. NIHAO - XIV. Reproducing the observed diversity of dwarf galaxy rotation curve shapes in ΛCDM

    Science.gov (United States)

    Santos-Santos, Isabel M.; Di Cintio, Arianna; Brook, Chris B.; Macciò, Andrea; Dutton, Aaron; Domínguez-Tenreiro, Rosa

    2018-02-01

    The significant diversity of rotation curve (RC) shapes in dwarf galaxies has recently emerged as a challenge to Λ cold dark matter (ΛCDM): in dark matter (DM) only simulations, DM haloes have a universal cuspy density profile that results in self-similar RC shapes. We compare RC shapes of simulated galaxies from the NIHAO (Numerical Investigation of a Hundred Astrophysical Objects) project with observed galaxies from the homogeneous SPARC data set. The DM haloes of the NIHAO galaxies can expand to form cores, with the degree of expansion depending on their stellar-to-halo mass ratio. By means of the V2kpc-VRlast relation (where VRlast is the outermost measured rotation velocity), we show that both the average trend and the scatter in RC shapes of NIHAO galaxies are in reasonable agreement with SPARC: this represents a significant improvement compared to simulations that do not result in DM core formation, suggesting that halo expansion is a key process in matching the diversity of dwarf galaxy RCs. Note that NIHAO galaxies can reproduce even the extremely slowly rising RCs of IC 2574 and UGC 5750. Revealingly, the range where observed galaxies show the highest diversity corresponds to the range where core formation is most efficient in NIHAO simulations, 50 < VRlast/km s-1 < 100. A few observed galaxies in this range cannot be matched by any NIHAO RC nor by simulations that predict a universal halo profile. Interestingly, the majority of these are starbursts or emission-line galaxies, with steep RCs and small effective radii. Such galaxies represent an interesting observational target providing new clues to the process/viability of cusp-core transformation, the relationship between starburst and inner potential well, and the nature of DM.

  30. Evaluation of fecal mRNA reproducibility via a marginal transformed mixture modeling approach

    Directory of Open Access Journals (Sweden)

    Davidson Laurie A

    2010-01-01

    Full Text Available Abstract Background Developing and evaluating new technology that enables researchers to recover gene-expression levels of colonic cells from fecal samples could be key to a non-invasive screening tool for early detection of colon cancer. The current study, to the best of our knowledge, is the first to investigate and report the reproducibility of fecal microarray data. Using the intraclass correlation coefficient (ICC) as a measure of reproducibility and the preliminary analysis of fecal and mucosal data, we assessed the reliability of mixture density estimation and the reproducibility of fecal microarray data. Using Monte Carlo-based methods, we explored whether ICC values should be modeled as a beta-mixture or transformed first and fitted with a normal-mixture. We used outcomes from bootstrapped goodness-of-fit tests to determine which approach is less sensitive toward potential violation of distributional assumptions. Results The graphical examination of both the distributions of ICC and probit-transformed ICC (PT-ICC) clearly shows that there are two components in the distributions. For ICC measurements, which are between 0 and 1, the practice in the literature has been to assume that the data points are from a beta-mixture distribution. Nevertheless, in our study we show that the use of a normal-mixture modeling approach on PT-ICC could provide superior performance. Conclusions When modeling ICC values of gene expression levels, using a mixture of normals in the probit-transformed (PT) scale is less sensitive toward model mis-specification than using a mixture of betas. We show that a biased conclusion could be made if we follow the traditional approach and model the two sets of ICC values using the mixture of betas directly. The problematic estimation arises from the sensitivity of beta-mixtures toward model mis-specification, particularly when there are observations in the neighborhood of the boundary points, 0 or 1. Since beta-mixture modeling
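    The sketch below illustrates the approach favored above: probit-transform ICC values lying in (0, 1) and fit a two-component normal mixture to the transformed values. The ICCs are synthetic, and the fitting choices are illustrative rather than those of the paper.

```python
# Minimal sketch (illustrative only): probit-transform ICCs and fit a
# two-component normal mixture on the transformed scale. Synthetic ICCs.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Synthetic ICCs drawn from two beta components, mimicking a bimodal pattern.
icc = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 200)])
icc = np.clip(icc, 1e-6, 1 - 1e-6)       # keep strictly inside (0, 1)

pt_icc = norm.ppf(icc)                    # probit transform
gm = GaussianMixture(n_components=2, random_state=0).fit(pt_icc.reshape(-1, 1))

print("component weights:", gm.weights_.round(2))
print("component means (probit scale):", gm.means_.ravel().round(2))
```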

  31. Reproducing the optical properties of fine desert dust aerosols using ensembles of simple model particles

    International Nuclear Information System (INIS)

    Kahnert, Michael

    2004-01-01

    Single scattering optical properties are calculated for a proxy of fine dust aerosols at a wavelength of 0.55 μm. Spherical and spheroidal model particles are employed to fit the aerosol optical properties and to retrieve information about the physical parameters characterising the aerosols. It is found that spherical particles are capable of reproducing the scalar optical properties and the forward peak of the phase function of the dust aerosols. The effective size parameter of the aerosol ensemble is retrieved with high accuracy by using spherical model particles. Significant improvements are achieved by using spheroidal model particles. The aerosol phase function and the other diagonal elements of the Stokes scattering matrix can be fitted with high accuracy, whereas the off-diagonal elements are poorly reproduced. More elongated prolate and more flattened oblate spheroids contribute disproportionately strongly to the optimised shape distribution of the model particles and appear to be particularly useful for achieving a good fit of the scattering matrix. However, the clear discrepancies between the shape distribution of the aerosols and the shape distribution of the spheroidal model particles suggest that the possibilities of extracting shape information from optical observations are rather limited

  32. Reproducing Sea-Ice Deformation Distributions With Viscous-Plastic Sea-Ice Models

    Science.gov (United States)

    Bouchat, A.; Tremblay, B.

    2016-02-01

    High resolution sea-ice dynamic models offer the potential to discriminate between sea-ice rheologies based on their ability to reproduce the satellite-derived deformation fields. Recent studies have shown that sea-ice viscous-plastic (VP) models do not reproduce the observed statistical properties of the strain rate distributions of the RADARSAT Geophysical Processor System (RGPS) deformation fields [1][2]. We use the elliptical VP rheology and we compute the probability density functions (PDFs) for simulated strain rate invariants (divergence and maximum shear stress) and compare against the deformations obtained with the 3-day gridded products from RGPS. We find that the large shear deformations are well reproduced by the elliptical VP model and the deformations do not follow a Gaussian distribution as reported in Girard et al. [1][2]. On the other hand, we do find an overestimation of the shear in the range of mid-magnitude deformations in all of our VP simulations tested with different spatial resolutions and numerical parameters. Runs with no internal stress (free-drift) or with constant viscosity coefficients (Newtonian fluid) also show this overestimation. We trace back this discrepancy to the elliptical yield curve aspect ratio (e = 2) having too little shear strength, hence not resisting enough the inherent shear in the wind forcing associated with synoptic weather systems. Experiments where we simply increase the shear resistance of the ice by modifying the ellipse ratio confirm the need for a rheology with an increased shear strength. [1] Girard et al. (2009), Evaluation of high-resolution sea ice models [...], Journal of Geophysical Research, 114. [2] Girard et al. (2011), A new modeling framework for sea-ice mechanics [...], Annals of Glaciology, 57, 123-132
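    As an illustration of the comparison described above, the sketch below computes strain-rate invariants (divergence and maximum shear) from a gridded velocity field by finite differences and forms a probability density on logarithmic bins; the velocity field and grid spacing are synthetic, not model or RGPS data.

```python
# Minimal sketch (illustrative only): strain-rate invariants from a gridded
# ice-velocity field and their PDF on log-spaced bins. Synthetic data.
import numpy as np

rng = np.random.default_rng(6)
dx = 10e3                                   # grid spacing (m), illustrative
u = rng.normal(0, 0.1, size=(100, 100))     # synthetic ice velocity (m/s)
v = rng.normal(0, 0.1, size=(100, 100))

du_dy, du_dx = np.gradient(u, dx)           # axis 0 treated as y, axis 1 as x
dv_dy, dv_dx = np.gradient(v, dx)

divergence = du_dx + dv_dy
shear = np.sqrt((du_dx - dv_dy)**2 + (du_dy + dv_dx)**2)   # maximum shear rate

# PDF of total deformation on log-spaced bins (converted to per day).
total = np.sqrt(divergence**2 + shear**2) * 86400.0
bins = np.logspace(-4, 1, 40)
pdf, edges = np.histogram(total.ravel(), bins=bins, density=True)

print("peak of the deformation PDF at ~",
      round(edges[np.argmax(pdf)], 4), "per day")
```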

  13. [Reproducibility and repeatability of the determination of occlusal plane on digital dental models].

    Science.gov (United States)

    Qin, Yi-fei; Xu, Tian-min

    2015-06-18

    To assess the repeatability (intraobserver comparison) and reproducibility (interobserver comparison) of two different methods for establishing the occlusal plane on digital dental models. With Angle's classification as a stratification factor, 48 cases were randomly extracted from 806 cases that had complete clinical data and had received orthodontic treatment from July 2004 to August 2008 in the Department of Orthodontics, Peking University School and Hospital of Stomatology. Post-treatment plaster casts of the 48 cases were scanned by a Roland LPX-1200 3D laser scanner to generate geometry data as research subjects. In a locally developed software package, one observer localized prescriptive landmarks on each digital model 5 times, at intervals of at least one week, to establish a group of functional occlusal planes and a group of anatomic occlusal planes, while 6 observers established two other groups of functional and anatomic occlusal planes independently. Standard deviations of the dihedral angles of each group on each model were calculated and compared between the related groups. The models with the five largest standard deviations in each group were studied to explore possible factors that might influence the identification of the landmarks on the digital models. A significant difference in intraobserver variability was not detected between the functional occlusal plane and the anatomic occlusal plane (P>0.1), while a significant difference in interobserver variability was detected (P<0.05): the interobserver variability of the functional occlusal plane was 0.2° smaller than that of the anatomic occlusal plane. The functional occlusal plane's intraobserver and interobserver variability did not differ significantly (P>0.1), while the anatomic occlusal plane's intraobserver variability was significantly smaller than its interobserver variability (P<0.05). Both occlusal planes are suitable as a reference plane with equal repeatability. When several observers measure a large number of digital models, the functional occlusal plane is more reproducible than the anatomic occlusal plane.

  14. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  15. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    Science.gov (United States)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.
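    As an illustration of the analytically solvable first-order reduction mentioned above, a hedged sketch with hypothetical rate constants; the functional form, parameter values and the way inhibition is represented are assumptions, not the paper's fitted model.

```python
# Sketch: first-order accumulation of peroxidation products,
# P(t) = P_max * (1 - exp(-k_eff * t)), with an inhibitor scaling down k.
# k, P_max and the inhibition factor are hypothetical values.
import numpy as np

def peroxidation(t, k=0.05, p_max=1.0, inhibition=0.0):
    k_eff = k * (1.0 - inhibition)                # assumed effect of the antioxidant
    return p_max * (1.0 - np.exp(-k_eff * t))

t = np.linspace(0.0, 120.0, 25)                   # minutes
control = peroxidation(t)
with_antioxidant = peroxidation(t, inhibition=0.7)
print("final product level, control vs inhibited:", control[-1], with_antioxidant[-1])
```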

  16. Using a 1-D model to reproduce diurnal SST signals

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2014-01-01

    of measurement. A generally preferred approach to bridge the gap between in situ and remotely obtained measurements is through modelling of the upper ocean temperature. This ESA supported study focuses on the implementation of the 1 dimensional General Ocean Turbulence Model (GOTM), in order to resolve...... an additional parametrisation for the total outgoing long-wave radiation and a 9-band parametrisation for the light extinction. New parametrisations for the stability functions, associated with vertical mixing, have been included. GOTM is tested using experimental data from the Woods Hole Oceanographic...

  17. Reproducible Infection Model for Clostridium perfringens in Broiler Chickens

    DEFF Research Database (Denmark)

    Pedersen, Karl; Friis-Holm, Lotte Bjerrum; Heuer, Ole Eske

    2008-01-01

    Experiments were carried out to establish an infection and disease model for Clostridium perfringens in broiler chickens. Previous experiments had failed to induce disease and only a transient colonization with challenge strains had been obtained. In the present study, two series of experiments...

  18. Intra- and inter-observer reproducibility and generalizability of first trimester uterine artery pulsatility index by transabdominal and transvaginal ultrasound

    NARCIS (Netherlands)

    Marchi, Laura; Zwertbroek, Eva; Snelder, Judith; Kloosterman, Maaike; Bilardo, Caterina Maddalena

    2016-01-01

    Objectives The primary aim of the study was to assess intra-observer and inter-observer reproducibility and generalizability (general reliability) of first trimester Doppler measurements of uterine arteries (UtA) performed both transabdominally (TA) and transvaginally (TV). Secondary aims were to

  19. Cross-species analysis of gene expression in non-model mammals: reproducibility of hybridization on high density oligonucleotide microarrays

    Directory of Open Access Journals (Sweden)

    Pita-Thomas Wolfgang

    2007-04-01

    Full Text Available Abstract Background Gene expression profiles of non-model mammals may provide valuable data for biomedical and evolutionary studies. However, due to lack of sequence information of other species, DNA microarrays are currently restricted to humans and a few model species. This limitation may be overcome by using arrays developed for a given species to analyse gene expression in a related one, an approach known as "cross-species analysis". In spite of its potential usefulness, the accuracy and reproducibility of the gene expression measures obtained in this way are still open to doubt. The present study examines whether or not hybridization values from cross-species analyses are as reproducible as those from same-species analyses when using Affymetrix oligonucleotide microarrays. Results The reproducibility of the probe data obtained by hybridizing deer, Old-World primate, and human RNA samples to the Affymetrix human GeneChip® U133 Plus 2.0 was compared. The results show that cross-species hybridization affected neither the distribution of the hybridization reproducibility among different categories, nor the reproducibility values of the individual probes. Our analyses also show that 0.5% of the probes analysed in the U133 Plus 2.0 GeneChip are significantly associated with un-reproducible hybridizations. Such probes (called in the text un-reproducible probe sequences) do not increase in number in cross-species analyses. Conclusion Our study demonstrates that cross-species analyses do not significantly affect hybridization reproducibility of GeneChips, at least within the range of the mammal species analysed here. The differences in reproducibility between same-species and cross-species analyses observed in previous studies were probably caused by the analytical methods used to calculate the gene expression measures. Together with previous observations on the accuracy of GeneChips for cross-species analysis, our analyses demonstrate that cross

  20. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    Science.gov (United States)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models for both the empirical fitting of these curves and the prediction of transport using upscaling models based on best-fitted estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can, however, be questioned due to the difficulty of linking model parameters to the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implication of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (αPL > 1.5), whereas the power law model with cutoff (PLCO) yields a roughly constant exponent αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need for any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to
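    A hedged sketch of fitting PL and PLCO models to a synthetic late-time tail; the functional forms follow the abstract's description, but the data, noise level, starting values and parameter values are hypothetical.

```python
# Sketch: fit a pure power law (PL) and a power law with exponential cutoff (PLCO)
# to a synthetic late-time BTC tail. Data and starting values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def pl(t, a, alpha):
    return a * t ** (-alpha)

def plco(t, a, alpha, lam):
    return a * t ** (-alpha) * np.exp(-lam * t)

rng = np.random.default_rng(2)
t = np.logspace(0, 3, 60)                                     # late times, arbitrary units
c_obs = plco(t, 1.0, 1.0, 2e-3) * rng.lognormal(0.0, 0.1, t.size)

(pl_a, pl_alpha), _ = curve_fit(pl, t, c_obs, p0=[1.0, 1.5])
(co_a, co_alpha, co_lam), _ = curve_fit(plco, t, c_obs, p0=[1.0, 1.0, 1e-3])
print("PL slope:", pl_alpha, " PLCO slope:", co_alpha, " cutoff rate:", co_lam)
```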

  1. The substorm loading-unloading cycle as reproduced by community-available global MHD magnetospheric models

    Science.gov (United States)

    Gordeev, Evgeny; Sergeev, Victor; Tsyganenko, Nikolay; Kuznetsova, Maria; Rastaetter, Lutz; Raeder, Joachim; Toth, Gabor; Lyon, John; Merkin, Vyacheslav; Wiltberger, Michael

    2017-04-01

    In this study we investigate how well the three community-available global MHD models, supported by the Community Coordinated Modeling Center (CCMC NASA), reproduce the global magnetospheric dynamics, including the loading-unloading substorm cycle. We found that in terms of global magnetic flux transport the CCMC models display systematically different responses to an idealized 2-hour north then 2-hour south IMF Bz variation. The LFM model shows a depressed return convection in the tail plasma sheet and a high rate of magnetic flux loading into the lobes during the growth phase, as well as enhanced return convection and a high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their observed empirical values during isolated substorms. The BATS-R-US and Open GGCM models exhibit drastically different behavior. In the BATS-R-US model the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the Open GGCM a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. Our study shows that different CCMC models under the same solar wind conditions (north to south IMF variation) produce essentially different solutions in terms of global magnetospheric convection.

  2. COMBINE archive and OMEX format : One file to share all information to reproduce a modeling project

    NARCIS (Netherlands)

    Bergmann, Frank T.; Olivier, Brett G.; Soiland-Reyes, Stian

    2014-01-01

    Background: With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models,

  3. Why are models unable to reproduce multi-decadal trends in lower tropospheric baseline ozone levels?

    Science.gov (United States)

    Hu, L.; Liu, J.; Mickley, L. J.; Strahan, S. E.; Steenrod, S.

    2017-12-01

    Assessments of tropospheric ozone radiative forcing rely on accurate model simulations. Parrish et al. (2014) found that three chemistry-climate models (CCMs) overestimate present-day O3 mixing ratios and capture only 50% of the observed O3 increase over the last five decades at 12 baseline sites in the northern mid-latitudes, indicating large uncertainties in our understanding of the ozone trends and their implications for radiative forcing. Here we present comparisons of outputs from two chemical transport models (CTMs) - GEOS-Chem and the Global Modeling Initiative model - with O3 observations from the same sites and from the global ozonesonde network. Both CTMs are driven by reanalysis meteorological data (MERRA or MERRA2) and thus are expected to differ in atmospheric transport processes relative to the freely running CCMs. We test whether recent model developments leading to more active ozone chemistry affect the computed ozone sensitivity to perturbations in emissions. Preliminary results suggest these CTMs can reproduce present-day ozone levels but fail to capture the multi-decadal trend since 1980. Both models yield widespread overpredictions of free tropospheric ozone in the 1980s. Sensitivity studies in GEOS-Chem suggest that the model estimate of natural background ozone is too high. We discuss factors that contribute to the variability and trends of tropospheric ozone over the last 30 years, with a focus on intermodel differences in spatial resolution and in the representation of stratospheric chemistry, stratosphere-troposphere exchange, halogen chemistry, and biogenic VOC emissions and chemistry. We also discuss uncertainty in the historical emission inventories used in models, and how these affect the simulated ozone trends.

  4. Voxel-level reproducibility assessment of modality independent elastography in a pre-clinical murine model

    Science.gov (United States)

    Flint, Katelyn M.; Weis, Jared A.; Yankeelov, Thomas E.; Miga, Michael I.

    2015-03-01

    Changes in tissue mechanical properties, measured non-invasively by elastography methods, have been shown to be an important diagnostic tool, particularly for cancer. Tissue elasticity information, tracked over the course of therapy, may be an important prognostic indicator of tumor response to treatment. While many elastography techniques exist, this work reports on the use of a novel form of elastography that uses image texture to reconstruct elastic property distributions in tissue (i.e., a modality independent elastography (MIE) method) within the context of a pre-clinical breast cancer system [1,2]. The elasticity results have previously shown good correlation with independent mechanical testing [1]. Furthermore, MIE has been successfully utilized to localize and characterize lesions in both phantom experiments and simulation experiments with clinical data [2,3]. However, the reproducibility of this method has not been characterized in previous work. The goal of this study is to evaluate voxel-level reproducibility of MIE in a pre-clinical model of breast cancer. Bland-Altman analysis of co-registered repeat MIE scans in this preliminary study showed a reproducibility index of 24.7% (scaled to a percent of maximum stiffness) at the voxel level. As opposed to many reports in the magnetic resonance elastography (MRE) literature that speak to reproducibility measures of the bulk organ, these results establish MIE reproducibility at the voxel level; i.e., the reproducibility of locally-defined mechanical property measurements throughout the tumor volume.
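    A minimal sketch of a voxel-level Bland-Altman computation of the kind described above, assuming two co-registered repeat reconstructions; the stiffness arrays are synthetic and the scaling of the index to a percent of maximum stiffness is an assumption about the convention used.

```python
# Sketch: voxel-level Bland-Altman analysis of two co-registered repeat scans.
# The elasticity arrays are synthetic; the % scaling of the index is an assumption.
import numpy as np

def bland_altman_repro(scan1, scan2):
    diff = scan1 - scan2
    bias = diff.mean()
    loa_half_width = 1.96 * diff.std(ddof=1)      # half-width of the limits of agreement
    max_stiffness = max(scan1.max(), scan2.max())
    return bias, loa_half_width, 100.0 * loa_half_width / max_stiffness

rng = np.random.default_rng(3)
scan1 = rng.uniform(1.0, 10.0, 5000)              # hypothetical voxel elasticity values
scan2 = scan1 + rng.normal(0.0, 0.6, 5000)        # repeat scan with measurement noise
bias, loa, repro_pct = bland_altman_repro(scan1, scan2)
print(f"bias={bias:.3f}, limits of agreement=±{loa:.3f}, index={repro_pct:.1f}%")
```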

  5. Assessment of a climate model to reproduce rainfall variability and extremes over Southern Africa

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2010-01-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa and is not intended to discuss possible future changes in climate as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with

  6. Inter-observer reproducibility in reporting on renal drainage in children with hydronephrosis: a large collaborative study

    Energy Technology Data Exchange (ETDEWEB)

    Tondeur, Marianne; Piepsz, Amy [CHU Saint-Pierre, Departement des Radio-Isotopes, Brussels (Belgium); De Palma, Diego [Ospedale di Circolo, Nuclear Medicine, Varese (Italy); Roca, Isabel [Vall d' Hebron Hospital, Nuclear Medicine, Barcelona (Spain); Ham, Hamphrey [University Hospital, Department Nuclear Medicine, Ghent (Belgium)

    2008-03-15

    The goal of this study was to evaluate the inter-observer reproducibility in reporting on renal drainage obtained during 99mTc MAG3 renography in children, when already processed data are offered to the observers. Because web site facilities were used for communication, 57 observers from five continents participated in the study. Twenty-three renograms, including furosemide stimulation and post-erect, post-micturition views, covering various patterns of drainage, were submitted to the observers. Images, curves and quantitative parameters were provided. Good or almost good drainage, partial drainage and poor or no drainage were the three possible responses for each kidney. An important bias was observed among the observers, some of them more systematically reporting the drainage as being good, while others had a general tendency to consider the drainage as poor. This resulted in rather poor inter-observer reproducibility, as for more than half of the kidneys, less than 80% of the observers agreed on one of the three responses. Analysis of the individual cases identified some obvious causes of discrepancy: the absence of a clear limit between partial and good or almost good drainage, whether the effect of micturition and a change of the patient's position was taken into account, the underestimation of drainage in the case of a flat renographic curve, and the difficulties of interpretation in the case of a small, poorly functioning kidney. There is an urgent need for better standardisation in estimating the quality of drainage. (orig.)

  7. A stable and reproducible human blood-brain barrier model derived from hematopoietic stem cells.

    Directory of Open Access Journals (Sweden)

    Romeo Cecchelli

    Full Text Available The human blood brain barrier (BBB) is a selective barrier formed by human brain endothelial cells (hBECs), which is important to ensure adequate neuronal function and protect the central nervous system (CNS) from disease. The development of human in vitro BBB models is thus of utmost importance for drug discovery programs related to CNS diseases. Here, we describe a method to generate a human BBB model using cord blood-derived hematopoietic stem cells. The cells were initially differentiated into ECs followed by the induction of BBB properties by co-culture with pericytes. The brain-like endothelial cells (BLECs) express tight junctions and transporters typically observed in brain endothelium and maintain expression of most in vivo BBB properties for at least 20 days. The model is very reproducible since it can be generated from stem cells isolated from different donors and in different laboratories, and could be used to predict CNS distribution of compounds in human. Finally, we provide evidence that the Wnt/β-catenin signaling pathway mediates in part the BBB inductive properties of pericytes.

  8. How well do CMIP5 Climate Models Reproduce the Hydrologic Cycle of the Colorado River Basin?

    Science.gov (United States)

    Gautam, J.; Mascaro, G.

    2017-12-01

    The Colorado River, which is the primary source of water for nearly 40 million people in the arid Southwestern states of the United States, has been experiencing an extended drought since 2000, which has led to a significant reduction in water supply. As the water demands increase, one of the major challenges for water management in the region has been the quantification of uncertainties associated with streamflow predictions in the Colorado River Basin (CRB) under potential changes of future climate. Hence, testing the reliability of model predictions in the CRB is critical in addressing this challenge. In this study, we evaluated the performances of 17 General Circulation Models (GCMs) from the Coupled Model Intercomparison Project Phase Five (CMIP5) and 4 Regional Climate Models (RCMs) in reproducing the statistical properties of the hydrologic cycle in the CRB. We evaluated the water balance components at four nested sub-basins along with the inter-annual and intra-annual changes of precipitation (P), evaporation (E), runoff (R) and temperature (T) from 1979 to 2005. Most of the models captured the net water balance fairly well in the most-upstream basin but simulated a weak hydrological cycle in the evaporation channel at the downstream locations. The simulated monthly variability of P had different patterns, with correlation coefficients ranging from -0.6 to 0.8 depending on the sub-basin and with the models from the same parent institution clustering together. Apart from the most-upstream sub-basin, where the models were mainly characterized by a negative seasonal bias in SON (of up to -50%), most of them had a positive bias in all seasons (of up to +260%) in the other three sub-basins. The models, however, captured the monthly variability of T well at all sites, with small inter-model variabilities and a relatively similar range of bias (-7 °C to +5 °C) across all seasons. The Mann-Kendall test was applied to the annual P and T time-series, where the majority of the models
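    For reference, a from-scratch Mann-Kendall trend test (without tie correction) of the kind applied to the annual series above; the example temperature series is hypothetical.

```python
# Sketch: Mann-Kendall trend test (no tie correction) on an annual series.
import numpy as np
from scipy import stats

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S, ignoring ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))      # two-sided p-value
    return s, z, p

annual_temp = [12.1, 12.3, 12.0, 12.4, 12.6, 12.5, 12.8, 12.7, 13.0, 13.1]  # hypothetical
print(mann_kendall(annual_temp))
```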

  9. Geomagnetic Observations and Models

    CERN Document Server

    Mandea, Mioara

    2011-01-01

    This volume provides comprehensive and authoritative coverage of all the main areas linked to geomagnetic field observation, from instrumentation to methodology, on ground or near-Earth. Efforts are also focused on a 21st century e-Science approach, not only to open access to all geomagnetic data but also to data preservation, data discovery, data rescue, and capacity building. Finally, modeling magnetic fields with different internal origins, with their variation in space and time, is an attempt to draw together into one place the traditional work in producing models such as the IGRF or describing the magn

  10. Investigation of dimensional variation in parts manufactured by fused deposition modeling using Gauge Repeatability and Reproducibility

    Science.gov (United States)

    Mohamed, Omar Ahmed; Hasan Masood, Syed; Lal Bhowmik, Jahar

    2018-02-01

    In the additive manufacturing (AM) market, the question is raised by industry and AM users of how reproducible and repeatable the fused deposition modeling (FDM) process is in providing good dimensional accuracy. This paper aims to investigate and evaluate the repeatability and reproducibility of the FDM process through a systematic approach to answer this frequently asked question. A case study based on the statistical gage repeatability and reproducibility (gage R&R) technique is proposed to investigate the dimensional variations in the printed parts of the FDM process. After running the simulation and analysing the data, the FDM process capability is evaluated, which helps industry better understand the performance of FDM technology.
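    A sketch of an ANOVA-based crossed gage R&R computation such as the study describes; the measurement array, its dimensions and the nominal size are hypothetical, and the exact study design may differ.

```python
# Sketch: ANOVA-based crossed gage R&R for a (parts x operators x replicates) array
# of measured dimensions. Data, array shape and nominal size are hypothetical.
import numpy as np

def gage_rr(data):
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    oper_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_oper = p * r * ((oper_means - grand) ** 2).sum()
    ss_cells = r * ((cell_means - grand) ** 2).sum()
    ss_int = ss_cells - ss_part - ss_oper
    ss_err = ((data - cell_means[:, :, None]) ** 2).sum()

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_int = ss_int / ((p - 1) * (o - 1))
    ms_err = ss_err / (p * o * (r - 1))

    repeatability = ms_err                                    # equipment variation
    operator_var = max((ms_oper - ms_int) / (p * r), 0.0)
    interaction_var = max((ms_int - ms_err) / r, 0.0)
    part_var = max((ms_part - ms_int) / (o * r), 0.0)
    grr = repeatability + operator_var + interaction_var
    pct_grr = 100.0 * np.sqrt(grr / (grr + part_var))         # %GRR on a std-dev basis
    return repeatability, operator_var + interaction_var, pct_grr

rng = np.random.default_rng(4)
measurements = 25.0 + rng.normal(0.0, 0.05, size=(10, 3, 3))  # mm; 10 parts, 3 operators, 3 prints
print(gage_rr(measurements))
```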

  11. Using the mouse to model human disease: increasing validity and reproducibility

    Directory of Open Access Journals (Sweden)

    Monica J. Justice

    2016-02-01

    Full Text Available Experiments that use the mouse as a model for disease have recently come under scrutiny because of the repeated failure of data, particularly derived from preclinical studies, to be replicated or translated to humans. The usefulness of mouse models has been questioned because of irreproducibility and poor recapitulation of human conditions. Newer studies, however, point to bias in reporting results and improper data analysis as key factors that limit reproducibility and validity of preclinical mouse research. Inaccurate and incomplete descriptions of experimental conditions also contribute. Here, we provide guidance on best practice in mouse experimentation, focusing on appropriate selection and validation of the model, sources of variation and their influence on phenotypic outcomes, minimum requirements for control sets, and the importance of rigorous statistics. Our goal is to raise the standards in mouse disease modeling to enhance reproducibility, reliability and clinical translation of findings.

  12. Interpretation of positron emission mammography and MRI by experienced breast imaging radiologists: performance and observer reproducibility.

    Science.gov (United States)

    Narayanan, Deepa; Madsen, Kathleen S; Kalinyak, Judith E; Berg, Wendie A

    2011-04-01

    In preparation for a multicenter trial of positron emission mammography (PEM) and MRI in women with newly diagnosed cancer, the two purposes of this study were to validate training of breast imagers in standardized interpretation of PEM and to validate performance of the same specialists interpreting MRI. A 2-hour didactic module was developed to train Mammography Quality Standards Act-qualified radiologist observers to interpret PEM images, consisting of a sample feature analysis lexicon analogous to BI-RADS and 12 sample cases. Observers were then asked to review separate interpretive skills tasks for PEM (49 breasts, 20 [41%] of which were malignant) and MRI (32 breasts, 11 [34%] of which were malignant), describe findings, and give assessments analogous to BI-RADS (category 1, 2, 3, 4A, 4B, 4C, or 5). Demographic experience variables were collected for 36 observers from 15 sites. Performance against histopathologic truth was determined, and interobserver agreement for classifying features and final assessments was evaluated using kappa statistics. Across 36 observers, mean sensitivity, specificity, and area under the curve (AUC) for PEM were 96% (range, 75-100%), 84% (range, 66-97%), and 0.95 (range, 0.82-1.0), respectively. Mean sensitivity, specificity, and AUC for the MRI task were 82% (range, 45-100%), 67% (range, 38-91%), and 0.80 (range, 0.48-0.96), respectively. Interobserver agreement for PEM findings ranged from moderate to substantial, with kappa values of 0.57 for lesion type and 0.63 for final assessments. With minimal training, experienced breast imagers showed high performance in interpreting PEM images. Performance in MRI interpretation by the same observers validated expected clinical practice.
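    As an illustration of the agreement statistics mentioned above, a two-rater Cohen's kappa computed from scratch; the study pooled many observers (for which Fleiss-type statistics would normally be used), so this pairwise version and its category labels are simplifying assumptions.

```python
# Sketch: two-rater Cohen's kappa on categorical assessments; labels are hypothetical.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    cats = sorted(set(rater_a) | set(rater_b))
    idx = {c: i for i, c in enumerate(cats)}
    table = np.zeros((len(cats), len(cats)))
    for a, b in zip(rater_a, rater_b):
        table[idx[a], idx[b]] += 1
    n = table.sum()
    p_obs = np.trace(table) / n                               # observed agreement
    p_exp = (table.sum(axis=1) * table.sum(axis=0)).sum() / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

a = ["1", "2", "4A", "4B", "5", "5", "3", "4A", "2", "1"]     # hypothetical assessments
b = ["1", "2", "4A", "4C", "5", "4C", "3", "4A", "3", "1"]
print("kappa:", cohens_kappa(a, b))
```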

  13. Rainfall variability and extremes over southern Africa: Assessment of a climate model to reproduce daily extremes

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2009-04-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infra-Red Algorithm (MIRA). This dataset covers the period from 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. The ability of a climate model to simulate current climate provides some indication of how much confidence can be applied to its future predictions. In this paper, simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. This concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of rainfall variability over southern Africa. Secondly, the ability of the model to reproduce daily rainfall extremes will

  14. Pharmacokinetic Modelling to Predict FVIII:C Response to Desmopressin and Its Reproducibility in Nonsevere Haemophilia A Patients.

    Science.gov (United States)

    Schütte, Lisette M; van Hest, Reinier M; Stoof, Sara C M; Leebeek, Frank W G; Cnossen, Marjon H; Kruip, Marieke J H A; Mathôt, Ron A A

    2018-04-01

    Nonsevere haemophilia A (HA) patients can be treated with desmopressin. The response of factor VIII activity (FVIII:C) differs between patients and is difficult to predict. Our aims were to describe the FVIII:C response after desmopressin and its reproducibility by population pharmacokinetic (PK) modelling. Retrospective data of 128 nonsevere HA patients (age 7-75 years) receiving an intravenous or intranasal dose of desmopressin were used. PK modelling of FVIII:C was performed by nonlinear mixed-effect modelling. Reproducibility of the FVIII:C response was defined as less than 25% difference in peak FVIII:C between administrations. A total of 623 FVIII:C measurements from 142 desmopressin administrations were available; 14 patients had received two administrations on different occasions. The FVIII:C time profile was best described by a two-compartment model with first-order absorption and elimination. Interindividual variability of the estimated baseline FVIII:C, central volume of distribution and clearance was 37, 43 and 50%, respectively. The most recently measured FVIII:C (FVIII-recent) was significantly associated with the FVIII:C response to desmopressin, with a median FVIII:C increase of 0.47 IU/mL (interquartile range: 0.32-0.65 IU/mL, n = 142). The FVIII:C response was reproducible in 6 out of 14 patients receiving two desmopressin administrations. The FVIII:C response to desmopressin in nonsevere HA patients was adequately described by a population PK model. Large variability in the FVIII:C response was observed, which could only partially be explained by FVIII-recent. The FVIII:C response was not reproducible in a small subset of patients; therefore, monitoring FVIII:C around surgeries or bleeding might be considered. Research is needed to study this further. Schattauer Stuttgart.
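    A hedged sketch of the structural model identified above (two-compartment disposition with first-order absorption and elimination, plus a baseline); all parameter values, the dose scaling and the baseline are illustrative assumptions, not the fitted population estimates.

```python
# Sketch: two-compartment model with first-order absorption for FVIII:C above baseline.
# Parameter values, dose units and baseline are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def pk_rhs(t, y, ka, cl, v1, q, v2):
    depot, a1, a2 = y                              # amounts in depot, central, peripheral
    d_depot = -ka * depot
    d_a1 = ka * depot - (cl / v1) * a1 - (q / v1) * a1 + (q / v2) * a2
    d_a2 = (q / v1) * a1 - (q / v2) * a2
    return [d_depot, d_a1, d_a2]

ka, cl, v1, q, v2 = 2.0, 0.2, 3.0, 0.1, 2.0        # hypothetical parameters
dose, baseline = 1.0, 0.15                         # arbitrary dose; baseline FVIII:C (IU/mL)
sol = solve_ivp(pk_rhs, (0.0, 24.0), [dose, 0.0, 0.0], args=(ka, cl, v1, q, v2),
                t_eval=np.linspace(0.0, 24.0, 97))
fviii_c = baseline + sol.y[1] / v1                 # central-compartment concentration
print("modeled peak FVIII:C (IU/mL):", fviii_c.max())
```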

  15. Reproducibility of the coil positioning in Nb$_3$Sn magnet models through magnetic measurements

    CERN Document Server

    Borgnolutti, F; Ferracin, P; Kashikhin, V V; Sabbi, G; Velev, G; Todesco, E; Zlobin, A V

    2009-01-01

    The random part of the integral field harmonics in a series of superconducting magnets has been used in the past to identify the reproducibility of the coil positioning. Using a magnetic model and a Monte Carlo approach, coil blocks are randomly moved and the amplitude that best fits the magnetic measurements is interpreted as the reproducibility of the coil position. Previous values for r.m.s. coil displacements for Nb-Ti magnets range from 0.05 to 0.01 mm. In this paper, we use this approach to estimate the reproducibility of the coil position for Nb3Sn short models that have been built in the framework of the FNAL core program (HFDA dipoles) and of the LARP program (TQ quadrupoles). Our analysis shows that the Nb3Sn models manufactured in the past years correspond to r.m.s. coil displacements of at least 5 times what is found for the series production of a mature Nb-Ti technology. On the other hand, the variability of the field harmonics along the magnet axis shows that Nb3Sn magnets have already reached va...

  16. Cellular automaton model in the fundamental diagram approach reproducing the synchronized outflow of wide moving jams

    International Nuclear Information System (INIS)

    Tian, Jun-fang; Yuan, Zhen-zhou; Jia, Bin; Fan, Hong-qiang; Wang, Tao

    2012-01-01

    Velocity effect and critical velocity are incorporated into the average space gap cellular automaton model [J.F. Tian, et al., Phys. A 391 (2012) 3129], which was able to reproduce many spatiotemporal dynamics reported by the three-phase theory except the synchronized outflow of wide moving jams. The physics of traffic breakdown has been explained. Various congested patterns induced by the on-ramp are reproduced. It is shown that the occurrence of synchronized outflow and of free outflow of wide moving jams is closely related to drivers' time delay in acceleration at the downstream jam front and to the critical velocity, respectively. -- Highlights: ► Velocity effect is added into the average space gap cellular automaton model. ► The physics of traffic breakdown has been explained. ► The probabilistic nature of traffic breakdown is simulated. ► Various congested patterns induced by the on-ramp are reproduced. ► The occurrence of synchronized outflow of jams depends on drivers' time delay.
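    For orientation only, a classic Nagel-Schreckenberg cellular automaton update; it illustrates generic CA traffic modelling on a ring road but is not the average space gap model with velocity effect and critical velocity studied above, and all parameters are hypothetical.

```python
# Sketch: Nagel-Schreckenberg CA traffic update (generic illustration, NOT the
# average space gap model discussed in the abstract).
import numpy as np

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=np.random.default_rng(5)):
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len          # empty cells ahead of each car
    vel = np.minimum(vel + 1, v_max)                         # acceleration
    vel = np.minimum(vel, gaps)                              # braking to avoid collisions
    slow = rng.random(vel.size) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)        # random slowdown
    return (pos + vel) % road_len, vel

road_len, n_cars = 200, 60
rng = np.random.default_rng(5)
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(500):
    pos, vel = nasch_step(pos, vel, road_len)
print("flow (cars per cell per step):", vel.mean() * n_cars / road_len)
```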

  17. Validation of EURO-CORDEX regional climate models in reproducing the variability of precipitation extremes in Romania

    Science.gov (United States)

    Dumitrescu, Alexandru; Busuioc, Aristita

    2016-04-01

    EURO-CORDEX is the European branch of the international CORDEX initiative that aims to provide improved regional climate change projections for Europe. The main objective of this paper is to document the performance of the individual models in reproducing the variability of precipitation extremes in Romania. Here an ensemble of three EURO-CORDEX regional climate models (RCMs) (scenario RCP4.5) is analysed and inter-compared: DMI-HIRHAM5, KNMI-RACMO2.2 and MPI-REMO. Compared to previous studies, in which RCM validation for the Romanian climate has mainly been performed on the mean state and at station scale, a more quantitative approach to precipitation extremes is proposed. In this respect, to allow a more reliable comparison with observations, a high-resolution daily precipitation gridded data set was used as the observational reference (CLIMHYDEX project). The comparison between the RCM outputs and the observed grid-point values has been made by calculating three extreme precipitation indices, recommended by the Expert Team on Climate Change Detection Indices (ETCCDI), for the 1976-2005 period: R10MM, the annual count of days when precipitation ≥ 10 mm; RX5DAY, the annual maximum 5-day precipitation; and R95P%, the fraction of annual total precipitation due to daily precipitation > 95th percentile. The RCMs' capability to reproduce the mean state of these variables, as well as the main modes of their spatial variability (given by the first three EOF patterns), is analysed. The investigation confirms the ability of the RCMs to simulate the main features of precipitation extreme variability over Romania, but some deficiencies in reproducing their regional characteristics were found (for example, overestimation of the mean state, especially over the extra-Carpathian regions). This work has been realised within the research project "Changes in climate extremes and associated impact in hydrological events in Romania" (CLIMHYDEX), code PN II-ID-2011-2-0073, financed by the Romanian
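    A minimal sketch of the three ETCCDI indices named above, computed for a single grid point and year; the daily series and the wet-day 95th percentile are synthetic placeholders (in practice the percentile comes from a base period).

```python
# Sketch: ETCCDI indices R10MM, RX5DAY and R95P% for one year of daily precipitation.
import numpy as np

def etccdi_indices(pr_daily, wet_day_p95):
    """pr_daily: one year of daily precipitation (mm/day)."""
    r10mm = int((pr_daily >= 10.0).sum())                             # days with >= 10 mm
    rx5day = float(np.convolve(pr_daily, np.ones(5), "valid").max())  # max 5-day total
    wet = pr_daily[pr_daily >= 1.0]                                   # wet days (>= 1 mm)
    r95p = 100.0 * wet[wet > wet_day_p95].sum() / wet.sum()           # % of total from very wet days
    return r10mm, rx5day, r95p

rng = np.random.default_rng(6)
pr = rng.gamma(0.4, 6.0, 365)                      # hypothetical daily precipitation
p95 = np.percentile(pr[pr >= 1.0], 95)             # normally taken from a base period
print(etccdi_indices(pr, p95))
```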

  18. Stratospheric dryness: model simulations and satellite observations

    Directory of Open Access Journals (Sweden)

    J. Lelieveld

    2007-01-01

    Full Text Available The mechanisms responsible for the extreme dryness of the stratosphere have been debated for decades. A key difficulty has been the lack of comprehensive models which are able to reproduce the observations. Here we examine results from the coupled lower-middle atmosphere chemistry general circulation model ECHAM5/MESSy1 together with satellite observations. Our model results match observed temperatures in the tropical lower stratosphere and realistically represent the seasonal and inter-annual variability of water vapor. The model reproduces the very low water vapor mixing ratios (below 2 ppmv) periodically observed at the tropical tropopause near 100 hPa, as well as the characteristic tape recorder signal up to about 10 hPa, providing evidence that the dehydration mechanism is well-captured. Our results confirm that the entry of tropospheric air into the tropical stratosphere is forced by large-scale wave dynamics, whereas radiative cooling regionally decelerates upwelling and can even cause downwelling. Thin cirrus forms in the cold air above cumulonimbus clouds, and the associated sedimentation of ice particles between 100 and 200 hPa reduces water mass fluxes by nearly two orders of magnitude compared to air mass fluxes. Transport into the stratosphere is supported by regional net radiative heating, to a large extent in the outer tropics. During summer very deep monsoon convection over Southeast Asia, centered over Tibet, moistens the stratosphere.

  19. Inter-observer reproducibility of semi-automatic tumor diameter measurement and volumetric analysis in patients with lung cancer.

    Science.gov (United States)

    Dinkel, J; Khalilzadeh, O; Hintze, C; Fabel, M; Puderbach, M; Eichinger, M; Schlemmer, H-P; Thorn, M; Heussel, C P; Thomas, M; Kauczor, H-U; Biederer, J

    2013-10-01

    Therapy monitoring in oncologic patients requires precise measurement methods. In order to improve the precision of measurements, we used a semi-automated generic segmentation algorithm to measure the size of large lung cancer tumors. The reproducibility of computer-assisted measurements was assessed and compared with manual measurements. CT scans of 24 consecutive lung cancer patients who were referred to our hospital over a period of 6 months were analyzed. The tumor sizes were measured manually by 3 independent radiologists, according to World Health Organization (WHO) and the Revised Response Evaluation Criteria in Solid Tumors (RECIST) guidelines. At least 10 months later, measurements were repeated semi-automatically on the same scans by the same radiologists. The inter-observer reproducibility of all measurements was assessed and compared between manual and semi-automated measurements. Manual measurements of the tumor longest diameter were significantly (p < 0.05) smaller compared with the semi-automated measurements. The intra-rater correlation coefficients were significantly higher for measurements of longest diameter (intra-class correlation coefficients: 0.998 vs. 0.986; p < 0.001) and area (0.995 vs. 0.988; p = 0.032) using the semi-automated compared with the manual method. The variation coefficient for manual measurement of the tumor area (WHO guideline, 15.7% vs. 7.3%) and the longest diameter (RECIST guideline, 7.7% vs. 2.7%) was 2-3 times that of semi-automated measurement. By using computer-assisted size assessment in primary lung tumors, inter-observer variability can be reduced to about one-half to one-third of that of standard manual measurements. This indicates a high potential value for therapy monitoring in lung cancer patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. A novel highly reproducible and lethal nonhuman primate model for orthopox virus infection.

    Directory of Open Access Journals (Sweden)

    Marit Kramski

    Full Text Available The intentional re-introduction of Variola virus (VARV), the agent of smallpox, into the human population is of great concern due to its bio-terroristic potential. Moreover, zoonotic infections with Cowpox virus (CPXV) and Monkeypox virus (MPXV) cause severe diseases in humans. Smallpox vaccines presently available can have severe adverse effects that are no longer acceptable. The efficacy and safety of new vaccines and antiviral drugs for use in humans can only be demonstrated in animal models. The existing nonhuman primate models, using VARV and MPXV, need very high viral doses that have to be applied intravenously or intratracheally to induce a lethal infection in macaques. To overcome these drawbacks, the infectivity and pathogenicity of a particular CPXV was evaluated in the common marmoset (Callithrix jacchus). A CPXV named calpox virus was isolated from a lethal orthopox virus (OPV) outbreak in New World monkeys. We demonstrated that marmosets infected with calpox virus, not only via the intravenous but also the intranasal route, reproducibly develop symptoms resembling smallpox in humans. Infected animals died within 1-3 days after onset of symptoms, even when very low infectious viral doses of 5×10² pfu were applied intranasally. Infectious virus was demonstrated in blood, saliva and all organs analyzed. We present the first characterization of a new OPV infection model inducing a disease in common marmosets comparable to smallpox in humans. Intranasal virus inoculation, mimicking the natural route of smallpox infection, led to reproducible infection. In vivo titration resulted in an MID₅₀ (minimal monkey infectious dose, 50%) of 8.3×10² pfu of calpox virus, which is approximately 10,000-fold lower than the MPXV and VARV doses applied in the macaque models. Therefore, the calpox virus/marmoset model is a suitable nonhuman primate model for the validation of vaccines and antiviral drugs. Furthermore, this model can help study mechanisms of OPV pathogenesis.

  1. Reproducibility, reliability and validity of measurements obtained from Cecile3 digital models

    Directory of Open Access Journals (Sweden)

    Gustavo Adolfo Watanabe-Kanno

    2009-09-01

    Full Text Available The aim of this study was to determine the reproducibility, reliability and validity of measurements in digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all the teeth present, the intercanine, interpremolar and intermolar distances, and the overjet and overbite. The plaster models were measured using a digital vernier. The Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The average mean difference of the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in Orthodontics.

  2. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase in cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
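    A hedged, Armitage-Doll-style stand-in for the idea above: sequential oncogenic mutations in a stem cell pool whose size declines with age. All parameter values are hypothetical and chosen only so that the hazard peaks near 80 years; this is not the authors' computational model.

```python
# Illustrative stand-in (not the paper's model): k sequential oncogenic mutations,
# each arriving at rate mu * div_rate per cell-year, in a shrinking stem cell pool.
# All numbers are hypothetical; the hazard scale is arbitrary.
import numpy as np
from math import factorial

def glioma_hazard(age, n0=1e5, decline=0.05, div_rate=10.0, mu=1e-6, k=5):
    n_cells = n0 * np.exp(-decline * age)         # assumed aging-related stem cell loss
    rate = mu * div_rate                          # oncogenic mutations per cell per year
    return n_cells * rate ** k * age ** (k - 1) / factorial(k - 1)

ages = np.arange(0.0, 100.0)
hazard = glioma_hazard(ages)
print("modeled peak hazard at age:", ages[np.argmax(hazard)])   # ~ (k - 1) / decline = 80
```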

  3. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    Science.gov (United States)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and the axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.

  4. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capabilities to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created; three of these layers were used for the input images and the fourth layer was used as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error and mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images from the maximum approach give the smallest estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the least, and the reconstructed resistivities of the blocks are closer to those of the true blocks than with any other combination used. Thus, it is corroborated that when inverse resistivity models are combined, more reliable and detailed information about the geologic models is obtained than by using individual data sets.

  5. Intra- and inter-observer reproducibility of global and regional magnetic resonance feature tracking derived strain parameters of the left and right ventricle

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Björn, E-mail: bjoernschmidt1989@gmx.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Dick, Anastasia, E-mail: anastasia-dick@web.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Treutlein, Melanie, E-mail: melanie-treutlein@web.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Schiller, Petra, E-mail: petra.schiller@uni-koeln.de [Institute of Medical Statistics, Informatics and Epidemiology, University of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Bunck, Alexander C., E-mail: alexander.bunck@uk-koeln.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Maintz, David, E-mail: david.maintz@uk-koeln.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany); Baeßler, Bettina, E-mail: bettina.baessler@uk-koeln.de [Department of Radiology, University Hospital of Cologne, Kerpener Str. 62, D-50937, Cologne (Germany)

    2017-04-15

    Highlights: • Left and right ventricular CMR feature tracking is highly reproducible. • The only exception is radial strain and strain rate. • Sample size estimations are presented as a practical reference for future studies. - Abstract: Objectives: To investigate the reproducibility of regional and global strain and strain rate (SR) parameters of both ventricles and to determine sample sizes for all investigated strain and SR parameters in order to generate a practical reference for future studies. Materials and methods: The study population consisted of 20 healthy individuals and 20 patients with acute myocarditis. Cine sequences in three horizontal long axis views and a stack of short axis views covering the entire left and right ventricle (LV, RV) were retrospectively analysed using a dedicated feature tracking (FT) software algorithm (TOMTEC). For intra-observer analysis, one observer analysed CMR images of all patients and volunteers twice. For inter-observer analysis, three additional blinded observers analysed the same datasets once. Intra- and inter-observer reproducibility were tested in all patients and controls using Bland-Altman analyses, intra-class correlation coefficients (ICCs) and coefficients of variation. Results: Intra-observer reproducibility of global LV strain and SR parameters was excellent (range of ICCs: 0.81–1.00), the only exception being global radial SR with a poor reproducibility (ICC 0.23). On a regional level, basal and midventricular strain and SR parameters were more reproducible when compared to apical parameters. Inter-observer reproducibility of all LV parameters was slightly lower than intra-observer reproducibility, yet still good to excellent for all global and regional longitudinal and circumferential strain and SR parameters (range of ICCs: 0.66–0.93). Similar to the LV, all global RV longitudinal and circumferential strain and SR parameters showed an excellent reproducibility, (range of ICCs: 0.75–0

  6. Intra- and inter-observer reproducibility of global and regional magnetic resonance feature tracking derived strain parameters of the left and right ventricle

    International Nuclear Information System (INIS)

    Schmidt, Björn; Dick, Anastasia; Treutlein, Melanie; Schiller, Petra; Bunck, Alexander C.; Maintz, David; Baeßler, Bettina

    2017-01-01

    Highlights: • Left and right ventricular CMR feature tracking is highly reproducible. • The only exception is radial strain and strain rate. • Sample size estimations are presented as a practical reference for future studies. - Abstract: Objectives: To investigate the reproducibility of regional and global strain and strain rate (SR) parameters of both ventricles and to determine sample sizes for all investigated strain and SR parameters in order to generate a practical reference for future studies. Materials and methods: The study population consisted of 20 healthy individuals and 20 patients with acute myocarditis. Cine sequences in three horizontal long axis views and a stack of short axis views covering the entire left and right ventricle (LV, RV) were retrospectively analysed using a dedicated feature tracking (FT) software algorithm (TOMTEC). For intra-observer analysis, one observer analysed CMR images of all patients and volunteers twice. For inter-observer analysis, three additional blinded observers analysed the same datasets once. Intra- and inter-observer reproducibility were tested in all patients and controls using Bland-Altman analyses, intra-class correlation coefficients (ICCs) and coefficients of variation. Results: Intra-observer reproducibility of global LV strain and SR parameters was excellent (range of ICCs: 0.81–1.00), the only exception being global radial SR with a poor reproducibility (ICC 0.23). On a regional level, basal and midventricular strain and SR parameters were more reproducible when compared to apical parameters. Inter-observer reproducibility of all LV parameters was slightly lower than intra-observer reproducibility, yet still good to excellent for all global and regional longitudinal and circumferential strain and SR parameters (range of ICCs: 0.66–0.93). Similar to the LV, all global RV longitudinal and circumferential strain and SR parameters showed an excellent reproducibility, (range of ICCs: 0.75–0

  7. Mouse Models of Diet-Induced Nonalcoholic Steatohepatitis Reproduce the Heterogeneity of the Human Disease

    Science.gov (United States)

    Machado, Mariana Verdelho; Michelotti, Gregory Alexander; Xie, Guanhua; de Almeida, Thiago Pereira; Boursier, Jerome; Bohnic, Brittany; Guy, Cynthia D.; Diehl, Anna Mae

    2015-01-01

    Background and aims: Non-alcoholic steatohepatitis (NASH), the potentially progressive form of nonalcoholic fatty liver disease (NAFLD), is the pandemic liver disease of our time. Although there are several animal models of NASH, consensus regarding the optimal model is lacking. We aimed to compare features of NASH in the two most widely-used mouse models: methionine-choline deficient (MCD) diet and Western diet. Methods: Mice were fed standard chow, MCD diet for 8 weeks, or Western diet (45% energy from fat, predominantly saturated fat, with 0.2% cholesterol, plus drinking water supplemented with fructose and glucose) for 16 weeks. Liver pathology and metabolic profile were compared. Results: The metabolic profile associated with human NASH was better mimicked by Western diet. Although hepatic steatosis (i.e., triglyceride accumulation) was also more severe, liver non-esterified fatty acid content was lower than in the MCD diet group. NASH was also less severe and less reproducible in the Western diet model, as evidenced by less liver cell death/apoptosis, inflammation, ductular reaction, and fibrosis. Various mechanisms implicated in human NASH pathogenesis/progression were also less robust in the Western diet model, including oxidative stress, ER stress, autophagy deregulation, and hedgehog pathway activation. Conclusion: Feeding mice a Western diet models metabolic perturbations that are common in humans with mild NASH, whereas administration of a MCD diet better models the pathobiological mechanisms that cause human NAFLD to progress to advanced NASH. PMID:26017539

  8. Mouse models of diet-induced nonalcoholic steatohepatitis reproduce the heterogeneity of the human disease.

    Directory of Open Access Journals (Sweden)

    Mariana Verdelho Machado

    Full Text Available Non-alcoholic steatohepatitis (NASH), the potentially progressive form of nonalcoholic fatty liver disease (NAFLD), is the pandemic liver disease of our time. Although there are several animal models of NASH, consensus regarding the optimal model is lacking. We aimed to compare features of NASH in the two most widely-used mouse models: methionine-choline deficient (MCD) diet and Western diet. Mice were fed standard chow, MCD diet for 8 weeks, or Western diet (45% energy from fat, predominantly saturated fat, with 0.2% cholesterol, plus drinking water supplemented with fructose and glucose) for 16 weeks. Liver pathology and metabolic profile were compared. The metabolic profile associated with human NASH was better mimicked by Western diet. Although hepatic steatosis (i.e., triglyceride accumulation) was also more severe, liver non-esterified fatty acid content was lower than in the MCD diet group. NASH was also less severe and less reproducible in the Western diet model, as evidenced by less liver cell death/apoptosis, inflammation, ductular reaction, and fibrosis. Various mechanisms implicated in human NASH pathogenesis/progression were also less robust in the Western diet model, including oxidative stress, ER stress, autophagy deregulation, and hedgehog pathway activation. Feeding mice a Western diet models metabolic perturbations that are common in humans with mild NASH, whereas administration of a MCD diet better models the pathobiological mechanisms that cause human NAFLD to progress to advanced NASH.

  9. Circuit modeling of the electrical impedance: II. Normal subjects and system reproducibility

    International Nuclear Information System (INIS)

    Shiffman, C A; Rutkove, S B

    2013-01-01

    Part I of this series showed that the five-element circuit model accurately mimics impedances measured using multi-frequency electrical impedance myography (MFEIM), focusing on changes brought on by disease. This paper addresses two requirements which must be met if the method is to qualify for clinical use. First, the extracted parameters must be reproducible over long time periods such as those involved in the treatment of muscular disease, and second, differences amongst normal subjects should be attributable to known differences in the properties of healthy muscle. It applies the method to five muscle groups in 62 healthy subjects, closely following the procedure used earlier for the diseased subjects. Test–retest comparisons show that parameters are reproducible at levels from 6 to 16% (depending on the parameter) over time spans of up to 267 days, levels far below the changes occurring in serious disease. Also, variations with age, gender and muscle location are found to be consistent with established expectations for healthy muscle tissue. We conclude that the combination of MFEIM measurements and five-element circuit analysis genuinely reflects properties of muscle and is reliable enough to recommend its use in following neuromuscular disease. (paper)
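
    The paper builds on a five-element circuit fit to MFEIM spectra described in Part I of the series; the exact network topology is defined there. Purely as an illustration of how such a lumped model produces a frequency-dependent impedance, the sketch below evaluates a generic five-element network (a series resistance plus two parallel RC branches) with made-up component values; it is an assumption for illustration, not the authors' circuit.

        import numpy as np

        def z_five_element(freq_hz, r0, r1, c1, r2, c2):
            """Impedance of an illustrative five-element network: R0 in series with two
            parallel RC branches. The actual topology used in the MFEIM papers may differ."""
            w = 2j * np.pi * np.asarray(freq_hz, float)
            z_rc1 = r1 / (1 + w * r1 * c1)     # parallel R-C branch 1
            z_rc2 = r2 / (1 + w * r2 * c2)     # parallel R-C branch 2
            return r0 + z_rc1 + z_rc2

        freqs = np.logspace(3, 6, 50)          # 1 kHz - 1 MHz sweep (illustrative range)
        z = z_five_element(freqs, r0=20.0, r1=60.0, c1=5e-9, r2=120.0, c2=50e-9)
        print("resistance at ~50 kHz:", round(z[np.argmin(abs(freqs - 5e4))].real, 1), "ohm")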

  10. Reproducibility analysis of measurements with a mechanical semiautomatic eye model for evaluation of intraocular lenses

    Science.gov (United States)

    Rank, Elisabet; Traxler, Lukas; Bayer, Natascha; Reutterer, Bernd; Lux, Kirsten; Drauschke, Andreas

    2014-03-01

    Mechanical eye models are used to validate ex vivo the optical quality of intraocular lenses (IOLs). The quality measurement and test instructions for IOLs are defined in the ISO 11979-2. However, it was mentioned in literature that these test instructions could lead to inaccurate measurements in case of some modern IOL designs. Reproducibility of alignment and measurement processes are presented, performed with a semiautomatic mechanical ex vivo eye model based on optical properties published by Liou and Brennan in the scale 1:1. The cornea, the iris aperture and the IOL itself are separately changeable within the eye model. The adjustment of the IOL can be manipulated by automatic decentration and tilt of the IOL in reference to the optical axis of the whole system, which is defined by the connection line of the central point of the artificial cornea and the iris aperture. With the presented measurement setup two quality criteria are measurable: the modulation transfer function (MTF) and the Strehl ratio. First the reproducibility of the alignment process for definition of initial conditions of the lateral position and tilt in reference to the optical axis of the system is investigated. Furthermore, different IOL holders are tested related to the stable holding of the IOL. The measurement is performed by a before-after comparison of the lens position using a typical decentration and tilt tolerance analysis path. Modulation transfer function MTF and Strehl ratio S before and after this tolerance analysis are compared and requirements for lens holder construction are deduced from the presented results.
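
    The two quality criteria named above, MTF and Strehl ratio, can be illustrated with a toy Fourier-optics calculation: the PSF is the squared magnitude of the Fourier transform of the pupil function, the Strehl ratio compares the aberrated peak intensity to the ideal one, and the MTF is the normalised magnitude of the Fourier transform of the PSF. The grid size and the 0.25-wave defocus below are invented for illustration and are unrelated to the instrument described in the paper.

        import numpy as np

        n = 256
        x = np.linspace(-1, 1, n)
        xx, yy = np.meshgrid(x, x)
        rho2 = xx ** 2 + yy ** 2
        aperture = (rho2 <= 1.0).astype(float)        # circular pupil

        def psf(defocus_waves=0.0):
            """PSF as the squared magnitude of the Fourier transform of the pupil function."""
            phase = 2 * np.pi * defocus_waves * (2 * rho2 - 1)   # simple defocus term
            pupil = aperture * np.exp(1j * phase)
            return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

        ideal, aberrated = psf(0.0), psf(0.25)
        strehl = aberrated.max() / ideal.max()        # ratio of peak intensities
        mtf = np.abs(np.fft.fftshift(np.fft.fft2(aberrated)))
        mtf /= mtf.max()                              # normalised 2-D MTF
        print(f"Strehl ratio ~ {strehl:.2f}")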

  11. Hippocampal Astrocyte Cultures from Adult and Aged Rats Reproduce Changes in Glial Functionality Observed in the Aging Brain.

    Science.gov (United States)

    Bellaver, Bruna; Souza, Débora Guerini; Souza, Diogo Onofre; Quincozes-Santos, André

    2017-05-01

    Astrocytes are dynamic cells that maintain brain homeostasis, regulate neurotransmitter systems, and process synaptic information, energy metabolism, antioxidant defenses, and inflammatory response. Aging is a biological process that is closely associated with hippocampal astrocyte dysfunction. In this sense, we demonstrated that hippocampal astrocytes from adult and aged Wistar rats reproduce the glial functionality alterations observed in aging by evaluating several senescence, glutamatergic, oxidative and inflammatory parameters commonly associated with the aging process. Here, we show that the p21 senescence-associated gene and classical astrocyte markers, such as glial fibrillary acidic protein (GFAP), vimentin, and actin, changed their expressions in adult and aged astrocytes. Age-dependent changes were also observed in glutamate transporters (glutamate aspartate transporter (GLAST) and glutamate transporter-1 (GLT-1)) and glutamine synthetase immunolabeling and activity. Additionally, according to in vivo aging, astrocytes from adult and aged rats showed an increase in oxidative/nitrosative stress with mitochondrial dysfunction, an increase in RNA oxidation, NADPH oxidase (NOX) activity, superoxide levels, and inducible nitric oxide synthase (iNOS) expression levels. Changes in antioxidant defenses were also observed. Hippocampal astrocytes also displayed age-dependent inflammatory response with augmentation of proinflammatory cytokine levels, such as TNF-α, IL-1β, IL-6, IL-18, and messenger RNA (mRNA) levels of cyclo-oxygenase 2 (COX-2). Furthermore, these cells secrete neurotrophic factors, including glia-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), S100 calcium-binding protein B (S100B) protein, and transforming growth factor-β (TGF-β), which changed in an age-dependent manner. Classical signaling pathways associated with aging, such as nuclear factor erythroid-derived 2-like 2 (Nrf2), nuclear factor kappa B (NFκ

  12. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model.

    Directory of Open Access Journals (Sweden)

    Tilman Jobst-Schwan

    Full Text Available Until recently, morpholino oligonucleotides have been widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs targeting simultaneously different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve a similar effect as morpholinos but with the reproducibility of stable lines. This multi single guide RNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA-specific insertion/deletion profiles as revealed by deep-sequencing. Immunoblot showed a significant reduction for Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance for a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest.
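
    The abstract refers to a mathematical model for the chance of a biallelic loss-of-function mutation. The details of that model are in the paper; the sketch below only illustrates the basic combinatorics under the simplifying assumptions that each sgRNA mutates each allele independently and that per-sgRNA mutation rates are known (the rates used here are made up).

        def p_allele_hit(per_sgrna_rates):
            """Probability that a given allele carries at least one mutation,
            assuming independent editing events per sgRNA."""
            p_untouched = 1.0
            for p in per_sgrna_rates:
                p_untouched *= 1.0 - p
            return 1.0 - p_untouched

        def p_biallelic(per_sgrna_rates):
            """Probability that both alleles carry at least one mutation (independence assumed)."""
            return p_allele_hit(per_sgrna_rates) ** 2

        # Illustrative per-sgRNA mutation probabilities for four guides (not measured values)
        rates = [0.70, 0.80, 0.60, 0.75]
        print(f"P(allele mutated) = {p_allele_hit(rates):.3f}, "
              f"P(biallelic) = {p_biallelic(rates):.3f}")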

  13. Stochastic model of financial markets reproducing scaling and memory in volatility return intervals

    Science.gov (United States)

    Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.

    2016-11-01

    We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We find also that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.
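
    As a concrete illustration of the central quantity here, a volatility return interval is the waiting time between successive exceedances of a volatility threshold. The sketch below computes such intervals on a synthetic heavy-tailed return series; it is not the authors' agent-based model, and the 95th-percentile threshold is an arbitrary choice.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic heavy-tailed returns (the paper uses NYSE/FOREX data and an agent-based model)
        returns = rng.standard_t(df=4, size=100_000) * 0.01

        volatility = np.abs(returns)               # simple proxy for instantaneous volatility
        q = np.quantile(volatility, 0.95)          # threshold defining "high volatility" events
        event_times = np.flatnonzero(volatility > q)
        intervals = np.diff(event_times)           # return intervals between exceedances

        mean_tau = intervals.mean()
        scaled = intervals / mean_tau              # scaling collapse is studied in tau / <tau>
        print(f"mean interval: {mean_tau:.1f} steps, "
              f"P(tau > 3<tau>): {(scaled > 3).mean():.3f}")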

  14. Synaptic augmentation in a cortical circuit model reproduces serial dependence in visual working memory.

    Directory of Open Access Journals (Sweden)

    Daniel P Bliss

    Full Text Available Recent work has established that visual working memory is subject to serial dependence: current information in memory blends with that from the recent past as a function of their similarity. This tuned temporal smoothing likely promotes the stability of memory in the face of noise and occlusion. Serial dependence accumulates over several seconds in memory and deteriorates with increased separation between trials. While this phenomenon has been extensively characterized in behavior, its neural mechanism is unknown. In the present study, we investigate the circuit-level origins of serial dependence in a biophysical model of cortex. We explore two distinct kinds of mechanisms: stable persistent activity during the memory delay period and dynamic "activity-silent" synaptic plasticity. We find that networks endowed with both strong reverberation to support persistent activity and dynamic synapses can closely reproduce behavioral serial dependence. Specifically, elevated activity drives synaptic augmentation, which biases activity on the subsequent trial, giving rise to a spatiotemporally tuned shift in the population response. Our hybrid neural model is a theoretical advance beyond abstract mathematical characterizations, offers testable hypotheses for physiological research, and demonstrates the power of biological insights to provide a quantitative explanation of human behavior.

  15. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic imaging (fMRI) data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others; i.e., small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands) of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ) to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a
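
    The selection of Pareto-optimal classifiers in the accuracy-versus-reproducibility plane can be illustrated with a generic non-dominated-point search. The candidate values below are synthetic and the trade-off weight is arbitrary; this is only a sketch of the idea, not the FaSTGLZ implementation.

        import numpy as np

        def pareto_front(points):
            """Return indices of non-dominated points (both coordinates maximised)."""
            idx = np.argsort(-points[:, 0])        # sort by accuracy, descending
            front, best_rep = [], -np.inf
            for i in idx:
                if points[i, 1] > best_rep:        # strictly better reproducibility so far
                    front.append(i)
                    best_rep = points[i, 1]
            return front

        rng = np.random.default_rng(1)
        # columns: (cross-validated accuracy, reproducibility of the brain map), synthetic
        candidates = np.column_stack([rng.uniform(0.6, 0.9, 200), rng.uniform(0.2, 0.8, 200)])
        front = pareto_front(candidates)

        cost_ratio = 0.5                           # relative cost of reproducibility vs accuracy
        scores = candidates[front] @ np.array([1.0, cost_ratio])
        best = front[int(np.argmax(scores))]
        print("selected classifier (accuracy, reproducibility):", candidates[best])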

  16. Sprague-Dawley rats are a sustainable and reproducible animal model for induction and study of oral submucous fibrosis

    Directory of Open Access Journals (Sweden)

    Shilpa Maria

    2015-01-01

    Full Text Available Background: Oral submucous fibrosis (OSF) is a chronic debilitating disease predominantly affecting the oral cavity and oropharynx. Characteristic histological traits of OSF include epithelial atrophy, inflammation, and a generalized submucosal fibrosis. Several studies and epidemiological surveys provide substantial evidence that areca nut is the main etiological factor for OSF. Hesitance of patients to undergo biopsy procedure together with clinicians becoming increasingly reluctant to take biopsies in cases of OSF has prompted researchers to develop animal models to study the disease process. Materials and Methods: The present study evaluates the efficacy, sustainability, and reproducibility of using Sprague-Dawley (SD) rats as a possible model in the induction and progression of OSF. Buccal mucosa of SD rats was injected with areca nut and pan masala solutions on alternate days over a period of 48 weeks. The control group was treated with saline. The influence of areca nut and pan masala on the oral epithelium and connective tissue was evaluated by light microscopy. Results: Oral submucous fibrosis-like lesions were seen in both the areca nut and pan masala treated groups. The histological changes observed included: atrophic epithelium, partial or complete loss of rete ridges, juxta-epithelial hyalinization, inflammation and accumulation of dense bundles of collagen fibers subepithelially. Conclusions: Histopathological changes in SD rats following treatment with areca nut and pan masala solutions bear a close semblance to those seen in humans with OSF. The SD rats seem to be a cheap and efficient, sustainable and reproducible model for the induction and development of OSF.

  17. Can the CMIP5 models reproduce interannual to interdecadal southern African summer rainfall variability and their teleconnections?

    Science.gov (United States)

    Dieppois, Bastien; Pohl, Benjamin; Crétat, Julien; Keenlyside, Noel; New, Mark

    2017-04-01

    This study examines for the first time the ability of 28 global climate models from the Coupled Model Intercomparison Project 5 (CMIP5) to reproduce southern African summer rainfall variability and their teleconnections with large-scale modes of climate variability across the dominant timescales. In observations, summer southern African rainfall exhibits three significant timescales of variability over the twentieth century: interdecadal (15-28 years), quasi-decadal (8-13 years), and interannual (2-8 years). Most CMIP5 simulations underestimate southern African summer rainfall variability at these three timescales, and this bias is proportionally stronger from high- to low-frequency. The inter-model spread is as important as the spread between the ensemble members of a given model, which suggests a strong influence of internal climate variability, and/or large model uncertainties. The underestimated amplitudes of rainfall variability at each timescale are linked to unrealistic spatial distributions of these fluctuations over the subcontinent in most CMIP5 models. This is, at least partially, due to a poor representation of the tropical/subtropical teleconnections, which are known to favour wet conditions over southern Africa in the observations. Most CMIP5 realisations (85%) fail at simulating sea-surface temperature (SST) anomalies related to a negative Pacific Decadal Oscillation during wetter conditions at the interdecadal timescale. At the quasi-decadal timescale, only one-third of simulations display a negative Interdecadal Pacific Oscillation during wetter conditions, but these SST anomalies are anomalously shifted westward and poleward when compared to observed anomalies. Similar biases in simulating La Niña SST anomalies are identified in more than 50% of CMIP5 simulations at the interannual timescale. These biases in Pacific SST anomalies result in important shifts in the Walker circulation. This impacts southern African rainfall variability
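
    The three timescales quoted above (interannual, quasi-decadal, interdecadal) can be isolated from a rainfall index by band-pass filtering before comparing observed and simulated variance. The study's actual spectral method is not reproduced here; the sketch below applies a Butterworth band-pass to a synthetic annual index purely to illustrate the decomposition, and the band edges follow the periods named in the abstract.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(series, period_lo, period_hi, fs=1.0, order=4):
            """Butterworth band-pass defined in period space (years); fs = 1 sample per year."""
            nyq = 0.5 * fs
            low = 1.0 / period_hi / nyq
            high = min(1.0 / period_lo / nyq, 0.99)   # keep the upper edge below the Nyquist limit
            b, a = butter(order, [low, high], btype="band")
            return filtfilt(b, a, series)

        rng = np.random.default_rng(5)
        rainfall_index = rng.standard_normal(120)     # synthetic 120-year summer rainfall index

        bands = {"interannual (2-8 yr)": (2, 8),
                 "quasi-decadal (8-13 yr)": (8, 13),
                 "interdecadal (15-28 yr)": (15, 28)}
        for name, (lo, hi) in bands.items():
            comp = bandpass(rainfall_index, lo, hi)
            print(f"{name}: variance fraction {comp.var() / rainfall_index.var():.2f}")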

  18. Model for a reproducible curriculum infrastructure to provide international nurse anesthesia continuing education.

    Science.gov (United States)

    Collins, Shawn Bryant

    2011-12-01

    There are no set standards for nurse anesthesia education in developing countries, yet one of the keys to the standards in global professional practice is competency assurance for individuals. Nurse anesthetists in developing countries have difficulty obtaining educational materials. These difficulties include, but are not limited to, financial constraints, lack of anesthesia textbooks, and distance from educational sites. There is increasing evidence that the application of knowledge in developing countries is failing. One reason is that many anesthetists in developing countries are trained for considerably less than acceptable time periods and are often supervised by poorly trained practitioners, who then pass on less-than-desirable practice skills, thus exacerbating difficulties. Sustainability of development can come only through anesthetists who are both well trained and able to pass on their training to others. The international nurse anesthesia continuing education project was developed in response to the difficulty that nurse anesthetists in developing countries face in accessing continuing education. The purpose of this project was to develop a nonprofit, volunteer-based model for providing nurse anesthesia continuing education that can be reproduced and used in any developing country.

  19. Composite model to reproduce the mechanical behaviour of methane hydrate bearing soils

    Science.gov (United States)

    De la Fuente, Maria

    2016-04-01

    Methane hydrate bearing sediments (MHBS) are naturally-occurring materials containing different components in the pores that may suffer phase changes under relatively small temperature and pressure variations for conditions typically prevailing a few hundred meters below sea level. Their modelling needs to account for heat and mass balance equations of the different components, and several strategies already exist to combine them (e.g., Rutqvist & Moridis, 2009; Sánchez et al. 2014). These equations have to be completed by restrictions and constitutive laws reproducing the phenomenology of heat and fluid flows, phase change conditions and mechanical response. While the formulation of the non-mechanical laws generally includes explicitly the mass fraction of methane in each phase, which allows for a natural update of parameters during phase changes, mechanical laws are, in most cases, stated for the whole solid skeleton (Uchida et al., 2012; Soga et al. 2006). In this paper, a mechanical model is proposed to cope with the response of MHBS. It is based on a composite approach that allows defining the thermo-hydro-mechanical response of mineral skeleton and solid hydrates independently. The global stress-strain-temperature response of the solid phase (grains + hydrate) is then obtained by combining both responses according to an energy principle following the work by Pinyol et al. (2007). In this way, dissociation of MH can be assessed on the basis of the stress state and temperature prevailing locally within the hydrate component. Besides, its structuring effect is naturally accounted for by the model according to patterns of MH inclusions within soil pores. This paper describes the fundamental hypothesis behind the model and its formulation. Its performance is assessed by comparison with laboratory data presented in the literature. An analysis of MHBS response to several stress-temperature paths representing potential field cases is finally presented.

  20. Evaluation of NASA's MERRA Precipitation Product in Reproducing the Observed Trend and Distribution of Extreme Precipitation Events in the United States

    Science.gov (United States)

    Ashouri, Hamed; Sorooshian, Soroosh; Hsu, Kuo-Lin; Bosilovich, Michael G.; Lee, Jaechoul; Wehner, Michael F.; Collow, Allison

    2016-01-01

    This study evaluates the performance of NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) precipitation product in reproducing the trend and distribution of extreme precipitation events. Utilizing the extreme value theory, time-invariant and time-variant extreme value distributions are developed to model the trends and changes in the patterns of extreme precipitation events over the contiguous United States during 1979-2010. The Climate Prediction Center (CPC) U.S. Unified gridded observation data are used as the observational dataset. The CPC analysis shows that the eastern and western parts of the United States are experiencing positive and negative trends in annual maxima, respectively. The continental-scale patterns of change found in MERRA seem to reasonably mirror the observed patterns of change found in CPC. This is not previously expected, given the difficulty in constraining precipitation in reanalysis products. MERRA tends to overestimate the frequency at which the 99th percentile of precipitation is exceeded because this threshold tends to be lower in MERRA, making it easier to be exceeded. This feature is dominant during the summer months. MERRA tends to reproduce spatial patterns of the scale and location parameters of the generalized extreme value and generalized Pareto distributions. However, MERRA underestimates these parameters, particularly over the Gulf Coast states, leading to lower magnitudes in extreme precipitation events. Two issues in MERRA are identified: 1) MERRA shows a spurious negative trend in Nebraska and Kansas, which is most likely related to the changes in the satellite observing system over time that has apparently affected the water cycle in the central United States, and 2) the patterns of positive trend over the Gulf Coast states and along the East Coast seem to be correlated with the tropical cyclones in these regions. The analysis of the trends in the seasonal precipitation extremes indicates that
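
    As background for the extreme-value statistics mentioned above, the sketch below fits a time-invariant GEV distribution to a short synthetic series of annual precipitation maxima with scipy and reads off a return level. The data and parameter values are invented, and scipy's shape-parameter sign convention differs from the usual climate convention, so this only illustrates the workflow, not the paper's analysis.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        # Synthetic annual precipitation maxima for 1979-2010 (mm/day), illustrative only
        annual_max = stats.genextreme.rvs(c=-0.1, loc=40.0, scale=8.0, size=32, random_state=rng)

        shape, loc, scale = stats.genextreme.fit(annual_max)   # scipy's c = -xi (climate convention)
        rp = 20                                                # 20-year return period
        return_level = stats.genextreme.ppf(1 - 1.0 / rp, shape, loc=loc, scale=scale)
        print(f"GEV fit: shape={shape:.2f}, loc={loc:.1f}, scale={scale:.1f}, "
              f"{rp}-yr return level={return_level:.1f} mm/day")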

  1. Conceptual model suitability for reproducing preferential flow paths in waste rock piles

    Science.gov (United States)

    Broda, S.; Blessent, D.; Aubertin, M.

    2012-12-01

    Waste rocks are typically deposited on mining sites forming waste rock piles (WRP). Acid mine drainage (AMD) or contaminated neutral drainage (CND) with metal leaching from the sulphidic minerals adversely impacts soil and water composition on and beyond the mining sites. The deposition method and the highly heterogeneous hydrogeological and geochemical properties of waste rock have a major impact on water and oxygen movement and pore water pressure distribution in the WRP, controlling AMD/CND production. However, the prediction and interpretation of water distribution in WRP is a challenging problem and many attempted numerical investigations of short and long term forecasts were found unreliable. Various forms of unsaturated localized preferential flow processes have been identified, for instance flow in macropores and fractures, heterogeneity-driven and gravity-driven unstable flow, with local hydraulic conductivities reaching several dozen meters per day. Such phenomena have been entirely neglected in numerical WRP modelling and are unattainable with the classical equivalent porous media conceptual approach typically used in this field. An additional complication is that the locations of macropores and fractures are unknown a priori. In this study, modeling techniques originally designed for massive fractured rock aquifers are applied. The properties of the waste rock material used in this modelling study, which comes from the Tio mine at Havre Saint-Pierre, Québec (Canada), were retrieved from laboratory permeability and water retention tests. These column tests were reproduced with the numerical 3D fully-integrated surface/subsurface flow model HydroGeoSphere, where material heterogeneity is represented by means of i) the dual continuum approach, ii) discrete fractures, and iii) a stochastic facies distribution framework using TPROGS. Comparisons with measured pore water pressures, tracer concentrations and exiting water volumes allowed defining limits and

  2. Intra-observer reproducibility and interobserver reliability of the radiographic parameters in the Spinal Deformity Study Group's AIS Radiographic Measurement Manual.

    Science.gov (United States)

    Dang, Natasha Radhika; Moreau, Marc J; Hill, Douglas L; Mahood, James K; Raso, James

    2005-05-01

    Retrospective cross-sectional assessment of the reproducibility and reliability of radiographic parameters. To measure the intra-examiner and interexaminer reproducibility and reliability of salient radiographic features. The management and treatment of adolescent idiopathic scoliosis (AIS) depends on accurate and reproducible radiographic measurements of the deformity. Ten sets of radiographs were randomly selected from a sample of patients with AIS, with initial curves between 20 degrees and 45 degrees. Fourteen measures of the deformity were measured from posteroanterior and lateral radiographs by 2 examiners, and were repeated 5 times at intervals of 3-5 days. Intra-examiner and interexaminer differences were examined. The parameters include measures of curve size, spinal imbalance, sagittal kyphosis and alignment, maximum apical vertebral rotation, T1 tilt, spondylolysis/spondylolisthesis, and skeletal age. Intra-examiner reproducibility was generally excellent for parameters measured from the posteroanterior radiographs but only fair to good for parameters from the lateral radiographs, in which some landmarks were not clearly visible. Of the 13 parameters observed, 7 had excellent interobserver reliability. The measurements from the lateral radiograph were less reproducible and reliable and, thus, may not add value to the assessment of AIS. Taking additional measures encourages a systematic and comprehensive assessment of spinal radiographs.

  3. Observations involving broadband impedance modelling

    International Nuclear Information System (INIS)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modeling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances.

  4. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J.S. [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  5. Energy and nutrient deposition and excretion in the reproducing sow: model development and evaluation

    DEFF Research Database (Denmark)

    Hansen, A V; Strathe, A B; Theil, Peter Kappel

    2014-01-01

    Air and nutrient emissions from swine operations raise environmental concerns. During the reproduction phase, sows consume and excrete large quantities of nutrients. The objective of this study was to develop a mathematical model to describe energy and nutrient partitioning and predict manure excretion and composition and methane emissions on a daily basis. The model was structured to contain gestation and lactation modules, which can be run separately or sequentially, with outputs from the gestation module used as inputs to the lactation module. In the gestating module, energy and protein ... was related to predictions of body fat and protein loss from the lactation model. Nitrogen intake, urine N, fecal N, and milk N were predicted with RMSPE as percentage of observed mean of 9.7, 17.9, 10.0, and 7.7%, respectively. The model provided a framework, but more refinements and improvements in accuracy ...
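
    RMSPE expressed as a percentage of the observed mean, as quoted for the nitrogen flows above, is a simple error metric. The sketch below shows one common definition applied to made-up observed and predicted values, not the study's data.

        import numpy as np

        def rmspe_percent(observed, predicted):
            """Root mean square prediction error, expressed as % of the observed mean."""
            observed = np.asarray(observed, float)
            predicted = np.asarray(predicted, float)
            rmspe = np.sqrt(np.mean((observed - predicted) ** 2))
            return 100.0 * rmspe / observed.mean()

        # Illustrative daily nitrogen intake values (g/d), not the study's data
        obs = [55.0, 60.2, 58.1, 62.5, 57.3]
        pred = [52.4, 63.0, 56.9, 60.1, 59.8]
        print(f"RMSPE = {rmspe_percent(obs, pred):.1f}% of observed mean")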

  6. A novel, comprehensive, and reproducible porcine model for determining the timing of bruises in forensic pathology

    DEFF Research Database (Denmark)

    Barington, Kristiane; Jensen, Henrik Elvang

    2016-01-01

    in order to identify gross and histological parameters that may be useful in determining the age of a bruise. Methods: The mechanical device was able to apply a single reproducible stroke with a plastic tube that was equivalent to being struck by a man. In each of 10 anesthetized pigs, four strokes

  7. Observational modeling of topological spaces

    International Nuclear Information System (INIS)

    Molaei, M.R.

    2009-01-01

    In this paper a model for a multi-dimensional observer is presented using fuzzy theory. A relative form of the Tychonoff theorem is proved. The notion of topological entropy is extended. The persistence of relative topological entropy under the relative conjugate relation is proved.

  8. Inter-observer reproducibility of back surface topography parameters allowing assessment of scoliotic thoracic gibbosity and comparison with two standard postures.

    Science.gov (United States)

    de Sèze, M; Randriaminahisoa, T; Gaunelle, A; de Korvin, G; Mazaux, J-M

    2013-12-01

    The objective of this work was to analyze the inter-observer reproducibility of an upright posture designed to bring out the thoracic humps by folding the upper limbs. The effect of this posture on back surface parameters was also compared with two standard radiological postures. A back surface topography was performed on 46 patients (40 girls and 6 boys) with a minimum of 15° Cobb angle on coronal spinal radiographs. Inter-observer reliability was evaluated using the typical error measurement (TEM) and Intraclass Correlation Coefficient (ICC). Variations between postures were assessed using a Student's t test. The inter-observer reproducibility is good enough for the three postures. The proposed posture leads to significant changes in the sagittal plane as well as in the identification of thoracic humps. This study shows the reproducibility of the proposed posture in order to explore the thoracic humps and highlights its relevance to explore scoliosis with back surface topography systems. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  9. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome

    Directory of Open Access Journals (Sweden)

    Sonal eGoswami

    2012-06-01

    Full Text Available Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD) remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situation known to precipitate PTSD in humans. This study aimed to assess whether the hippocampus-associated deficits observed in the human syndrome are reproduced in this rodent model. Prior to predatory threat, different groups of rats were each tested on one of three object recognition memory tasks that varied in the types of contextual clues (i.e. that require the hippocampus or not) the rats could use to identify novel items. After task completion, the rats were subjected to predatory threat and, one week later, tested on the elevated plus maze. Based on their exploratory behavior in the plus maze, rats were then classified as resilient or PTSD-like and their performance on the pre-threat object recognition tasks compared. The performance of PTSD-like rats was inferior to that of resilient rats but only when subjects relied on an allocentric frame of reference to identify novel items, a process thought to be critically dependent on the hippocampus. Therefore, these results suggest that even prior to trauma, PTSD-like rats show a deficit in hippocampal-dependent functions, as reported in twin studies of human PTSD.

  10. A discrete particle model reproducing collective dynamics of a bee swarm.

    Science.gov (United States)

    Bernardi, Sara; Colombi, Annachiara; Scianna, Marco

    2018-02-01

    In this article, we present a microscopic discrete mathematical model describing collective dynamics of a bee swarm. More specifically, each bee is set to move according to individual strategies and social interactions, the former involving the desire to reach a target destination, the latter accounting for repulsive/attractive stimuli and for alignment processes. The insects tend in fact to remain sufficiently close to the rest of the population, while avoiding collisions, and they are able to track and synchronize their movement to the flight of a given set of neighbors within their visual field. The resulting collective behavior of the bee cloud therefore emerges from non-local short/long-range interactions. Unlike similar approaches in the literature, we test different alignment mechanisms here (i.e., based either on an Euclidean or on a topological neighborhood metric), which have an impact also on the other social components characterizing insect behavior. A series of numerical realizations then shows the phenomenology of the swarm (in terms of pattern configuration, collective productive movement, and flight synchronization) in different regions of the space of free model parameters (i.e., strength of attractive/repulsive forces, extension of the interaction regions). In this respect, constraints on the possible variations of such coefficients are given both by reasonable empirical observations and by analytical results on some stability characteristics of the defined pairwise interaction kernels, which have to ensure a realistic crystalline configuration of the swarm. An analysis of the effect of unconscious random fluctuations of bee dynamics is also provided. Copyright © 2018 Elsevier Ltd. All rights reserved.
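
    To make the ingredients named above concrete (target seeking, short-range repulsion, attraction, alignment over a topological neighbourhood of k nearest neighbours), the sketch below runs a minimal first-order update. All weights, ranges and functional forms are invented for illustration and are not the interaction kernels analysed in the paper.

        import numpy as np

        def step(pos, vel, target, k=6, dt=0.1,
                 w_target=0.5, w_rep=1.0, w_att=0.3, w_align=0.4, r_rep=1.0):
            """One explicit-Euler update of a minimal swarm model (illustrative coefficients)."""
            n = len(pos)
            new_vel = np.zeros_like(vel)
            for i in range(n):
                d = pos - pos[i]
                dist = np.linalg.norm(d, axis=1)
                nbrs = np.argsort(dist)[1:k + 1]        # topological neighbourhood: k nearest bees
                rep = -sum(d[j] / dist[j] for j in nbrs if dist[j] < r_rep)   # short-range repulsion
                att = d[nbrs].mean(axis=0)                                    # attraction to neighbours
                align = vel[nbrs].mean(axis=0) - vel[i]                       # velocity alignment
                to_target = target - pos[i]
                to_target = to_target / (np.linalg.norm(to_target) + 1e-9)
                new_vel[i] = vel[i] + dt * (w_target * to_target + w_rep * rep
                                            + w_att * att + w_align * align)
            return pos + dt * new_vel, new_vel

        rng = np.random.default_rng(3)
        pos = rng.uniform(0, 5, size=(50, 2))
        vel = np.zeros((50, 2))
        for _ in range(100):
            pos, vel = step(pos, vel, target=np.array([20.0, 20.0]))
        print("swarm centre of mass:", pos.mean(axis=0))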

  11. The diverse broad-band light-curves of Swift GRBs reproduced with the cannonball model

    CERN Document Server

    Dado, Shlomo; De Rújula, A

    2009-01-01

    Two radiation mechanisms, inverse Compton scattering (ICS) and synchrotron radiation (SR), suffice within the cannonball (CB) model of long gamma ray bursts (LGRBs) and X-ray flashes (XRFs) to provide a very simple and accurate description of their observed prompt emission and afterglows. Simple as they are, the two mechanisms and the burst environment generate the rich structure of the light curves at all frequencies and times. This is demonstrated for 33 selected Swift LGRBs and XRFs, which are well sampled from early time until late time and well represent the entire diversity of the broad band light curves of Swift LGRBs and XRFs. Their prompt gamma-ray and X-ray emission is dominated by ICS of glory light. During their fast decline phase, ICS is taken over by SR which dominates their broad band afterglow. The pulse shape and spectral evolution of the gamma-ray peaks and the early-time X-ray flares, and even the delayed optical `humps' in XRFs, are correctly predicted. The canonical and non-canonical X-ra...

  12. Intra- and inter-observer reproducibility study of gestational age estimation using three common foetal biometric parameters: Experienced versus inexperienced sonographer

    International Nuclear Information System (INIS)

    Ohagwu, C.C.; Onoduagu, H.I.; Eze, C.U.; Ochie, K.; Ohagwu, C.I.

    2015-01-01

    Aim: To assess reproducibility of estimating gestational age (GA) of the foetus using femur length (FL), biparietal diameter (BPD) and abdominal circumference (AC) within experienced and inexperienced sonographers and between the two. Patients and methods: Two sets of GA estimates each were obtained for FL, BPD and AC by the two observers in 20 normal singleton foetuses. The first estimates for the three biometric parameters were made by the experienced sonographer. Subsequently, the inexperienced sonographer, blind to the estimates of the first observer, obtained his own estimates for the same biometric parameters. After a time interval of ten minutes, the process was repeated for the second set of GA estimates. All the gestational age estimates were made following standard protocol. Statistical analysis was performed by Pearson's and intraclass correlations, coefficient of variation and Bland–Altman plots. Statistical inferences were drawn at p < 0.05. Results: The Pearson's and intraclass correlations between GA estimates within and between both observers from measurement of FL, BPD and AC were very high and statistically significant (p < 0.05). Coefficients of variation for duplicate GA estimates within observers and between observers were quite negligible. Between observers, the first and second GA estimates from FL measurements showed the least variation. Estimates from BPD and AC measurements showed a greater degree of variation between the observers. Conclusion: Reproducibility of GA estimation using FL, BPD and AC within experienced and inexperienced sonographers and between the two was excellent. Therefore, a fresh Nigerian radiography graduate with adequate exposure in obstetric ultrasound can correctly determine the gestational age of the foetus in routine obstetric ultrasound without supervision

  13. An inter-observer Ki67 reproducibility study applying two different assessment methods: on behalf of the Danish Scientific Committee of Pathology, Danish breast cancer cooperative group (DBCG).

    Science.gov (United States)

    Laenkholm, Anne-Vibeke; Grabau, Dorthe; Møller Talman, Maj-Lis; Balslev, Eva; Bak Jylling, Anne Marie; Tabor, Tomasz Piotr; Johansen, Morten; Brügmann, Anja; Lelkaitis, Giedrius; Di Caterino, Tina; Mygind, Henrik; Poulsen, Thomas; Mertz, Henrik; Søndergaard, Gorm; Bruun Rasmussen, Birgitte

    2018-01-01

    In 2011, the St. Gallen Consensus Conference introduced the use of pathology to define the intrinsic breast cancer subtypes by application of immunohistochemical (IHC) surrogate markers ER, PR, HER2 and Ki67 with a specified Ki67 cutoff (>14%) for luminal B-like definition. Reports concerning impaired reproducibility of Ki67 estimation and threshold inconsistency led to the initiation of this quality assurance study (2013-2015). The aim of the study was to investigate inter-observer variation for Ki67 estimation in malignant breast tumors by two different quantification methods (assessment method and count method), including a measure of agreement between methods. Fourteen experienced breast pathologists from 12 pathology departments evaluated 118 slides from a consecutive series of malignant breast tumors. The staining interpretation was performed according to both the Danish and Swedish guidelines. Reproducibility was quantified by intra-class correlation coefficient (ICC) and Light's kappa with dichotomization of observations at the larger than (>) 20% threshold. The agreement between observations by the two quantification methods was evaluated by Bland-Altman plot. For the fourteen raters, the median ranged from 20% to 40% by the assessment method and from 22.5% to 36.5% by the count method. Light's kappa was 0.664 for observation by the assessment method and 0.649 by the count method. The ICC was 0.82 (95% CI: 0.77-0.86) by the assessment method vs. 0.84 (95% CI: 0.80-0.87) by the count method. Although the study in general showed moderate to good inter-observer agreement according to both the ICC and Light's kappa, major discrepancies were still identified, especially in the mid-range of observations. Consequently, for now Ki67 estimation is not implemented in the DBCG treatment algorithm.
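
    Light's kappa, used above alongside the ICC, is the mean of pairwise Cohen's kappa over all rater pairs. The sketch below computes it for synthetic Ki67 estimates dichotomised at the >20% threshold; it assumes scikit-learn is available, and the rating matrix is invented, so it only illustrates the metric, not the study's data handling.

        import numpy as np
        from itertools import combinations
        from sklearn.metrics import cohen_kappa_score

        def lights_kappa(ratings):
            """Light's kappa: mean pairwise Cohen's kappa across all rater pairs.
            ratings: (n_cases, n_raters) array of categorical labels."""
            n_raters = ratings.shape[1]
            kappas = [cohen_kappa_score(ratings[:, a], ratings[:, b])
                      for a, b in combinations(range(n_raters), 2)]
            return float(np.mean(kappas))

        rng = np.random.default_rng(4)
        # Synthetic Ki67 estimates (%) for 118 tumours x 14 raters, then dichotomised at >20%
        ki67 = np.clip(rng.normal(28, 12, size=(118, 1)) + rng.normal(0, 6, size=(118, 14)), 0, 100)
        calls = (ki67 > 20).astype(int)
        print(f"Light's kappa: {lights_kappa(calls):.2f}")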

  14. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed, with almost no deformations. The precision of the free flap modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease.

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S; Kovács, Attila D; Meyerholz, David K; Trantzas, Constantin; Lambertz, Allyn M; Darbro, Benjamin W; Weber, Krystal L; White, Katherine A M; Rheeden, Richard V; Kruer, Michael C; Dacken, Brian A; Wang, Xiao-Jun; Davis, Bryan T; Rohret, Judy A; Struzynski, Jason T; Rohret, Frank A; Weimer, Jill M; Pearce, David A

    2015-11-15

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibly to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated showing some of the clinical manifestations of the disease, however they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions including loss of Purkinje cells (PCs) and altered cytoarchitecture suggesting a developmental etiology for AT and could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S.; Kovács, Attila D.; Meyerholz, David K.; Trantzas, Constantin; Lambertz, Allyn M.; Darbro, Benjamin W.; Weber, Krystal L.; White, Katherine A.M.; Rheeden, Richard V.; Kruer, Michael C.; Dacken, Brian A.; Wang, Xiao-Jun; Davis, Bryan T.; Rohret, Judy A.; Struzynski, Jason T.; Rohret, Frank A.; Weimer, Jill M.; Pearce, David A.

    2015-01-01

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibly to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated showing some of the clinical manifestations of the disease, however they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions including loss of Purkinje cells (PCs) and altered cytoarchitecture suggesting a developmental etiology for AT and could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions. PMID:26374845

  17. Using a 1-D model to reproduce the diurnal variability of SST

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.; Donlon, Craig J.

    2017-01-01

    A preferred approach to bridge the gap between in situ and remotely sensed measurements and obtain diurnal warming estimates at large spatial scales is modeling of the upper ocean temperature. This study uses the one-dimensional General Ocean Turbulence Model (GOTM) to resolve diurnal signals identified from ... forcing fields and is able to resolve daily SST variability seen both from satellite and in situ measurements. As such, and due to its low computational cost, it is proposed as a candidate model for diurnal variability estimates.

  18. Do on/off time series models reproduce emerging stock market comovements?

    OpenAIRE

    Mohamed el hédi Arouri; Fredj Jawadi

    2011-01-01

    Using nonlinear modeling tools, this study investigates the comovements between the Mexican and the world stock markets over the last three decades. While the previous works only highlight some evidence of comovements, our paper aims to specify the different time-varying links and mechanisms characterizing the Mexican stock market through the comparison of two nonlinear error correction models (NECMs). Our findings point out strong evidence of time-varying and nonlinear mean-reversion and lin...

  19. The Computable Catchment: An executable document for model-data software sharing, reproducibility and interactive visualization

    Science.gov (United States)

    Gil, Y.; Duffy, C.

    2015-12-01

    This paper proposes the concept of a "Computable Catchment", which is used to develop a collaborative platform for watershed modeling and data analysis. The object of the research is a sharable, executable document similar to a PDF, but one that includes documentation of the underlying theoretical concepts, interactive computational/numerical resources, linkage to essential data repositories and the ability for interactive model-data visualization and analysis. The executable document for each catchment is stored in the cloud with automatic provisioning and a unique identifier, allowing collaborative model and data enhancements for historical hydroclimatic reconstruction and/or future land use or climate change scenarios to be easily reconstructed or extended. The Computable Catchment adopts metadata standards for naming all variables in the model and the data. The a priori or initial data are derived from national data sources for soils, hydrogeology, climate, and land cover available from the www.hydroterre.psu.edu data service (Leonard and Duffy, 2015). The executable document is based on Wolfram CDF (Computable Document Format) with an interactive open-source reader accessible by any modern computing platform. The CDF file and contents can be uploaded to a website or simply shared as a normal document maintaining all interactive features of the model and data. The Computable Catchment concept represents one application of Geoscience Papers of the Future: an extensible document that combines theory, models, data and analysis that are digitally shared, documented and reused among research collaborators, students, educators and decision makers.

  20. "High-precision, reconstructed 3D model" of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data.

    Science.gov (United States)

    Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi

    2016-01-01

    Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that in comparison with the helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image-reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease

    OpenAIRE

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S.; Kovács, Attila D.; Meyerholz, David K.; Trantzas, Constantin; Lambertz, Allyn M.; Darbro, Benjamin W.; Weber, Krystal L.; White, Katherine A.M.; Rheeden, Richard V.; Kruer, Michael C.; Dacken, Brian A.; Wang, Xiao-Jun; Davis, Bryan T.

    2015-01-01

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibly to cancer and respiratory infections. Although genetic investigations and physiological models have established the l...

  2. Establishing a Reproducible Hypertrophic Scar following Thermal Injury: A Porcine Model

    Directory of Open Access Journals (Sweden)

    Scott J. Rapp, MD

    2015-02-01

    Conclusions: Deep partial-thickness thermal injury to the back of domestic swine produces an immature hypertrophic scar by 10 weeks following burn, with scar thickness appearing to coincide with the location along the dorsal axis. With minimal pig-to-pig variation, we describe our technique to provide a testable immature scar model.

  3. Reproducibility of a novel model of murine asthma-like pulmonary inflammation.

    Science.gov (United States)

    McKinley, L; Kim, J; Bolgos, G L; Siddiqui, J; Remick, D G

    2004-05-01

    Sensitization to cockroach allergens (CRA) has been implicated as a major cause of asthma, especially among inner-city populations. Endotoxin from Gram-negative bacteria has also been investigated for its role in attenuating or exacerbating the asthmatic response. We have created a novel model utilizing house dust extract (HDE) containing high levels of both CRA and endotoxin to induce pulmonary inflammation (PI) and airway hyperresponsiveness (AHR). A potential drawback of this model is that the HDE is in limited supply and preparation of new HDE will not contain the exact components of the HDE used to define our model system. The present study involved testing HDEs collected from various homes for their ability to cause PI and AHR. Dust collected from five homes was extracted in phosphate buffered saline overnight. The levels of CRA and endotoxin in the supernatants varied from 7.1 to 49.5 mg/ml of CRA and 1.7-6 µg/ml of endotoxin in the HDEs. Following immunization and two pulmonary exposures to HDE, all five HDEs induced AHR, PI and plasma IgE levels substantially higher than in normal mice. This study shows that HDE containing high levels of cockroach allergens and endotoxin collected from different sources can induce an asthma-like response in our murine model.

  4. [Renaissance of training in general surgery in Cambodia: a unique experience or reproducible model].

    Science.gov (United States)

    Dumurgier, C; Baulieux, J

    2005-01-01

    Is the new surgical training program at the University of Phnom Penh, Cambodia, a unique experience or can it serve as a model for developing countries? This report describes the encouraging first results of this didactic and hands-on surgical program. Based on their findings, the authors recommend not only continuing the program in Phnom Penh but also proposing slightly modified versions to new medical universities not currently offering specialization in surgery.

  5. Evaluation of Nitinol staples for the Lapidus arthrodesis in a reproducible biomechanical model

    Directory of Open Access Journals (Sweden)

    Nicholas Alexander Russell

    2015-12-01

    While the Lapidus procedure is a widely accepted technique for treatment of hallux valgus, the optimal fixation method to maintain joint stability remains controversial. The purpose of this study was to evaluate the biomechanical properties of new Shape Memory Alloy staples arranged in different configurations in a repeatable 1st Tarsometatarsal arthrodesis model. Ten sawbones models of the whole foot (n=5 per group) were reconstructed using a single dorsal staple or two staples in a delta configuration. Each construct was mechanically tested in dorsal four-point bending, medial four-point bending, dorsal three-point bending and plantar cantilever bending with the staples activated at 37°C. The peak load, stiffness and plantar gapping were determined for each test. Pressure sensors were used to measure the contact force and area of the joint footprint in each group. There was a significant (p < 0.05) increase in peak load in the two staple constructs compared to the single staple constructs for all testing modalities. Stiffness also increased significantly in all tests except dorsal four-point bending. Pressure sensor readings showed a significantly higher contact force at time zero and contact area following loading in the two staple constructs (p < 0.05). Both groups completely recovered any plantar gapping following unloading and restored their initial contact footprint. The biomechanical integrity and repeatability of the models was demonstrated with no construct failures due to hardware or model breakdown. Shape memory alloy staples provide fixation with the ability to dynamically apply and maintain compression across a simulated arthrodesis following a range of loading conditions.

  6. Can lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    Science.gov (United States)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found at the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at larval stage, is of primary importance to understand eel ecology and, hence, its survival. This duration is still a controversial matter since it can range from 7 months to > 4 years depending on the method used to estimate duration. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved to be sensitive to different conditions such as spatial and temporal resolution, release depth, release area and initial distribution. In general, migration was faster when decreasing the depth and increasing the resolution of the model. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, in some cases even faster migration was obtained when locally oriented random migration was taken into account.
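
    A minimal sketch of the Lagrangian idea described above: particles released near a nominal spawning area are advected by a horizontal velocity field until they reach a coastal target longitude, and the elapsed time gives a minimum migration duration. The velocity field, release points and target longitude below are placeholders, not the model or ocean data actually used in the study.

    ```python
    import numpy as np

    # Minimal Lagrangian advection sketch (placeholder velocity field and geometry).
    # Particles drift with purely advective horizontal motion; the step at which a
    # particle crosses the target longitude gives its migration time.

    def velocity(lon, lat):
        """Placeholder eastward drift with a weak meridional component (deg/day)."""
        return 0.25, 0.02 * np.sin(np.radians(lat))

    def migration_time(lon0, lat0, target_lon=-2.0, dt_days=1.0, max_days=2000):
        lon, lat = lon0, lat0
        for day in range(int(max_days)):
            u, v = velocity(lon, lat)
            lon += u * dt_days
            lat += v * dt_days
            if lon >= target_lon:          # reached the Bay of Biscay longitude
                return day * dt_days
        return np.inf                      # particle never arrived

    # Release a few particles near a nominal spawning area and report the fastest.
    starts = [(-60.0, 25.0 + dy) for dy in range(-2, 3)]
    times = [migration_time(lon, lat) for lon, lat in starts]
    print(f"minimum migration duration: {min(times):.0f} days")
    ```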

  7. Realizing the Living Paper using the ProvONE Model for Reproducible Research

    Science.gov (United States)

    Jones, M. B.; Jones, C. S.; Ludäscher, B.; Missier, P.; Walker, L.; Slaughter, P.; Schildhauer, M.; Cuevas-Vicenttín, V.

    2015-12-01

    Science has advanced through traditional publications that codify research results as a permanent part of the scientific record. But because publications are static and atomic, researchers can only cite and reference a whole work when building on prior work of colleagues. The open source software model has demonstrated a new approach in which strong version control in an open environment can nurture an open ecosystem of software. Developers now commonly fork and extend software, giving proper credit, with less repetition, and with confidence in the relationship to the original software. Through initiatives like 'Beyond the PDF', an analogous model has been imagined for open science, in which software, data, analyses, and derived products become first class objects within a publishing ecosystem that has evolved to be finer-grained and is realized through a web of linked open data. We have prototyped a Living Paper concept by developing the ProvONE provenance model for scientific workflows, with prototype deployments in DataONE. ProvONE promotes transparency and openness by describing the authenticity, origin, structure, and processing history of research artifacts and by detailing the steps in computational workflows that produce derived products. To realize the Living Paper, we decompose scientific papers into their constituent products and publish these as compound objects in the DataONE federation of archival repositories. Each individual finding and sub-product of a research project (such as a derived data table, a workflow or script, a figure, an image, or a finding) can be independently stored, versioned, and cited. ProvONE provenance traces link these fine-grained products within and across versions of a paper, and across related papers that extend an original analysis. This allows for open scientific publishing in which researchers extend and modify findings, creating a dynamic, evolving web of results that collectively represent the scientific enterprise. The
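
    As a rough illustration of the decomposition described above, the sketch below represents fine-grained research products (a script, a derived table, a figure) as independently versioned, citable objects linked by simple provenance edges. The class, identifiers and relation names are invented for illustration and are not the ProvONE vocabulary or the DataONE API.

    ```python
    from dataclasses import dataclass, field

    # Illustrative-only sketch of fine-grained, versioned research products linked
    # by provenance edges; not the actual ProvONE vocabulary or DataONE API.

    @dataclass
    class Product:
        pid: str                  # persistent, citable identifier (placeholder)
        kind: str                 # e.g. "script", "data_table", "figure"
        version: int = 1
        derived_from: list = field(default_factory=list)   # provenance links (pids)

    script = Product("doi:10.x/script-v1", "script")
    table = Product("doi:10.x/table-v1", "data_table", derived_from=[script.pid])
    figure = Product("doi:10.x/figure-v1", "figure", derived_from=[table.pid])

    def lineage(product, index):
        """Walk provenance links back to the original inputs."""
        for pid in product.derived_from:
            yield pid
            yield from lineage(index[pid], index)

    index = {p.pid: p for p in (script, table, figure)}
    print(list(lineage(figure, index)))   # ['doi:10.x/table-v1', 'doi:10.x/script-v1']
    ```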

  8. Observations and modeling of the diurnal SST cycle in the North and Baltic Seas

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, J.L.

    2013-01-01

    do not exceed 0.25 K, with a maximum standard deviation of 0.76 and a 0.45 correlation. When random noise is added to the models, their ability to reproduce the statistical properties of the SEVIRI observations improves. The correlation between the observed and modeled anomalies and different...

  9. Comparative analysis of 5 lung cancer natural history and screening models that reproduce outcomes of the NLST and PLCO trials.

    Science.gov (United States)

    Meza, Rafael; ten Haaf, Kevin; Kong, Chung Yin; Erdogan, Ayca; Black, William C; Tammemagi, Martin C; Choi, Sung Eun; Jeon, Jihyoun; Han, Summer S; Munshi, Vidit; van Rosmalen, Joost; Pinsky, Paul; McMahon, Pamela M; de Koning, Harry J; Feuer, Eric J; Hazelton, William D; Plevritis, Sylvia K

    2014-06-01

    The National Lung Screening Trial (NLST) demonstrated that low-dose computed tomography screening is an effective way of reducing lung cancer (LC) mortality. However, optimal screening strategies have not been determined to date and it is uncertain whether lighter smokers than those examined in the NLST may also benefit from screening. To address these questions, it is necessary to first develop LC natural history models that can reproduce NLST outcomes and simulate screening programs at the population level. Five independent LC screening models were developed using common inputs and calibration targets derived from the NLST and the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO). Imputation of missing information regarding smoking, histology, and stage of disease for a small percentage of individuals and diagnosed LCs in both trials was performed. Models were calibrated to LC incidence, mortality, or both outcomes simultaneously. Initially, all models were calibrated to the NLST and validated against PLCO. Models were found to validate well against individuals in PLCO who would have been eligible for the NLST. However, all models required further calibration to PLCO to adequately capture LC outcomes in PLCO never-smokers and light smokers. Final versions of all models produced incidence and mortality outcomes in the presence and absence of screening that were consistent with both trials. The authors developed 5 distinct LC screening simulation models based on the evidence in the NLST and PLCO. The results of their analyses demonstrated that the NLST and PLCO have produced consistent results. The resulting models can be important tools to generate additional evidence to determine the effectiveness of lung cancer screening strategies using low-dose computed tomography. © 2014 American Cancer Society.

  10. QSAR model reproducibility and applicability: a case study of rate constants of hydroxyl radical reaction models applied to polybrominated diphenyl ethers and (benzo-)triazoles.

    Science.gov (United States)

    Roy, Partha Pratim; Kovarich, Simona; Gramatica, Paola

    2011-08-01

    The crucial importance of the three central OECD principles for quantitative structure-activity relationship (QSAR) model validation is highlighted in a case study of tropospheric degradation of volatile organic compounds (VOCs) by OH, applied to two CADASTER chemical classes (PBDEs and (benzo-)triazoles). The application of any QSAR model to chemicals without experimental data largely depends on model reproducibility by the user. The reproducibility of an unambiguous algorithm (OECD Principle 2) is guaranteed by redeveloping MLR models based on both an updated version of the DRAGON software for molecular descriptor calculation and some freely available online descriptors. The Genetic Algorithm has confirmed its ability to always select the most informative descriptors independently of the input pool of variables. The ability of the GA-selected descriptors to model chemicals not used in model development is verified by three different splittings (random by response, K-ANN and K-means clustering), thus ensuring the external predictivity of the new models, independently of the training/prediction set composition (OECD Principle 5). The relevance of checking the structural applicability domain becomes very evident on comparing the predictions for CADASTER chemicals, using the new models proposed herein, with those obtained by EPI Suite. Copyright © 2011 Wiley Periodicals, Inc.
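
    A compact, assumption-laden sketch of the reproducibility idea in the abstract above: refit a multiple linear regression on a handful of descriptors and check external predictivity on a held-out split. The descriptor matrix and response values below are synthetic placeholders; the real work uses DRAGON descriptors, genetic-algorithm variable selection and OECD-compliant validation, none of which is reproduced here.

    ```python
    import numpy as np

    # Synthetic sketch: fit an MLR QSAR-style model on a training split and check
    # external predictivity (Q2_ext) on a prediction split. Descriptors and the
    # response are random placeholders, not DRAGON descriptors or real rate constants.

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3))                    # 40 "compounds", 3 "descriptors"
    y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.2, size=40)

    train, test = np.arange(0, 30), np.arange(30, 40)
    A = np.column_stack([np.ones(train.size), X[train]])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

    A_test = np.column_stack([np.ones(test.size), X[test]])
    y_hat = A_test @ coef
    press = np.sum((y[test] - y_hat) ** 2)
    ss_tot = np.sum((y[test] - y[train].mean()) ** 2)
    q2_ext = 1.0 - press / ss_tot
    print(f"external Q2 = {q2_ext:.3f}")
    ```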

  11. Eccentric Contraction-Induced Muscle Injury: Reproducible, Quantitative, Physiological Models to Impair Skeletal Muscle’s Capacity to Generate Force

    Science.gov (United States)

    Call, Jarrod A.; Lowe, Dawn A.

    2018-01-01

    In order to investigate the molecular and cellular mechanisms of muscle regeneration, an experimental injury model is required. Advantages of eccentric contraction-induced injury are that it is a controllable, reproducible, and physiologically relevant model to cause muscle injury, with injury being defined as a loss of force generating capacity. While eccentric contractions can be incorporated into conscious animal study designs such as downhill treadmill running, electrophysiological approaches to elicit eccentric contractions and examine muscle contractility, for example before and after the injurious eccentric contractions, allow researchers to circumvent common issues in determining muscle function in a conscious animal (e.g., unwillingness to participate). Herein, we describe in vitro and in vivo methods that are reliable, repeatable, and truly maximal because the muscle contractions are evoked in a controlled, quantifiable manner independent of subject motivation. Both methods can be used to initiate eccentric contraction-induced injury and are suitable for monitoring functional muscle regeneration hours to days to weeks post-injury. PMID:27492161

  12. Eccentric Contraction-Induced Muscle Injury: Reproducible, Quantitative, Physiological Models to Impair Skeletal Muscle's Capacity to Generate Force.

    Science.gov (United States)

    Call, Jarrod A; Lowe, Dawn A

    2016-01-01

    In order to investigate the molecular and cellular mechanisms of muscle regeneration, an experimental injury model is required. Advantages of eccentric contraction-induced injury are that it is a controllable, reproducible, and physiologically relevant model to cause muscle injury, with injury being defined as a loss of force generating capacity. While eccentric contractions can be incorporated into conscious animal study designs such as downhill treadmill running, electrophysiological approaches to elicit eccentric contractions and examine muscle contractility, for example before and after the injurious eccentric contractions, allow researchers to circumvent common issues in determining muscle function in a conscious animal (e.g., unwillingness to participate). Herein, we describe in vitro and in vivo methods that are reliable, repeatable, and truly maximal because the muscle contractions are evoked in a controlled, quantifiable manner independent of subject motivation. Both methods can be used to initiate eccentric contraction-induced injury and are suitable for monitoring functional muscle regeneration hours to days to weeks post-injury.

  13. [Reproducing and evaluating a rabbit model of multiple organ dysfunction syndrome after cardiopulmonary resuscitation resulted from asphyxia].

    Science.gov (United States)

    Zhang, Dong; Li, Nan; Chen, Ying; Wang, Yu-shan

    2013-02-01

    To evaluate the reproduction of a model of post resuscitation multiple organ dysfunction syndrome (PR-MODS) after cardiac arrest (CA) in rabbits, in order to provide new methods for post-CA treatment. Thirty-five rabbits were randomly divided into three groups, the sham group (n=5), the 7-minute asphyxia group (n=15), and the 8-minute asphyxia group (n=15). The asphyxia CA model was reproduced with tracheal occlusion. After cardiopulmonary resuscitation (CPR), the rate of recovery of spontaneous circulation (ROSC), the mortality at different time points and the incidence of systemic inflammatory response syndrome (SIRS) were observed in the two asphyxia groups. Creatine kinase isoenzyme (CK-MB), alanine aminotransferase (ALT), creatinine (Cr), glucose (Glu) and arterial partial pressure of oxygen (PaO2) levels in blood were measured in the two asphyxia groups before CPR and 12, 24 and 48 hours after ROSC. Surviving rabbits were euthanized at 48 hours after ROSC, and heart, brain, lung, kidney, liver, and intestine were harvested for pathological examination using light microscopy. PR-MODS after CA was defined based on the function of main organs and their pathological changes. (1) The incidence of ROSC was 100.0% in the 7-minute asphyxia group and 86.7% in the 8-minute asphyxia group, respectively (P>0.05). The 6-hour mortality in the 8-minute asphyxia group was significantly higher than that in the 7-minute asphyxia group (46.7% vs. 6.7%, P<0.05). (2) There was a variety of organ dysfunctions in surviving rabbits after ROSC, including chemosis, respiratory distress, hypotension, abdominal distension, weakened or absent bowel peristalsis and oliguria. (3) There was no SIRS or associated changes in major organ function in the sham group. SIRS was observed at 12-24 hours after ROSC in the two asphyxia groups. CK-MB was increased significantly at 12 hours after ROSC compared with that before asphyxia (7-minute asphyxia group: 786.88±211.84 U/L vs. 468.20±149.45 U/L, 8

  14. Assessing trends in observed and modelled climate extremes over Australia in relation to future projections

    International Nuclear Information System (INIS)

    Alexander, Lisa

    2007-01-01

    Full text: Nine global coupled climate models were assessed for their ability to reproduce observed trends in a set of indices representing temperature and precipitation extremes over Australia. Observed trends for 1957-1999 were compared with individual and multi-modelled trends calculated over the same period. When averaged across Australia the magnitude of trends and interannual variability of temperature extremes were well simulated by most models, particularly for the warm nights index. Except for consecutive dry days, the majority of models also reproduced the correct sign of trend for precipitation extremes. A bootstrapping technique was used to show that most models produce plausible trends when averaged over Australia, although only heavy precipitation days simulated from the multi-model ensemble showed significant skill at reproducing the observed spatial pattern of trends. Two of the models with output from different forcings showed that only with anthropogenic forcing included could the models capture the observed areally averaged trend for some of the temperature indices, but the forcing made little difference to the models' ability to reproduce the spatial pattern of trends over Australia. Future projected changes in extremes using three emissions scenarios were also analysed. Australia shows a shift towards significant warming of temperature extremes with much longer dry spells interspersed with periods of increased extreme precipitation irrespective of the scenario used. More work is required to determine whether regional projected changes over Australia are robust
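
    The bootstrapping step mentioned above can be illustrated in a few lines: resample the annual index values with replacement, refit the linear trend each time, and ask whether the observed trend falls outside the resampled distribution. The series below is synthetic; the study itself used gridded model and observational indices over Australia.

    ```python
    import numpy as np

    # Sketch of a bootstrap check on a linear trend in an annual extremes index.
    # The series is synthetic; replace with an observed or modelled index.

    rng = np.random.default_rng(42)
    years = np.arange(1957, 2000)
    index = 0.02 * (years - years[0]) + rng.normal(scale=0.3, size=years.size)

    obs_trend = np.polyfit(years, index, 1)[0]

    # Null distribution: destroy the time ordering by resampling the values.
    boot = np.array([
        np.polyfit(years, rng.choice(index, size=index.size, replace=True), 1)[0]
        for _ in range(2000)
    ])
    p_two_sided = np.mean(np.abs(boot) >= abs(obs_trend))
    print(f"trend = {obs_trend:.4f} per yr, bootstrap p = {p_two_sided:.3f}")
    ```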

  15. Reproducing the organic matter model of anthropogenic dark earth of Amazonia and testing the ecotoxicity of functionalized charcoal compounds

    Directory of Open Access Journals (Sweden)

    Carolina Rodrigues Linhares

    2012-05-01

    The objective of this work was to obtain organic compounds similar to the ones found in the organic matter of anthropogenic dark earth of Amazonia (ADE) using a chemical functionalization procedure on activated charcoal, as well as to determine their ecotoxicity. Based on the study of the organic matter from ADE, an organic model was proposed and an attempt to reproduce it was described. Activated charcoal was oxidized with the use of sodium hypochlorite at different concentrations. Nuclear magnetic resonance was performed to verify whether the spectra of the obtained products were similar to the ones of humic acids from ADE. The similarity between spectra indicated that the obtained products were polycondensed aromatic structures with carboxyl groups: a soil amendment that can contribute to soil fertility and to its sustainable use. An ecotoxicological test with Daphnia similis was performed on the more soluble fraction (fulvic acids) of the produced soil amendment. Aryl chloride was formed during the synthesis of the organic compounds from activated charcoal functionalization and partially removed through a purification process. However, it is probable that some aryl chloride remained in the final product, since the ecotoxicological test indicated that the chemically functionalized soil amendment is moderately toxic.

  16. Validity, reliability, and reproducibility of linear measurements on digital models obtained from intraoral and cone-beam computed tomography scans of alginate impressions

    NARCIS (Netherlands)

    Wiranto, Matthew G.; Engelbrecht, W. Petrie; Nolthenius, Heleen E. Tutein; van der Meer, W. Joerd; Ren, Yijin

    INTRODUCTION: Digital 3-dimensional models are widely used for orthodontic diagnosis. The aim of this study was to assess the validity, reliability, and reproducibility of digital models obtained from the Lava Chairside Oral scanner (3M ESPE, Seefeld, Germany) and cone-beam computed tomography scans

  17. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models

  18. Coupled RipCAS-DFLOW (CoRD) Software and Data Management System for Reproducible Floodplain Vegetation Succession Modeling

    Science.gov (United States)

    Turner, M. A.; Miller, S.; Gregory, A.; Cadol, D. D.; Stone, M. C.; Sheneman, L.

    2016-12-01

    We present the Coupled RipCAS-DFLOW (CoRD) modeling system created to encapsulate the workflow to analyze the effects of stream flooding on vegetation succession. CoRD provides an intuitive command-line and web interface to run DFLOW and RipCAS in succession over many years automatically, which is a challenge because, for our application, DFLOW must be run on a supercomputing cluster via the PBS job scheduler. RipCAS is a vegetation succession model, and DFLOW is a 2D open channel flow model. Data adaptors have been developed to seamlessly convert DFLOW output data into RipCAS inputs, and vice versa. CoRD provides automated statistical analysis and visualization, plus automatic syncing of input and output files and model run metadata to the hydrological data management system HydroShare using its excellent Python REST client. This combination of technologies and data management techniques allows the results to be shared with collaborators and eventually published. Perhaps most importantly, it allows results to be easily reproduced via either the command-line or web user interface. This system is a result of collaboration between software developers and hydrologists participating in the Western Consortium for Watershed Analysis, Visualization, and Exploration (WC-WAVE). Because of the computing-intensive nature of this particular workflow, including automating job submission/monitoring and data adaptors, software engineering expertise is required. However, the hydrologists provide the software developers with a purpose and ensure a useful, intuitive tool is developed. Our hydrologists contribute software, too: RipCAS was developed from scratch by hydrologists on the team as a specialized, open-source version of the Computer Aided Simulation Model for Instream Flow and Riparia (CASiMiR) vegetation model; our hydrologists running DFLOW provided numerous examples and help with the supercomputing system. This project is written in Python, a popular language in the
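
    The data-adaptor idea described above can be sketched as a small function that reads one model's output and writes the next model's input, with the actual file formats abstracted away. Column names, units and file layouts below are invented placeholders; the real CoRD adaptors target the specific DFLOW and RipCAS formats.

    ```python
    import csv

    # Illustrative adaptor sketch: convert hypothetical DFLOW-style output (depth in m,
    # shear stress in Pa per cell) into a hypothetical RipCAS-style input table.
    # Real CoRD adaptors handle the actual DFLOW/RipCAS file formats.

    def dflow_to_ripcas(dflow_csv, ripcas_csv, flood_depth_threshold=0.05):
        with open(dflow_csv, newline="") as src, open(ripcas_csv, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=["cell_id", "inundated", "shear_pa"])
            writer.writeheader()
            for row in reader:
                writer.writerow({
                    "cell_id": row["cell_id"],
                    "inundated": int(float(row["depth_m"]) > flood_depth_threshold),
                    "shear_pa": f"{float(row['shear_pa']):.3f}",
                })

    # Example call (file names are placeholders):
    # dflow_to_ripcas("dflow_out.csv", "ripcas_in.csv")
    ```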

  19. Assessment of a numerical model to reproduce event‐scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Measures, R.; Hicks, D. M.; Brasington, J.

    2016-01-01

    Numerical morphological modeling of braided rivers, using a physics‐based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth‐averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high‐flow event. Evaluation of model performance primarily focused upon using high‐resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach‐scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers. PMID:27708477

  20. Assessment of a numerical model to reproduce event-scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Williams, R. D.; Measures, R.; Hicks, D. M.; Brasington, J.

    2016-08-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
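
    The observed-versus-predicted comparison described in the two records above rests on simple DEM-of-Difference arithmetic: subtract the pre-event surface from the post-event surface, separate positive (deposition) and negative (erosion) cells, and multiply by cell area to get volumes. The sketch below assumes two co-registered elevation grids and a uniform cell size; the grids and the detection threshold are placeholders.

    ```python
    import numpy as np

    # DEM-of-Difference sediment budget sketch: volumes of erosion and deposition
    # from two co-registered elevation grids (synthetic here), ignoring changes
    # smaller than a placeholder detection threshold.

    def sediment_budget(dem_before, dem_after, cell_area_m2=1.0, threshold_m=0.05):
        dod = dem_after - dem_before
        dod = np.where(np.abs(dod) < threshold_m, 0.0, dod)   # below-detection changes
        deposition = dod[dod > 0].sum() * cell_area_m2
        erosion = -dod[dod < 0].sum() * cell_area_m2
        return erosion, deposition, deposition - erosion

    rng = np.random.default_rng(1)
    before = rng.normal(100.0, 0.5, size=(200, 300))
    after = before + rng.normal(0.0, 0.1, size=before.shape)
    ero, dep, net = sediment_budget(before, after, cell_area_m2=0.25)
    print(f"erosion {ero:.1f} m3, deposition {dep:.1f} m3, net {net:.1f} m3")
    ```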

  1. Modeling of the cloud and radiation processes observed during SHEBA

    Science.gov (United States)

    Du, Ping; Girard, Eric; Bertram, Allan K.; Shupe, Matthew D.

    2011-09-01

    Six microphysics schemes implemented in the climate version of Environment Canada's Global Environmental Multiscale (GEM) model are used to simulate the cloud and radiation processes observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) field experiment. The simplest microphysics scheme (SUN) has one prognostic variable: the total cloud water content. The second microphysics scheme (MLO) has 12 prognostic variables. The four other microphysics schemes are modified versions of MLO. A new parameterization for heterogeneous ice nucleation based on laboratory experiments is included in these versions of MLO. One is for uncoated ice nuclei (ML-NAC) and another is for sulfuric acid coated ice nuclei (ML-AC). ML-AC and ML-NAC have been developed to distinguish non-polluted and polluted air masses, the latter being common over the Arctic during winter and spring. A sensitivity study, in which the dust concentration is reduced by a factor of 5 (ML-AC-test and ML-NAC-test), is also performed to assess the sensitivity of the results to the dust concentration. Results show that SUN, ML-AC and ML-AC-test reproduce quite well the downward longwave radiation and cloud radiative forcing during the cold season. The good results obtained with SUN are due to compensating errors: it overestimates cloud fraction and underestimates cloud liquid water path during winter. ML-AC and ML-AC-test reproduce all these variables and their relationships quite well. MLO, ML-NAC and ML-NAC-test underestimate the cloud liquid water path and cloud fraction during the cold season, which leads to an underestimation of the downward longwave radiation at the surface. During summer, all versions of the model underestimate the downward shortwave radiation at the surface. ML-AC and ML-NAC overestimate the total cloud water during the warm season; however, they reproduce relatively well the relationships between cloud radiative forcing and cloud microstructure, which is not the case for the most simple

  2. Observations and Modeling of Atmospheric Radiance Structure

    National Research Council Canada - National Science Library

    Wintersteiner, Peter

    2001-01-01

    The overall purpose of the work that we have undertaken is to provide new capabilities for observing and modeling structured radiance in the atmosphere, particularly the non-LTE regions of the atmosphere...

  3. Model for behavior observation training programs

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.

    1987-01-01

    Continued behavior observation is mandated by ANSI/ANS 3.3. This paper presents a model for behavior observation training that is in accordance with this standard and the recommendations contained in US NRC publications. The model includes seventeen major topics or activities. Ten of these are discussed: Pretesting of supervisor's knowledge of behavior observation requirements, explanation of the goals of behavior observation programs, why behavior observation training programs are needed (legal and psychological issues), early indicators of emotional instability, use of videotaped interviews to demonstrate significant psychopathology, practice recording behaviors, what to do when unusual behaviors are observed, supervisor rationalizations for noncompliance, when to be especially vigilant, and prevention of emotional instability

  4. The Need for Reproducibility

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-06-27

    The purpose of this presentation is to consider issues of reproducibility; specifically, it asks whether bitwise reproducible computation is possible, whether computational research in DOE should improve its publication process, and whether reproducible results can be achieved apart from the peer review process.

  5. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination or reproducibility of results is very poor. On the other hand, a defect can be detected on each subsequent examination, giving high reliability, and still have poor reproducibility of results

  6. Inhibition of basophil activation by histamine: a sensitive and reproducible model for the study of the biological activity of high dilutions.

    Science.gov (United States)

    Sainte-Laudy, J; Belon, Ph

    2009-10-01

    (another human basophil activation marker). Results were expressed as the mean fluorescence intensity of the CD203c positive population (MFI-CD203c) and an activation index calculated by an algorithm. For the mouse basophil model, histamine was measured spectrofluorimetrically. The main result obtained over 28 years of work was the demonstration of a reproducible inhibition of human basophil activation by high dilutions of histamine, with the effect peaking in the range of 15-17CH. The effect was not significant when histamine was replaced by histidine (a histamine precursor) or when cimetidine (a histamine H2 receptor antagonist) was added to the incubation medium. These results were confirmed by flow cytometry. Using the latter technique, we also showed that 4-Methyl histamine (H2 agonist) induced a similar effect, in contrast to 1-Methyl histamine, an inactive histamine metabolite. Using the mouse model, we showed that histamine high dilutions, in the same range of dilutions, inhibited histamine release. Successively, using different models to study human and murine basophil activation, we demonstrated that high dilutions of histamine, in the range of 15-17CH, induce a reproducible biological effect. This phenomenon has been confirmed by a multi-center study using the HBDT model and by at least three independent laboratories using flow cytometry. The specificity of the observed effect was confirmed versus the water controls at the same dilution level by the absence of biological activity of inactive compounds such as histidine and 1-Methyl histamine and by the reversibility of this effect in the presence of a histamine H2 receptor antagonist.

  7. REPRODUCING THE OBSERVED ABUNDANCES IN RCB AND HdC STARS WITH POST-DOUBLE-DEGENERATE MERGER MODELS—CONSTRAINTS ON MERGER AND POST-MERGER SIMULATIONS AND PHYSICS PROCESSES

    International Nuclear Information System (INIS)

    Menon, Athira; Herwig, Falk; Denissenkov, Pavel A.; Clayton, Geoffrey C.; Staff, Jan; Pignatari, Marco; Paxton, Bill

    2013-01-01

    The R Coronae Borealis (RCB) stars are hydrogen-deficient, variable stars that are most likely the result of He-CO WD mergers. They display extremely low oxygen isotopic ratios, 16O/18O ≅ 1-10, 12C/13C ≥ 100, and enhancements up to 2.6 dex in F and in s-process elements from Zn to La, compared to solar. These abundances provide stringent constraints on the physical processes during and after the double-degenerate merger. As shown previously, O-isotopic ratios observed in RCB stars cannot result from the dynamic double-degenerate merger phase, and we now investigate the role of the long-term one-dimensional spherical post-merger evolution and nucleosynthesis based on realistic hydrodynamic merger progenitor models. We adopt a model for extra envelope mixing to represent processes driven by rotation originating in the dynamical merger. Comprehensive nucleosynthesis post-processing simulations for these stellar evolution models reproduce, for the first time, the full range of the observed abundances for almost all the elements measured in RCB stars: 16O/18O ratios between 9 and 15, C-isotopic ratios above 100, and ∼1.4-2.35 dex F enhancements, along with enrichments in s-process elements. The nucleosynthesis processes in our models constrain the length and temperature in the dynamic merger shell-of-fire feature as well as the envelope mixing in the post-merger phase. s-process elements originate either in the shell-of-fire merger feature or during the post-merger evolution, but the contribution from the asymptotic giant branch progenitors is negligible. The post-merger envelope mixing must eventually cease ∼10^6 yr after the dynamic merger phase before the star enters the RCB phase

  8. Skills of General Circulation and Earth System Models in reproducing streamflow to the ocean: the case of Congo river

    Science.gov (United States)

    Santini, M.; Caporaso, L.

    2017-12-01

    Despite the importance of water resources in the context of climate change, it is still difficult to correctly simulate the freshwater cycle over land via General Circulation and Earth System Models (GCMs and ESMs). Existing efforts from the Coupled Model Intercomparison Project Phase 5 (CMIP5) were mainly devoted to the validation of atmospheric variables like temperature and precipitation, with little attention to discharge. Here we investigate the present-day performance of GCMs and ESMs participating in CMIP5 in simulating the discharge of the river Congo to the sea, thanks to: i) the long-term availability of discharge data for the Kinshasa hydrological station, representative of more than 95% of the water flowing in the whole catchment; and ii) the river's still limited influence from human intervention, which enables comparison with the (mostly) natural streamflow simulated within CMIP5. Our findings suggest that most models overestimate the streamflow seasonal cycle, especially in late winter and spring, while overestimation and variability across models are lower in late summer. Weighted ensemble means are also calculated, based on the simulations' performance according to several metrics, showing some improvement of the results. Although simulated inter-monthly and inter-annual percent anomalies do not appear significantly different from those in observed data, when translated into well consolidated indicators of drought attributes (frequency, magnitude, timing, duration), usually adopted for more immediate communication to stakeholders and decision makers, such anomalies can be misleading. These inconsistencies produce incorrect assessments for water management planning and infrastructure (e.g. dams or irrigated areas), especially if models are used instead of measurements, as in the case of ungauged basins or basins with insufficient data, as well as when relying on models for future estimates without a preliminary quantification of model biases.

  9. From Peer-Reviewed to Peer-Reproduced in Scholarly Publishing: The Complementary Roles of Data Models and Workflows in Bioinformatics.

    Directory of Open Access Journals (Sweden)

    Alejandra González-Beltrán

    Reproducing the results from a scientific paper can be challenging due to the absence of data and the computational tools required for their analysis. In addition, details relating to the procedures used to obtain the published results can be difficult to discern due to the use of natural language when reporting how experiments have been performed. The Investigation/Study/Assay (ISA), Nanopublications (NP), and Research Objects (RO) models are conceptual data modelling frameworks that can structure such information from scientific papers. Computational workflow platforms can also be used to reproduce analyses of data in a principled manner. We assessed the extent to which ISA, NP, and RO models, together with the Galaxy workflow system, can capture the experimental processes and reproduce the findings of a previously published paper reporting on the development of SOAPdenovo2, a de novo genome assembler. Executable workflows were developed using Galaxy, which reproduced results that were consistent with the published findings. A structured representation of the information in the SOAPdenovo2 paper was produced by combining the use of ISA, NP, and RO models. By structuring the information in the published paper using these data and scientific workflow modelling frameworks, it was possible to explicitly declare elements of experimental design, variables, and findings. The models served as guides in the curation of scientific information, and this led to the identification of inconsistencies in the original published paper, thereby allowing its authors to publish corrections in the form of an erratum. SOAPdenovo2 scripts, data, and results are available through the GigaScience Database: http://dx.doi.org/10.5524/100044; the workflows are available from GigaGalaxy: http://galaxy.cbiit.cuhk.edu.hk; and the representations using the ISA, NP, and RO models are available through the SOAPdenovo2 case study website http://isa-tools.github.io/soapdenovo2

  10. A Community Data Model for Hydrologic Observations

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services, hosted by the San Diego Supercomputer Center, make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Simple Object Access Protocol (SOAP) capability. This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis
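
    The abstract's premise, a relational database at the single-observation level with enough metadata to interpret each value, can be sketched with a couple of tables and one query. The schema below is a deliberately minimal stand-in, not the actual CUAHSI observations data model schema or its web-service interface; site and variable names are placeholders.

    ```python
    import sqlite3

    # Minimal stand-in for an observations data model: one row per observation,
    # linked to site and variable metadata. Not the actual CUAHSI ODM schema.

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE sites (site_id INTEGER PRIMARY KEY, site_code TEXT, latitude REAL, longitude REAL);
    CREATE TABLE variables (variable_id INTEGER PRIMARY KEY, variable_name TEXT, units TEXT);
    CREATE TABLE observations (
        value_id INTEGER PRIMARY KEY,
        site_id INTEGER REFERENCES sites(site_id),
        variable_id INTEGER REFERENCES variables(variable_id),
        datetime_utc TEXT,
        data_value REAL,
        quality_code TEXT
    );
    """)
    con.execute("INSERT INTO sites VALUES (1, 'ExampleCreek', 41.74, -111.80)")
    con.execute("INSERT INTO variables VALUES (1, 'discharge', 'm3/s')")
    con.executemany(
        "INSERT INTO observations VALUES (?, 1, 1, ?, ?, 'ok')",
        [(1, "2006-07-01T00:00Z", 1.2), (2, "2006-07-01T01:00Z", 1.4)],
    )

    # Cross-dimension retrieval: every discharge value at a named site, with units.
    rows = con.execute("""
        SELECT o.datetime_utc, o.data_value, v.units
        FROM observations o JOIN sites s USING (site_id) JOIN variables v USING (variable_id)
        WHERE s.site_code = 'ExampleCreek' AND v.variable_name = 'discharge'
    """).fetchall()
    print(rows)
    ```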

  11. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation measures, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane but critical step in the application workflows. Translation steps can introduce errors and misrepresentations of data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the Parflow integrated hydrologic model to demonstrate the impact of translation-tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community-approved collection of data transformation components. The components should be self-contained composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
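
    One way to read the design described above is as a chain of tiny, self-contained translation functions composed into a single streaming pass, so each unit can be unit-tested on its own and the composition avoids writing intermediate files. The sketch below is generic; the actual WRF-to-ParFlow adaptors involve specific variable names, grids and unit conventions not shown here.

    ```python
    # Generic sketch of composable translation units applied in one streaming pass.
    # Each unit is a small pure function over a record; compose() chains them so
    # records flow through without intermediate files. Field names are placeholders.

    def subset_time(record, start="2010-01-01", end="2010-12-31"):
        return record if start <= record["time"] <= end else None

    def kelvin_to_celsius(record):
        return dict(record, temperature=record["temperature"] - 273.15)

    def compose(*units):
        def pipeline(records):
            for rec in records:
                for unit in units:
                    rec = unit(rec)
                    if rec is None:          # dropped by a subsetting unit
                        break
                else:
                    yield rec
        return pipeline

    translate = compose(subset_time, kelvin_to_celsius)
    raw = [{"time": "2010-06-01", "temperature": 298.2},
           {"time": "2012-01-01", "temperature": 280.0}]
    print(list(translate(raw)))   # only the 2010 record, now in degrees Celsius
    ```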

  12. Model and observed seismicity represented in a two dimensional space

    Directory of Open Access Journals (Sweden)

    M. Caputo

    1976-06-01

    In recent years theoretical seismology has introduced some formulae relating the magnitude and the seismic moment of earthquakes to the size of the fault and the stress drop which generated the earthquake. In the present paper we introduce a model for the statistics of earthquakes based on these formulae. The model gives formulae which show internal consistency and are also confirmed by observations. For intermediate magnitudes the formulae also reproduce the linear trend of the statistics of magnitude and moment observed in all the seismic regions of the world. This linear trend changes into a curve with increasing slope for large magnitudes and moments. When a catalogue of the magnitudes and/or the seismic moments of the earthquakes of a seismic region is available, the model allows one to estimate the maximum magnitude possible in the region.
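
    For readers who want the flavour of the magnitude-moment relations the abstract refers to, the snippet below evaluates the standard modern forms: the seismic moment M0 = mu * A * D from rigidity, fault area and average slip, and the moment magnitude Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m. These are today's conventional expressions, not necessarily the exact formulae used in the 1976 paper.

    ```python
    import math

    # Standard present-day relations (not necessarily those of the 1976 paper):
    #   seismic moment   M0 = mu * A * D              [N·m]
    #   moment magnitude Mw = (2/3) * (log10(M0) - 9.1)

    def seismic_moment(rigidity_pa, fault_area_m2, avg_slip_m):
        return rigidity_pa * fault_area_m2 * avg_slip_m

    def moment_magnitude(m0_newton_metre):
        return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

    # Example: a 20 km x 10 km fault, 1 m average slip, crustal rigidity 3e10 Pa.
    m0 = seismic_moment(3.0e10, 20e3 * 10e3, 1.0)
    print(f"M0 = {m0:.2e} N·m, Mw = {moment_magnitude(m0):.2f}")   # Mw about 6.4
    ```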

  13. 3D-modeling of the spine using EOS imaging system: Inter-reader reproducibility and reliability.

    Science.gov (United States)

    Rehm, Johannes; Germann, Thomas; Akbar, Michael; Pepke, Wojciech; Kauczor, Hans-Ulrich; Weber, Marc-André; Spira, Daniel

    2017-01-01

    To retrospectively assess the interreader reproducibility and reliability of EOS 3D full spine reconstructions in patients with adolescent idiopathic scoliosis (AIS). 73 patients with a mean age of 17 years and moderate AIS (median Cobb angle 18.2°) underwent low-dose standing biplanar radiography with EOS. Two independent readers performed 3D reconstructions of the spine with the "full-spine" method, adjusting the bone contour of every thoracic and lumbar vertebra (Th1-L5). Interreader reproducibility was assessed regarding rotation of every single vertebra in the coronal (i.e. frontal), sagittal (i.e. lateral), and axial plane, T1/T12 kyphosis, T4/T12 kyphosis, L1/L5 lordosis, L1/S1 lordosis and pelvic parameters. Radiation exposure, scan time and 3D reconstruction time were recorded. Intraclass correlation (ICC) ranged between 0.83 and 0.98 for frontal vertebral rotation, between 0.94 and 0.99 for lateral vertebral rotation and between 0.51 and 0.88 for axial vertebral rotation. ICC was 0.92 for T1/T12 kyphosis, 0.95 for T4/T12 kyphosis, 0.90 for L1/L5 lordosis, 0.85 for L1/S1 lordosis, 0.97 for pelvic incidence, 0.96 for sacral slope, 0.98 for sagittal pelvic tilt and 0.94 for lateral pelvic tilt. The mean time for reconstruction was 14.9 minutes (reader 1: 14.6 minutes, reader 2: 15.2 minutes). 3D angle measurement of vertebral rotation proved to be reliable and was performed in an acceptable reconstruction time. Interreader reproducibility of axial rotation was limited to some degree in the upper and middle thoracic spine due to the obtuse angulation of the pedicles and the processi spinosi in the frontal view, somewhat complicating their delineation.

  14. Measurement of cerebral blood flow by intravenous xenon-133 technique and a mobile system. Reproducibility using the Obrist model compared to total curve analysis

    DEFF Research Database (Denmark)

    Schroeder, T; Holstein, P; Lassen, N A

    1986-01-01

    The recent development of a mobile 10 detector unit, using i.v. Xenon-133 technique, has made it possible to perform repeated bedside measurements of cerebral blood flow (CBF). Test-retest studies were carried out in 38 atherosclerotic subjects, in order to evaluate the reproducibility of CBF level ... was considerably more reproducible than CBF level. Using a single detector instead of five regional values averaged as the hemispheric flow increased the standard deviation of CBF level by 10-20%, while the variation in asymmetry was doubled. In optimal measuring conditions the two models revealed no significant differences, but in low flow situations the artifact model yielded significantly more stable results. The present apparatus, equipped with 3-5 detectors covering each hemisphere, offers the opportunity of performing serial CBF measurements in situations not otherwise feasible.

  15. Reproducibility study of [18F]FPP(RGD)2 uptake in murine models of human tumor xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Edwin; Liu, Shuangdong; Chin, Frederick; Cheng, Zhen [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Gowrishankar, Gayatri; Yaghoubi, Shahriar [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Wedgeworth, James Patrick [Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Berndorff, Dietmar; Gekeler, Volker [Bayer Schering Pharma AG, Global Drug Discovery, Berlin (Germany); Gambhir, Sanjiv S. [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Canary Center at Stanford for Cancer Early Detection, Nuclear Medicine, Departments of Radiology and Bioengineering, Molecular Imaging Program at Stanford, Stanford, CA (United States)

    2011-04-15

    An 18F-labeled PEGylated arginine-glycine-aspartic acid (RGD) dimer, [18F]FPP(RGD)2, has been used to image tumor αvβ3 integrin levels in preclinical and clinical studies. Serial positron emission tomography (PET) studies may be useful for monitoring antiangiogenic therapy response or for drug screening; however, the reproducibility of serial scans has not been determined for this PET probe. The purpose of this study was to determine the reproducibility of the integrin αvβ3-targeted PET probe, [18F]FPP(RGD)2, using small animal PET. Human HCT116 colon cancer xenografts were implanted into nude mice (n = 12) in the breast and scapular region and grown to mean diameters of 5-15 mm for approximately 2.5 weeks. A 3-min acquisition was performed on a small animal PET scanner approximately 1 h after administration of [18F]FPP(RGD)2 (1.9-3.8 MBq, 50-100 µCi) via the tail vein. A second small animal PET scan was performed approximately 6 h later after reinjection of the probe to assess for reproducibility. Images were analyzed by drawing an ellipsoidal region of interest (ROI) around the tumor xenograft activity. Percentage injected dose per gram (%ID/g) values were calculated from the mean or maximum activity in the ROIs. Coefficients of variation and differences in %ID/g values between studies from the same day were calculated to determine the reproducibility. The coefficient of variation (mean ± SD) for %IDmean/g and %IDmax/g values between [18F]FPP(RGD)2 small animal PET scans performed 6 h apart on the same day were 11.1 ± 7.6% and 10.4 ± 9.3%, respectively. The corresponding differences in %IDmean/g and %IDmax/g values between scans were -0.025 ± 0.067 and -0.039 ± 0.426. Immunofluorescence studies revealed a direct relationship between extent of αvβ3 integrin expression in tumors and tumor vasculature
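
    The reproducibility metric used above is essentially the coefficient of variation of paired %ID/g values from same-day scans; the snippet below computes it for a few made-up tumour ROI pairs, assuming each scan has already been converted to %ID/g.

    ```python
    import numpy as np

    # Test-retest coefficient of variation for paired %ID/g tumour values
    # (made-up numbers; scan-to-%ID/g conversion is assumed already done).

    scan1 = np.array([3.1, 2.4, 4.0, 2.9, 3.6])   # %ID/g, first scan
    scan2 = np.array([3.4, 2.2, 3.7, 3.1, 3.5])   # %ID/g, ~6 h later

    pair_means = (scan1 + scan2) / 2.0
    pair_sds = np.abs(scan1 - scan2) / np.sqrt(2.0)   # SD of a two-value pair
    cov_percent = 100.0 * np.mean(pair_sds / pair_means)
    mean_diff = np.mean(scan2 - scan1)
    print(f"mean coefficient of variation {cov_percent:.1f}%, "
          f"mean difference {mean_diff:+.3f} %ID/g")
    ```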

  16. Development and reproducibility evaluation of a Monte Carlo-based standard LINAC model for quality assurance of multi-institutional clinical trials.

    Science.gov (United States)

    Usmani, Muhammad Nauman; Takegawa, Hideki; Takashina, Masaaki; Numasaki, Hodaka; Suga, Masaki; Anetai, Yusuke; Kurosu, Keita; Koizumi, Masahiko; Teshima, Teruki

    2014-11-01

    Technical developments in radiotherapy (RT) have created a need for systematic quality assurance (QA) to ensure that clinical institutions deliver prescribed radiation doses consistent with the requirements of clinical protocols. For QA, an ideal dose verification system should be independent of the treatment-planning system (TPS). This paper describes the development and reproducibility evaluation of a Monte Carlo (MC)-based standard LINAC model as a preliminary requirement for independent verification of dose distributions. The BEAMnrc MC code is used for characterization of the 6-, 10- and 15-MV photon beams for a wide range of field sizes. The modeling of the LINAC head components is based on the specifications provided by the manufacturer. MC dose distributions are tuned to match Varian Golden Beam Data (GBD). For reproducibility evaluation, calculated beam data is compared with beam data measured at individual institutions. For all energies and field sizes, the MC and GBD agreed to within 1.0% for percentage depth doses (PDDs), 1.5% for beam profiles and 1.2% for total scatter factors (Scps.). Reproducibility evaluation showed that the maximum average local differences were 1.3% and 2.5% for PDDs and beam profiles, respectively. MC and institutions' mean Scps agreed to within 2.0%. An MC-based standard LINAC model developed to independently verify dose distributions for QA of multi-institutional clinical trials and routine clinical practice has proven to be highly accurate and reproducible and can thus help ensure that prescribed doses delivered are consistent with the requirements of clinical protocols. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
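
    The agreement figures quoted above come down to point-by-point local percentage differences between calculated and measured curves; a minimal version of that check, on placeholder percentage-depth-dose arrays, is sketched below. Real curves would come from BEAMnrc output and water-tank measurements.

    ```python
    import numpy as np

    # Local percentage difference between a Monte Carlo PDD curve and a measured
    # one at matching depths (both arrays below are synthetic placeholders).

    depth_cm = np.linspace(1.5, 30.0, 58)
    pdd_measured = 100.0 * np.exp(-0.045 * (depth_cm - 1.5))      # fake reference curve
    pdd_mc = pdd_measured * (1.0 + 0.004 * np.sin(depth_cm))      # fake MC curve

    local_diff_percent = 100.0 * (pdd_mc - pdd_measured) / pdd_measured
    print(f"max local difference: {np.max(np.abs(local_diff_percent)):.2f}%")
    ```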

  17. Confronting Cepheids Models with Interferometric Observations

    Science.gov (United States)

    Nardetto, N.

    In recent years, some issues concerning Cepheids have been resolved, based on observations and modeling. However, as usual, new difficulties arise. The link between the dynamical structure of Cepheid atmospheres and the distance scale calibration in the universe is now clearly established. To support observations, we currently need fully consistent hydrodynamical models, including pulsation and evolutionary theories, convective energy transport, adaptive numerical meshes, and a refined calculation of the radiative transfer within the pulsating atmosphere and in the expected circumstellar envelope (hereafter CSE). Confronting such models with observations (spectral line profiles, spatial and spectral visibility curves) will make it possible to resolve and/or sharpen subtle questions concerning (1) the limb darkening, (2) the dynamical structure of Cepheid atmospheres, and (3) the expected interaction between the atmosphere and the CSE, and (4) will bring new insights into determining the fundamental parameters of Cepheids. All these physical quantities are, furthermore, expected to be linked to the pulsation period of Cepheids. From these studies, it will be possible to paint a glowing picture of all Cepheids within the instability strip, allowing an unprecedented calibration of the period-luminosity relation (hereafter PL relation) and leading to new insights in the fields of extragalactic distance scales and cosmology.

  18. Observation and modelling of urban dew

    Science.gov (United States)

    Richards, Katrina

    Despite its relevance to many aspects of urban climate and to several practical questions, urban dew has largely been ignored. Here, simple observations, an out-of-doors scale model, and numerical simulation are used to investigate patterns of dewfall and surface moisture (dew + guttation) in urban environments. Observations and modelling were undertaken in Vancouver, B.C., primarily during the summers of 1993 and 1996. Surveys at several scales (0.02-25 km) show that the main controls on dew are weather, location and site configuration (geometry and surface materials). Weather effects are discussed using an empirical factor, FW. Maximum dew accumulation (up to ~0.2 mm per night) is seen on nights with moist air and high FW, i.e., cloudless conditions with light winds. Favoured sites are those with a high sky view factor (ψsky) and surfaces which cool rapidly after sunset, e.g., grass and well-insulated roofs. A 1/8-scale model is designed, constructed, and run at an out-of-doors site to study dew patterns in an urban residential landscape consisting of house lots, a street and an open grassed park. The Internal Thermal Mass (ITM) approach is used to scale the thermal inertia of buildings. The model is validated using data from full-scale sites in Vancouver. Patterns in the model agree with those seen at full scale, i.e., dew distribution is governed by weather, site geometry and substrate conditions. A correlation is shown between ψsky and surface moisture accumulation. The feasibility of using a numerical model to simulate urban dew is investigated using a modified version of a rural dew model. Results for simple isolated surfaces (a deciduous tree leaf and an asphalt shingle roof) show promise, especially for built surfaces.

  19. Modeling Heliospheric Interface: Observational and Theoretical Challenges

    Science.gov (United States)

    Pogorelov, N.; Heerikhuisen, J.; Borovikov, S.; Zank, G.

    2008-12-01

    Observational data provided by the Voyager 1 and Voyager 2 spacecraft ahead of the heliospheric termination shock (TS) and in the heliosheath require a considerable reassessment of theoretical models of the solar wind (SW) interaction with the magnetized local interstellar medium (LISM). Contemporary models, although sophisticated enough to take into account kinetic processes accompanying charge exchange between ions and atoms and to address the coupling of the interstellar and interplanetary magnetic fields (ISMF and IMF) at the heliospheric interface, are still unable to analyze the effect of non-thermal pick-up ions (PUIs) in the heliosheath. The presence of PUIs undermines the assumption of a Maxwellian distribution of the SW ions. We discuss ways to improve physical models in this respect. The TS asymmetry observed by the Voyagers can be attributed to the combination of the 3D, time-dependent behavior of the SW and the action of the ISMF. It is clear, however, that the ISMF alone can account for a TS asymmetry of about 10 AU only if it is unexpectedly strong (greater than 4 microgauss). We analyze the consequences of such magnetic fields for the deflection of neutral hydrogen in the inner heliosphere from its original direction in the unperturbed LISM. We also discuss the conditions for the 2-3 kHz radio emission, which is believed to be generated in the outer heliosheath beyond the heliopause, and analyze the possible location of radio-emission sources under the assumption of a strong magnetic field. The quality of the physical model becomes crucial when we need to address modern observational and theoretical challenges. We compare the plasma, neutral-particle, and magnetic-field distributions obtained with our MHD-kinetic and 5-fluid models. The transport of neutral particles is treated kinetically in the former and by a multiple neutral-fluid approach in the latter. We also investigate the distribution of magnetic field in the inner heliosheath for large angles between the Sun

  20. Spatio-temporal reproducibility of the microbial food web structure associated with the change in temperature: Long-term observations in the Adriatic Sea

    Science.gov (United States)

    Šolić, Mladen; Grbec, Branka; Matić, Frano; Šantić, Danijela; Šestanović, Stefanija; Ninčević Gladan, Živana; Bojanić, Natalia; Ordulj, Marin; Jozić, Slaven; Vrdoljak, Ana

    2018-02-01

    Global and atmospheric climate change is altering the thermal conditions in the Adriatic Sea and, consequently, the marine ecosystem. Along the eastern Adriatic coast, sea surface temperature (SST) increased by an average of 1.03 °C during the period from 1979 to 2015, while in the recent period, starting from 2008, a strong, almost linear upward trend of 0.013 °C/month was noted. Being mainly oligotrophic, the middle Adriatic Sea is characterized by the important role played by the microbial food web in the production and transfer of biomass and energy towards higher trophic levels. It is very important to understand the effect of warming on microbial communities, since small temperature increases in surface seawater can greatly modify the microbial role in the global carbon cycle. In this study, the Self-Organizing Map (SOM) procedure was used to analyse the time series of a number of microbial parameters at two stations with different trophic status in the central Adriatic Sea. The results show that responses of the microbial food web (MFW) structure to temperature changes are reproducible in time. Furthermore, qualitatively similar changes in the structure of the MFW occurred regardless of the trophic status. The rise in temperature was associated with: (1) the increasing importance of microbial heterotrophic activities (increased bacterial growth and increased abundance of bacterial predators, particularly heterotrophic nanoflagellates) and (2) the increasing importance of autotrophic picoplankton (APP) in the MFW.
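
    The record names the Self-Organizing Map (SOM) as the classification tool for these microbial time series. A minimal, hypothetical sketch of that kind of analysis, using the third-party MiniSom package and an invented matrix of standardized parameters (rows = sampling dates, columns = variables such as temperature, bacterial abundance and heterotrophic nanoflagellate abundance), might look as follows; the map size and training settings are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from minisom import MiniSom  # third-party package: pip install minisom

# Invented data: 120 monthly samples x 5 standardized microbial/environmental variables
rng = np.random.default_rng(0)
data = rng.standard_normal((120, 5))

som = MiniSom(x=4, y=3, input_len=data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(data)
som.train_random(data, num_iteration=5000)

# Assign each sample to its best-matching unit; samples mapped to the same node share a
# similar microbial food-web state, which can then be related to the temperature record.
bmu = [som.winner(sample) for sample in data]
print(bmu[:5])
```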

  1. Variation of plasmapause location during magnetic storms: observations and modeling

    Science.gov (United States)

    Liu, W.

    2017-12-01

    This paper investigates the dynamic evolution of the plasmapause during magnetic storms based on in situ observations and empirical modeling results. Superposed epoch analysis is performed on the plasmapause location identified from THEMIS in situ measurements during 61 magnetic storms from 2009 to 2013. The evolution of the plasmapause is generally consistent with the theory of plasmapause erosion and refilling. From multi-spacecraft in situ measurements, we are able to directly calculate the plasmapause radial velocity, Vpp. It is found that the radial velocity is on average earthward during the main phase and turns outward during the recovery phase. The empirical plasmapause model by Liu et al. [2015] is further utilized to reproduce the plasmapause location during these 61 storms and to reveal the details of the evolution, such as the local-time dependence. It is shown that the expansion of the plasmapause starts first in the midnight sector at t0+1 hr, and subsequently on the dawnside at t0+4 hr, the dayside at t0+8 hr and the duskside at t0+11 hr, where t0 corresponds to the time of Dst minimum. The average Vpp quantified from the modeling results is up to 0.17 RE/hr earthward in the main phase and 0.08 RE/hr outward in the recovery phase. The knowledge of the dynamic evolution of the plasmapause provided in this paper is valuable for understanding the dynamics of the inner magnetosphere during magnetic storms.
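
    Superposed epoch analysis keyed to the time of Dst minimum is central to the result above. A minimal sketch of the idea with invented plasmapause time series is given below; each storm's series is shifted to epoch time relative to its own Dst minimum, interpolated onto a common grid, and averaged, after which a radial velocity follows from the time derivative:

```python
import numpy as np

def superposed_epoch(times_list, values_list, t0_list, epoch_grid):
    """Average several time series on a common epoch grid relative to each event's t0 (hours)."""
    stacked = []
    for t, v, t0 in zip(times_list, values_list, t0_list):
        epoch_t = np.asarray(t) - t0                       # hours relative to Dst minimum
        stacked.append(np.interp(epoch_grid, epoch_t, v))  # resample onto the common grid
    return np.mean(np.vstack(stacked), axis=0)

# Invented example: three storms, plasmapause radial distance Lpp (RE) versus time (hours)
epoch_grid = np.arange(-24.0, 49.0, 1.0)
times = [np.arange(0.0, 96.0, 1.0)] * 3
t0s = [30.0, 40.0, 35.0]  # hour of Dst minimum within each storm interval
lpp = [4.5 - 1.5 * np.exp(-((t - t0) / 12.0) ** 2) for t, t0 in zip(times, t0s)]

mean_lpp = superposed_epoch(times, lpp, t0s, epoch_grid)
vpp = np.gradient(mean_lpp, epoch_grid)   # RE per hour; negative = earthward motion
```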

  2. Dark energy observational evidence and theoretical models

    CERN Document Server

    Novosyadlyj, B; Shtanov, Yu; Zhuk, A

    2013-01-01

    The book elucidates the current state of the dark energy problem and presents the results of the authors, who work in this area. It describes the observational evidence for the existence of dark energy, the methods and results of constraining its parameters, the modeling of dark energy by scalar fields, space-times with extra spatial dimensions, especially Kaluza-Klein models, and braneworld models with a single extra dimension, as well as the problem of the positive definition of gravitational energy in General Relativity, energy conditions and the consequences of their violation in the presence of dark energy. This monograph is intended for science professionals, educators and graduate students specializing in general relativity, cosmology, field theory and particle physics.

  3. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2014-05-01

    The method of interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters is as follows: given bounds on the initial conditions of the system and on the set of admissible values for the vector of unknown parameters and inputs, the interval estimates of the system state variables must contain the actual state at every time over the whole considered time segment. Conditions for the design of interval observers for the considered class of systems are given: boundedness of the input and state, the existence of a majorizing function for the uncertainty vector of the system, Lipschitz continuity or boundedness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main design condition is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is also considered. In order to ensure cooperativity of the interval estimation error dynamics, a static coordinate transformation is proposed. The proposed algorithm is demonstrated by computer modelling of a biological reactor. Possible applications of such interval estimation systems include robust control, where various types of uncertainty in the system dynamics are assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.
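
    The key design condition stated above is cooperativity of the interval estimation error dynamics: if A - LC is Metzler (non-negative off-diagonal entries) and the unknown input is bracketed, the upper and lower observers enclose the true state. The discrete-time simulation below is a minimal sketch with an invented two-state linear system, not the biological-reactor model of the paper:

```python
import numpy as np

# Invented system: x' = A x + b + d(t), y = C x, with the unknown input d(t) in [d_lo, d_hi]
A = np.array([[-1.0, 0.5],
              [0.3, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.3]])      # A - L C = [[-1.5, 0.5], [0.0, -2.0]] is Metzler and Hurwitz
b = np.array([0.5, 1.0])
d_lo, d_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

dt, n = 0.01, 2000
x = np.array([1.0, 1.0])                                   # true (unknown) state
x_lo, x_hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])    # initial interval containing x

for k in range(n):
    d = 0.1 * np.sin(0.05 * k) * np.ones(2)                # unknown disturbance within the bounds
    y = C @ x
    x    = x    + dt * (A @ x    + b + d)
    x_lo = x_lo + dt * (A @ x_lo + b + d_lo + L @ (y - C @ x_lo))
    x_hi = x_hi + dt * (A @ x_hi + b + d_hi + L @ (y - C @ x_hi))

assert np.all(x_lo <= x + 1e-9) and np.all(x <= x_hi + 1e-9)  # interval encloses the true state
```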

  4. Pangea breakup and northward drift of the Indian subcontinent reproduced by a numerical model of mantle convection.

    Science.gov (United States)

    Yoshida, Masaki; Hamano, Yozo

    2015-02-12

    Since around 200 Ma, the most notable event in the process of the breakup of Pangea has been the high speed (up to 20 cm yr⁻¹) of the northward drift of the Indian subcontinent. Our numerical simulations of 3-D spherical mantle convection approximately reproduced the process of continental drift from the breakup of Pangea at 200 Ma to the present-day continental distribution. These simulations revealed that a major factor in the northward drift of the Indian subcontinent was the large-scale cold mantle downwelling that developed spontaneously in the North Tethys Ocean, attributed to the overall shape of Pangea. The strong lateral mantle flow caused by the high-temperature anomaly beneath Pangea, due to the thermal insulation effect, enhanced the acceleration of the Indian subcontinent during the early stage of the Pangea breakup. The large-scale hot upwelling plumes from the lower mantle, initially located under Africa, might have contributed to the formation of the large-scale cold mantle downwelling in the North Tethys Ocean.

  5. The Proximal Medial Sural Nerve Biopsy Model: A Standardised and Reproducible Baseline Clinical Model for the Translational Evaluation of Bioengineered Nerve Guides

    Directory of Open Access Journals (Sweden)

    Ahmet Bozkurt

    2014-01-01

    Autologous nerve transplantation (ANT) is the clinical gold standard for the reconstruction of peripheral nerve defects. A large number of bioengineered nerve guides have been tested under laboratory conditions as an alternative to the ANT. The step from experimental studies to the implementation of the device in the clinical setting is often substantial and the outcome is unpredictable. This is mainly linked to the heterogeneity of clinical peripheral nerve injuries, which is very different from standardized animal studies. In search of a reproducible human model for the implantation of bioengineered nerve guides, we propose the reconstruction of sural nerve defects after routine nerve biopsy as a first or baseline study. Our concept uses the medial sural nerve of patients undergoing diagnostic nerve biopsy (≥2 cm). The biopsy-induced nerve gap was immediately reconstructed by implantation of the novel microstructured nerve guide, Neuromaix, as part of an ongoing first-in-human study. Here we present (i) a detailed list of inclusion and exclusion criteria, (ii) a detailed description of the surgical procedure, and (iii) a follow-up concept with multimodal sensory evaluation techniques. The proximal medial sural nerve biopsy model can serve as a preliminary or baseline nerve lesion model. In a subsequent step, newly developed nerve guides could be tested in more unpredictable and challenging clinical peripheral nerve lesions (e.g., following trauma), which have reduced comparability due to the different nature of the injuries (e.g., site of injury and length of nerve gap).

  6. Reproducibility in Seismic Imaging

    Directory of Open Access Journals (Sweden)

    González-Verdejo O.

    2012-04-01

    Within the field of exploration seismology, there is interest at the national level in integrating reproducibility into applied, educational and research activities related to seismic processing and imaging. This reproducibility implies the description and organization of the elements involved in numerical experiments, so that a researcher, teacher or student can study, verify, repeat, and modify them independently. In this work, we document and adapt reproducibility in seismic processing and imaging in order to spread this concept and its benefits, and to encourage the use of open source software in this area within our academic and professional environment. We present an enhanced seismic imaging example, of interest in both academic and professional environments, using Mexican seismic data. As a result of this research, we show that it is possible to assimilate, adapt and transfer technology at low cost, using open source software and following a reproducible research scheme.

  7. Fitting a 3-D analytic model of the coronal mass ejection to observations

    Science.gov (United States)

    Gibson, S. E.; Biesecker, D.; Fisher, R.; Howard, R. A.; Thompson, B. J.

    1997-01-01

    An analytic magnetohydrodynamic model is applied to observations of the time-dependent expulsion of 3D coronal mass ejections (CMEs) out of the solar corona. This model relates the white-light appearance of the CME to its internal magnetic field, which takes the form of a closed bubble, filled with a partly anchored, twisted magnetic flux rope and embedded in an otherwise open background field. The density distribution frozen into the expanding CME field is fully 3D, and can be integrated along the line of sight to reproduce observations of scattered white light. The model is able to reproduce the three conspicuous features often associated with CMEs as observed with white-light coronagraphs: a surrounding high-density region, an internal low-density cavity, and a high-density core. The model also describes the self-similar radial expansion of these structures. By varying the model parameters, the model can be fitted directly to observations of CMEs. It is shown how the model can quantitatively match the polarized-brightness contrast of a dark cavity emerging through the lower corona, as observed by the HAO Mauna Loa K-coronameter, to within the noise level of the data.
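
    The bridge between the model's three-dimensional density field and a coronagraph image is a line-of-sight integral of electron density weighted by the Thomson-scattering efficiency. Ignoring the scattering geometry entirely, a minimal sketch of the integration step, with an invented spherically symmetric background plus a dense shell standing in for the CME front, is:

```python
import numpy as np

def density_model(X, Y, Z, shell_r=3.0, shell_w=0.2):
    """Invented electron density: a falling background corona plus a dense shell (the 'CME' front)."""
    r = np.clip(np.sqrt(X**2 + Y**2 + Z**2), 1.0, None)    # treat everything inside 1 Rsun as r = 1
    background = 1e8 * r**-2.5
    shell = 5e7 * np.exp(-((r - shell_r) / shell_w) ** 2)
    return background + shell

# Plane-of-sky grid (solar radii) and samples along each line of sight (parallel to z)
x = y = np.linspace(-5.0, 5.0, 101)
z = np.linspace(-10.0, 10.0, 201)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

# Synthetic "brightness": straight integral of n_e along the line of sight.  A real calculation
# would weight n_e by the Thomson-scattering function of the scattering angle at each point.
image = density_model(X, Y, Z).sum(axis=2) * (z[1] - z[0])
```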

  8. Lagrangian Observations and Modeling of Marine Larvae

    Science.gov (United States)

    Paris, Claire B.; Irisson, Jean-Olivier

    2017-04-01

    Just within the past two decades, studies on the early-life history stages of marine organisms have led to new paradigms in population dynamics. Unlike passive plant seeds that are transported by the wind or by animals, marine larvae have motor and sensory capabilities. As a result, marine larvae have a tremendous capacity to actively influence their dispersal. This is continuously revealed as we develop new techniques to observe larvae in their natural environment and begin to understand their ability to detect cues throughout ontogeny, process the information, and use it to ride ocean currents and navigate their way back home, or to a place like home. We present innovative in situ and numerical modeling approaches developed to understand the underlying mechanisms of larval transport in the ocean. We describe a novel concept of a Lagrangian platform, the Drifting In Situ Chamber (DISC), designed to observe and quantify complex larval behaviors and their interactions with the pelagic environment. We give a brief history of larval ecology research with the DISC, showing that swimming is directional in most species, guided by cues as diverse as the position of the sun or the underwater soundscape, and even that (unlike humans!) larvae orient better and swim faster when moving as a group. The observed Lagrangian behaviors of individual larvae are directly implemented in the Connectivity Modeling System (CMS), an open source Lagrangian tracking application. Simulations help demonstrate the impact that larval behavior has compared to passive Lagrangian trajectories. These methodologies are already the basis of exciting findings and are promising tools for documenting and simulating the behavior of other small pelagic organisms, forecasting their migration in a changing ocean.
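
    The Connectivity Modeling System is a full Lagrangian tracking framework; as a toy illustration only of the contrast drawn above between passive drift and drift with a directed swimming component, a two-dimensional Euler-step advection sketch with an invented velocity field is:

```python
import numpy as np

def advect(x0, y0, u, v, swim=(0.0, 0.0), dt=3600.0, steps=240):
    """Advect a particle in the currents (u, v), optionally adding a constant swimming velocity (m/s)."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        ux, vy = u(xs[-1], ys[-1]), v(xs[-1], ys[-1])
        xs.append(xs[-1] + (ux + swim[0]) * dt)
        ys.append(ys[-1] + (vy + swim[1]) * dt)
    return np.array(xs), np.array(ys)

# Invented, steady eddy-like flow field (m/s); coordinates in metres
u = lambda x, y: -0.2 * np.sin(y / 5e4)
v = lambda x, y:  0.2 * np.sin(x / 5e4)

passive_x, passive_y = advect(2e4, 1e4, u, v)                      # passive Lagrangian trajectory
larva_x, larva_y = advect(2e4, 1e4, u, v, swim=(0.05, 0.0))        # larva swimming 5 cm/s toward +x
```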

  9. WhatsApp Messenger is useful and reproducible in the assessment of tibial plateau fractures: inter- and intra-observer agreement study.

    Science.gov (United States)

    Giordano, Vincenzo; Koch, Hilton Augusto; Mendes, Carlos Henrique; Bergamin, André; de Souza, Felipe Serrão; do Amaral, Ney Pecegueiro

    2015-02-01

    The aim of this study was to evaluate the inter- and intra-observer agreement in the initial diagnosis and classification, by means of plain radiographs and CT scans, of tibial plateau fractures photographed and sent via WhatsApp Messenger. The increasing popularity of smartphones has driven the development of technology for data transmission and imaging and generated a growing interest in the use of these devices as diagnostic tools. The emergence of WhatsApp Messenger technology, which is available for various platforms used by smartphones, has led to an improvement in the quality and resolution of images sent and received. The images (plain radiographs and CT scans) were obtained from 13 cases of tibial plateau fractures using the iPhone 5 (Apple Inc., Cupertino, CA, USA) and were sent to six observers via the WhatsApp Messenger application. The observers were asked to determine the standard deviation and type of injury, the classification according to the Schatzker and the Luo classification schemes, and whether the CT scan changed the classification. The six observers independently assessed the images on two separate occasions, 15 days apart. The inter- and intra-observer agreement for both periods of the study ranged from excellent to perfect (agreement coefficients of 0.75 or higher) for the images assessed via WhatsApp Messenger. The authors now propose the systematic use of the application to facilitate faster documentation and obtaining the opinion of an experienced consultant when not on call. Finally, we think the use of the WhatsApp Messenger as an adjuvant tool could be broadened to other clinical centres to assess its viability in other skeletal and non-skeletal trauma situations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Science.gov (United States)

    Nada, Rania M; Maal, Thomas J J; Breuning, K Hero; Bergé, Stefaan J; Mostafa, Yehya A; Kuijpers-Jagtman, Anne Marie

    2011-02-09

    Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. 16 pairs of 3D CBCT models were constructed from pre and post treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the 2 models were calculated at 4 regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged between 0.12 to 0.19 mm at the 4 regions. Voxel based image registration on both zones could be considered as an accurate and a reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.
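
    The agreement metric reported here is the mean absolute distance between the superimposed models. A minimal sketch of that distance for two registered surface point clouds (invented arrays standing in for the CBCT models), using a nearest-neighbour search, is:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_distance(points_a, points_b):
    """Mean nearest-neighbour distance (mm) from surface A to surface B after registration."""
    dists, _ = cKDTree(points_b).query(points_a)
    return dists.mean()

# Invented registered surface models: N x 3 arrays of vertex coordinates in mm
rng = np.random.default_rng(1)
surface_t0 = rng.random((5000, 3)) * 50.0
surface_t1 = surface_t0 + rng.normal(scale=0.3, size=surface_t0.shape)   # small residual misfit

print("mean absolute distance: %.2f mm" % mean_absolute_distance(surface_t1, surface_t0))
```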

  11. Hyper-Resolution Global Land Surface Model at Regional-to-Local Scales with observed Groundwater data assimilation

    OpenAIRE

    Singh, Raj Shekhar

    2014-01-01

    Modeling groundwater is challenging: it is not readily visible and is difficult to measure, with limited sets of observations available. Even though groundwater models can reproduce water table and head variations, considerable drift in modeled land surface states can nonetheless result from partially known geologic structure, errors in the input forcing fields, and imperfect Land Surface Model (LSM) parameterizations. These models frequently have biased results that are very different from o...

  12. Apparent diffusion coefficient measurements in diffusion-weighted magnetic resonance imaging of the anterior mediastinum: inter-observer reproducibility of five different methods of region-of-interest positioning

    Energy Technology Data Exchange (ETDEWEB)

    Priola, Adriano Massimiliano; Priola, Sandro Massimo; Parlatano, Daniela; Gned, Dario; Veltri, Andrea [San Luigi Gonzaga University Hospital, Department of Diagnostic Imaging, Regione Gonzole 10, Orbassano, Torino (Italy); Giraudo, Maria Teresa [University of Torino, Department of Mathematics 'Giuseppe Peano', Torino (Italy); Giardino, Roberto; Ardissone, Francesco [San Luigi Gonzaga University Hospital, Department of Thoracic Surgery, Regione Gonzole 10, Orbassano, Torino (Italy); Ferrero, Bruno [San Luigi Gonzaga University Hospital, Department of Neurology, Regione Gonzole 10, Orbassano, Torino (Italy)

    2017-04-15

    To investigate the inter-reader reproducibility of five different region-of-interest (ROI) protocols for apparent diffusion coefficient (ADC) measurements in the anterior mediastinum. In eighty-one subjects, on ADC mapping, two readers measured the ADC using five methods of ROI positioning that encompassed either the entire tissue (whole tissue volume [WTV], three slices observer-defined [TSOD], single-slice [SS]) or more restricted areas (one small round ROI [OSR], multiple small round ROIs [MSR]). Inter-observer variability was assessed with the intraclass correlation coefficient (ICC), coefficient of variation (CoV), and Bland-Altman analysis. Nonparametric tests were performed to compare the ADC between ROI methods. The measurement time was recorded and compared between ROI methods. All methods showed excellent inter-reader agreement, with the best and worst reproducibility in WTV and OSR, respectively (ICC, 0.937/0.874; CoV, 7.3%/16.8%; limits of agreement, ±0.44/±0.77 × 10⁻³ mm²/s). ADC values of OSR and MSR were significantly lower compared to the other methods in both readers (p < 0.001). The SS and OSR methods required less measurement time (14 ± 2 s) compared to the others (p < 0.0001), while the WTV method required the longest measurement time (90 ± 56 and 77 ± 49 s for each reader) (p < 0.0001). All methods demonstrate excellent inter-observer reproducibility with the best agreement in WTV, although it requires the longest measurement time. (orig.)
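
    Inter-observer variability above is summarized with the intraclass correlation coefficient, the coefficient of variation and Bland-Altman limits of agreement. A minimal sketch of the latter two statistics for paired ADC readings from two observers (invented values, and a simplified per-pair CoV definition) is:

```python
import numpy as np

def bland_altman(reader1, reader2):
    """Bias and 95% limits of agreement between two readers' paired measurements."""
    diff = np.asarray(reader1, float) - np.asarray(reader2, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

def coefficient_of_variation(reader1, reader2):
    """Simplified within-subject CoV (%) for paired measurements."""
    r1, r2 = np.asarray(reader1, float), np.asarray(reader2, float)
    within_sd = np.abs(r1 - r2) / np.sqrt(2.0)
    return 100.0 * np.mean(within_sd / ((r1 + r2) / 2.0))

# Invented ADC values (x10^-3 mm^2/s) from two readers
adc_r1 = np.array([1.45, 1.62, 1.38, 1.71, 1.50])
adc_r2 = np.array([1.40, 1.66, 1.42, 1.65, 1.52])
print(bland_altman(adc_r1, adc_r2), coefficient_of_variation(adc_r1, adc_r2))
```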

  13. Repeatability and Reproducibility of Corneal Biometric Measurements Using the Visante Omni and a Rabbit Experimental Model of Post-Surgical Corneal Ectasia

    Science.gov (United States)

    Liu, Yu-Chi; Konstantopoulos, Aris; Riau, Andri K.; Bhayani, Raj; Lwin, Nyein C.; Teo, Ericia Pei Wen; Yam, Gary Hin Fai; Mehta, Jodhbir S.

    2015-01-01

    Purpose: To investigate the repeatability and reproducibility of the Visante Omni topography in obtaining topography measurements of rabbit corneas and to develop a post-surgical model of corneal ectasia. Methods: Eight rabbits were used to study the repeatability and reproducibility by assessing the intra- and interobserver bias and limits of agreement. Another nine rabbits, which underwent laser in situ keratomileusis (LASIK) of different dioptric (D) corrections, were used for the development of the ectasia model. All eyes were examined with the Visante Omni, and corneal ultrastructure was evaluated with transmission electron microscopy (TEM). Results: There was no significant intra- or interobserver difference for mean steep and flat keratometry (K) values of simulated K, anterior, and posterior elevation measurements. Eyes that underwent −5 D LASIK had a significant increase in mean amplitude of astigmatism and posterior surface elevation with time (significant P for trend), resulting in a model of corneal ectasia that was gradual in development and simulated the human condition. Translational Relevance: The results provide the foundations for the future evaluation of novel treatment modalities for post-surgical ectasia and keratoconus. PMID:25938004

  14. A Reliable and Reproducible Model for Assessing the Effect of Different Concentrations of α-Solanine on Rat Bone Marrow Mesenchymal Stem Cells

    Directory of Open Access Journals (Sweden)

    Adriana Ordóñez-Vásquez

    2017-01-01

    Alpha-solanine (α-solanine) is a glycoalkaloid present in potato (Solanum tuberosum). It has been of particular interest because of its toxicity and potential teratogenic effects that include abnormalities of the central nervous system, such as exencephaly, encephalocele, and anophthalmia. Various types of cell culture have been used as experimental models to determine the effect of α-solanine on cell physiology. The morphological changes in the mesenchymal stem cell upon exposure to α-solanine have not been established. This study aimed to describe a reliable and reproducible model for assessing the structural changes induced by exposure of mouse bone marrow mesenchymal stem cells (MSCs) to different concentrations of α-solanine for 24 h. The results demonstrate that nonlethal concentrations of α-solanine (2–6 μM) changed the morphology of the cells, including an increase in the number of nucleoli, suggesting elevated protein synthesis, and the formation of spicules. In addition, treatment with α-solanine reduced the number of adherent cells and the formation of colonies in culture. Immunophenotypic characterization and staining of MSCs are proposed as a reproducible method that allows description of cells exposed to the glycoalkaloid, α-solanine.

  15. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  16. A sensitive and reproducible in vivo imaging mouse model for evaluation of drugs against late-stage human African trypanosomiasis.

    Science.gov (United States)

    Burrell-Saward, Hollie; Rodgers, Jean; Bradley, Barbara; Croft, Simon L; Ward, Theresa H

    2015-02-01

    To optimize the Trypanosoma brucei brucei GVR35 VSL-2 bioluminescent strain as an innovative drug evaluation model for late-stage human African trypanosomiasis. An IVIS® Lumina II imaging system was used to detect bioluminescent T. b. brucei GVR35 parasites in mice to evaluate parasite localization and disease progression. Drug treatment was assessed using qualitative bioluminescence imaging and real-time quantitative PCR (qPCR). We have shown that drug dose-response can be evaluated using bioluminescence imaging and confirmed quantification of tissue parasite load using qPCR. The model was also able to detect drug relapse earlier than the traditional blood film detection and even in the absence of any detectable peripheral parasites. We have developed and optimized a new, efficient method to evaluate novel anti-trypanosomal drugs in vivo and reduce the current 180 day drug relapse experiment to a 90 day model. The non-invasive in vivo imaging model reduces the time required to assess preclinical efficacy of new anti-trypanosomal drugs. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Isokinetic eccentric exercise as a model to induce and reproduce pathophysiological alterations related to delayed onset muscle soreness

    DEFF Research Database (Denmark)

    Lund, Henrik; Vestergaard-Poulsen, P; Kanstrup, I.L.

    1998-01-01

    Physiological alterations following unaccustomed eccentric exercise in an isokinetic dynamometer of the right m. quadriceps until exhaustion were studied, in order to create a model in which the physiological responses to physiotherapy could be measured. In experiment I (exp. I), seven selected p...

  18. Synchronized mammalian cell culture: part II--population ensemble modeling and analysis for development of reproducible processes.

    Science.gov (United States)

    Jandt, Uwe; Barradas, Oscar Platas; Pörtner, Ralf; Zeng, An-Ping

    2015-01-01

    The consideration of inherent population inhomogeneities of mammalian cell cultures becomes increasingly important for systems biology study and for developing more stable and efficient processes. However, variations of cellular properties belonging to different sub-populations and their potential effects on cellular physiology and kinetics of culture productivity under bioproduction conditions have not yet been much in the focus of research. Culture heterogeneity is strongly determined by the advance of the cell cycle. The assignment of cell-cycle specific cellular variations to large-scale process conditions can be optimally determined based on the combination of (partially) synchronized cultivation under otherwise physiological conditions and subsequent population-resolved model adaptation. The first step has been achieved using the physical selection method of countercurrent flow centrifugal elutriation, recently established in our group for different mammalian cell lines which is presented in Part I of this paper series. In this second part, we demonstrate the successful adaptation and application of a cell-cycle dependent population balance ensemble model to describe and understand synchronized bioreactor cultivations performed with two model mammalian cell lines, AGE1.HNAAT and CHO-K1. Numerical adaptation of the model to experimental data allows for detection of phase-specific parameters and for determination of significant variations between different phases and different cell lines. It shows that special care must be taken with regard to the sampling frequency in such oscillation cultures to minimize phase shift (jitter) artifacts. Based on predictions of long-term oscillation behavior of a culture depending on its start conditions, optimal elutriation setup trade-offs between high cell yields and high synchronization efficiency are proposed. © 2014 American Institute of Chemical Engineers.

  19. Preserve specimens for reproducibility

    Czech Academy of Sciences Publication Activity Database

    Krell, F.-T.; Klimeš, Petr; Rocha, L. A.; Fikáček, M.; Miller, S. E.

    2016-01-01

    Vol. 539, No. 7628 (2016), p. 168 ISSN 0028-0836 Institutional support: RVO:60077344 Keywords: reproducibility * specimen * biodiversity Subject RIV: EH - Ecology, Behaviour Impact factor: 40.137, year: 2016 http://www.nature.com/nature/journal/v539/n7628/full/539168b.html

  20. Minimum Information about a Cardiac Electrophysiology Experiment (MICEE): Standardised Reporting for Model Reproducibility, Interoperability, and Data Sharing

    Science.gov (United States)

    Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P

    2011-01-01

    Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496

  1. Characterization of the Sahelian-Sudan rainfall based on observations and regional climate models

    Science.gov (United States)

    Salih, Abubakr A. M.; Elagib, Nadir Ahmed; Tjernström, Michael; Zhang, Qiong

    2018-04-01

    The African Sahel region is known to be highly vulnerable to climate variability and change. We analyze rainfall in the Sahelian Sudan in terms of the distribution of rain-days and amounts, and examine whether regional climate models can capture these rainfall features. Three regional models, namely the Regional Model (REMO), the Rossby Center Atmospheric Model (RCA) and the Regional Climate Model (RegCM4), are evaluated against gridded observations (Climate Research Unit, Tropical Rainfall Measuring Mission, and ERA-Interim reanalysis) and rain-gauge data from six arid and semi-arid weather stations across Sahelian Sudan over the period 1989 to 2008. Most of the observed rain-days are characterized by weak (0.1-1.0 mm/day) to moderate (> 1.0-10.0 mm/day) rainfall, with average frequencies of 18.5% and 48.0% of the total annual rain-days, respectively. Although very strong rainfall events (> 30.0 mm/day) occur rarely, they account for a large fraction of the total annual rainfall (28-42% across the stations). The performance of the models varies both spatially and temporally. RegCM4 most closely reproduces the observed annual rainfall cycle, especially for the more arid locations, but all three models fail to capture the strong rainfall events and hence underestimate their contribution to the total annual number of rain-days and rainfall amount. However, excessive moderate rainfall compensates for this underestimation in the models in an annual-average sense. The present study uncovers some of the models' limitations in skillfully reproducing the observed climate over dry regions; it will aid model users in recognizing the uncertainties in the model output and will help the climate and hydrological modeling communities improve their models.
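
    The analysis above rests on classifying each rain-day by intensity and computing both the frequency of each class and its contribution to the annual total. A minimal sketch with an invented daily record and the thresholds quoted above follows; the 10-30 mm/day "strong" class is an assumed intermediate bin, not taken from the record:

```python
import numpy as np

# Intensity classes (mm/day): weak, moderate, strong (assumed bin), very strong
bins = [(0.0, 1.0), (1.0, 10.0), (10.0, 30.0), (30.0, np.inf)]
labels = ["weak", "moderate", "strong", "very strong"]

rng = np.random.default_rng(42)
daily_rain = rng.gamma(shape=0.4, scale=8.0, size=365)     # invented daily rainfall record (mm)
rain_days = daily_rain[daily_rain >= 0.1]                  # rain-day threshold of 0.1 mm

for (lo, hi), label in zip(bins, labels):
    sel = (rain_days > lo) & (rain_days <= hi)
    freq = 100.0 * sel.sum() / rain_days.size                    # % of rain-days in this class
    contrib = 100.0 * rain_days[sel].sum() / rain_days.sum()     # % of the annual total
    print(f"{label:12s} {freq:5.1f}% of rain-days, {contrib:5.1f}% of annual rainfall")
```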

  2. An International Ki67 Reproducibility Study

    Science.gov (United States)

    2013-01-01

    Background: In breast cancer, immunohistochemical assessment of proliferation using the marker Ki67 has potential use in both research and clinical management. However, lack of consistency across laboratories has limited Ki67's value. A working group was assembled to devise a strategy to harmonize Ki67 analysis and increase scoring concordance. Toward that goal, we conducted a Ki67 reproducibility study. Methods: Eight laboratories received 100 breast cancer cases arranged into 1-mm core tissue microarrays, one set stained by the participating laboratory and one set stained by the central laboratory, both using antibody MIB-1. Each laboratory scored Ki67 as the percentage of positively stained invasive tumor cells using its own method. Six laboratories repeated scoring of 50 locally stained cases on 3 different days. Sources of variation were analyzed using random effects models with log2-transformed measurements. Reproducibility was quantified by the intraclass correlation coefficient (ICC), and the approximate two-sided 95% confidence intervals (CIs) for the true intraclass correlation coefficients in these experiments were provided. Results: Intralaboratory reproducibility was high (ICC = 0.94; 95% CI = 0.93 to 0.97). Interlaboratory reproducibility was only moderate (central staining: ICC = 0.71, 95% CI = 0.47 to 0.78; local staining: ICC = 0.59, 95% CI = 0.37 to 0.68). The geometric mean of Ki67 values for each laboratory across the 100 cases ranged from 7.1% to 23.9% with central staining and from 6.1% to 30.1% with local staining. Factors contributing to interlaboratory discordance included tumor region selection, counting method, and subjective assessment of staining positivity. Formal counting methods gave more consistent results than visual estimation. Conclusions: Substantial variability in Ki67 scoring was observed among some of the world's most experienced laboratories. Ki67 values and cutoffs for clinical decision-making cannot be transferred between laboratories without
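
    Reproducibility in the study above is quantified with intraclass correlation coefficients computed from random-effects models on log2-transformed Ki67 scores. A minimal sketch of a one-way random-effects ICC on such data (invented scores for a handful of cases and laboratories, not the study data) is:

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1); scores has shape (n_cases, n_raters)."""
    scores = np.asarray(scores, float)
    n, k = scores.shape
    case_means = scores.mean(axis=1)
    ms_between = k * np.sum((case_means - scores.mean()) ** 2) / (n - 1)
    ms_within = np.sum((scores - case_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented Ki67 percentages for 6 cases scored by 4 laboratories, log2-transformed as in the study
ki67 = np.array([[12, 14, 10, 15],
                 [30, 28, 35, 31],
                 [ 5,  6,  4,  7],
                 [22, 25, 19, 24],
                 [45, 40, 50, 43],
                 [ 8,  9,  7, 10]], dtype=float)
print("ICC =", round(icc_oneway(np.log2(ki67)), 3))
```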

  3. Reproducibility of haemodynamical simulations in a subject-specific stented aneurysm model--a report on the Virtual Intracranial Stenting Challenge 2007.

    Science.gov (United States)

    Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F

    2008-07-19

    This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. The results demonstrate the ability of current strategies in consistently quantifying the performance of three commercial intracranial stents, and contribute to reinforce the confidence in haemodynamical simulation, thus taking a step forward towards the introduction of simulation tools to support diagnostics and

  4. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Directory of Open Access Journals (Sweden)

    Rania M Nada

    Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. 16 pairs of 3D CBCT models were constructed from pre and post treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the 2 models were calculated at 4 regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged between 0.12 to 0.19 mm at the 4 regions. Voxel based image registration on both zones could be considered as an accurate and a reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.

  5. Models Constraints from Observations of Active Galaxies

    Science.gov (United States)

    Riffel, R.; Pastoriza, M. G.; Rodríguez-Ardila, A.; Dametto, N. Z.; Ruschel-Dutra, D.; Riffel, R. A.; Storchi-Bergmann, T.; Martins, L. P.; Mason, R.; Ho, L. C.; Palomar XD Team

    2015-08-01

    Studying the unresolved stellar content of galaxies generally involves disentangling the various components contributing to the spectral energy distribution (SED), and fitting a combination of simple stellar populations (SSPs) to derive information about age, metallicity, and star formation history. In the near-infrared (NIR, 0.85-2.5 μm), the thermally pulsing asymptotic giant branch (TP-AGB) phase - the last stage of the evolution of intermediate-mass (M ≲ 6 M⊙) stars - is a particularly important component of the SSP models. These stars can dominate the emission of stellar populations with ages ˜ 0.2-2 Gyr, being responsible for roughly half of the luminosity in the K band. In addition, when trying to describe the continuum observed in active galactic nuclei, the signatures of the central engine and from the dusty torus cannot be ignored. Over the past several years we have developed a method to disentangle these three components. Our synthesis shows significant differences between Seyfert 1 (Sy 1) and Seyfert 2 (Sy 2) galaxies. The central few hundred parsecs of our galaxy sample contain a substantial fraction of intermediate-age populations with a mean metallicity near solar. Two-dimensional mapping of the near-infrared stellar population of the nuclear region of active galaxies suggests that there is a spatial correlation between the intermediate-age stellar population and a partial ring of low stellar velocity dispersion (σ*). Such an age is consistent with a scenario in which the origin of the low-σ* rings is a past event which triggered an inflow of gas and formed stars which still keep the colder kinematics of the gas from which they have formed. We also discuss the fingerprints of features attributed to TP-AGB stars in the spectra of the nuclear regions of nearby galaxies.

  6. General Description of Fission Observables: GEF Model Code

    Science.gov (United States)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    consistent with the collective enhancement of the level density. The exchange of excitation energy and nucleons between the nascent fragments on the way from saddle to scission is estimated according to statistical mechanics. As a result, excitation energy and unpaired nucleons are predominantly transferred to the heavy fragment in the superfluid regime. This description reproduces some rather peculiar observed features of the prompt-neutron multiplicities and of the even-odd effect in fission-fragment Z distributions. For completeness, some conventional descriptions are used for calculating pre-equilibrium emission, fission probabilities and statistical emission of neutrons and gamma radiation from the excited fragments. Preference is given to simple models that can also be applied to exotic nuclei, rather than to more sophisticated models that need precise empirical input of nuclear properties, e.g. spectroscopic information. The approach reveals a high degree of regularity and provides a considerable insight into the physics of the fission process. Fission observables can be calculated with a precision that complies with the needs for applications in nuclear technology without specific adjustments to measured data of individual systems. The GEF executable runs out of the box with no need for entering any empirical data. This unique feature is of great importance, because the number of systems and energies of potential significance for fundamental and applied science can never all be measured. The relevance of the approach for examining the consistency of experimental results and for evaluating nuclear data is demonstrated.

  7. Should we trust models or observations

    International Nuclear Information System (INIS)

    Ellsaesser, H.W.

    1982-01-01

    Scientists and laymen alike already trust observational data more than theories; this is made explicit in all formalizations of the scientific method. It was demonstrated again during the Supersonic Transport (SST) controversy by the continued efforts to reconcile the computed effect of the 1961-62 nuclear test series on the ozone layer with the observational record. Scientists, caught in the focus of the political limelight, sometimes demonstrated their faith in the primacy of observations by studiously ignoring or dismissing as erroneous data at variance with the prevailing theoretical consensus, thereby stalling the theoretical modifications required to accommodate the observations. (author)

  8. Reproducibility of ultrasonic testing

    International Nuclear Information System (INIS)

    Lecomte, J.-C.; Thomas, Andre; Launay, J.-P.; Martin, Pierre

    The reproducibility of amplitude readings for both artificial and natural reflectors was studied for several combinations of instrument and search unit, all being of the same type. This study shows that in industrial inspection, if a range of standardized equipment is used, a margin of error of about 6 decibels has to be taken into account (confidence interval of 95%). This margin is about 4 to 5 dB for natural or artificial defects located in the central area and about 6 to 7 dB for artificial defects located on the back surface. This lack of reproducibility seems to be attributable first to the search unit and then to the instrument and operator. These results were confirmed by analysis of calibration data obtained from 250 tests performed by 25 operators under shop conditions. The margin of error was higher than the 6 dB obtained in the study [fr]

  9. Observations and Numerical Modeling of the Jovian Ribbon

    Science.gov (United States)

    Cosentino, R. G.; Simon, A.; Morales-Juberias, R.; Sayanagi, K. M.

    2015-01-01

    Multiple-wavelength observations made by the Hubble Space Telescope in early 2007 show the presence of a wavy, high-contrast feature in Jupiter's atmosphere near 30 degrees North. The "Jovian Ribbon," best seen at 410 nanometers, irregularly undulates in latitude and is time-variable in appearance. A meridional intensity-gradient algorithm was applied to the observations to track the Ribbon's contour. Spectral analysis of the contour revealed that the Ribbon's structure is a combination of several wavenumbers ranging from k = 8 to 40. The Ribbon is a dynamic structure whose dominant wavenumbers have been observed to vary in spectral power over a time period of one month. The presence of the Ribbon correlates with periods when the velocity of the westward jet at the same location is highest. We conducted numerical simulations to investigate the stability of westward jets of varying speed, vertical shear, and background static stability to different perturbations. A Ribbon-like morphology was best reproduced with a 35 m/s westward jet that decreases in amplitude for pressures greater than 700 hectopascals and a background static stability of N = 0.005 s⁻¹, perturbed by heat pulses constrained to latitudes south of 30 degrees North. Additionally, the simulated feature had wavenumbers that qualitatively matched observations and evolved throughout the simulation, reproducing the Jovian Ribbon's dynamic structure.
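
    Two processing steps are named above: a meridional intensity-gradient algorithm that extracts the Ribbon's latitude as a function of longitude, and a spectral decomposition of that contour into zonal wavenumbers. A minimal sketch of both steps on an invented brightness map (not HST data) is:

```python
import numpy as np

# Invented brightness map: rows = latitude (25-35 N), columns = longitude (0-360, 1-degree steps)
lat = np.linspace(25.0, 35.0, 101)
lon = np.arange(0.0, 360.0, 1.0)
ribbon_lat = 30.0 + 0.8 * np.sin(np.deg2rad(8 * lon)) + 0.4 * np.sin(np.deg2rad(15 * lon))
brightness = np.exp(-((lat[:, None] - ribbon_lat[None, :]) / 1.0) ** 2)   # bright band along the contour

# Step 1: track the contour as the latitude of the steepest meridional intensity gradient
grad = np.gradient(brightness, lat, axis=0)
contour = lat[np.argmax(np.abs(grad), axis=0)]

# Step 2: zonal wavenumber spectrum of the (mean-removed) contour
power = np.abs(np.fft.rfft(contour - contour.mean())) ** 2
wavenumbers = np.arange(power.size)                     # k = 0, 1, 2, ... planetary wavenumbers
print("dominant zonal wavenumbers:", wavenumbers[np.argsort(power)[::-1][:3]])
```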

  10. Retrospective Correction of Physiological Noise: Impact on Sensitivity, Specificity, and Reproducibility of Resting-State Functional Connectivity in a Reading Network Model.

    Science.gov (United States)

    Krishnamurthy, Venkatagiri; Krishnamurthy, Lisa C; Schwam, Dina M; Ealey, Ashley; Shin, Jaemin; Greenberg, Daphne; Morris, Robin D

    2018-03-01

    It is well accepted that physiological noise (PN) obscures the detection of neural fluctuations in resting-state functional connectivity (rsFC) magnetic resonance imaging. However, a clear consensus for an optimal PN correction (PNC) methodology and how it can impact the rsFC signal characteristics is still lacking. In this study, we probe the impact of three PNC methods: RETROICOR (Glover et al., 2000), ANATICOR (Jo et al., 2010), and RVTMBPM (Bianciardi et al., 2009). Using a reading network model, we systematically explore the effects of PNC optimization on sensitivity, specificity, and reproducibility of rsFC signals. In terms of specificity, ANATICOR was found to be effective in removing local white matter (WM) fluctuations and also resulted in aggressive removal of expected cortical-to-subcortical functional connections. The ability of RETROICOR to remove PN was equivalent to removal of simulated random PN such that it artificially inflated the connection strength, thereby decreasing sensitivity. RVTMBPM maintained specificity and sensitivity by balanced removal of vasodilatory PN and local WM nuisance edges. Another aspect of this work was exploring the effects of PNC on identifying reading group differences. Most PNC methods accounted for between-subject PN variability resulting in reduced intersession reproducibility. This effect facilitated the detection of the most consistent group differences. RVTMBPM was most effective in detecting significant group differences due to its inherent sensitivity to removing spatially structured and temporally repeating PN arising from dense vasculature. Finally, results suggest that combining all three PNC resulted in "overcorrection" by removing signal along with noise.
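
    All three correction schemes ultimately remove physiological fluctuations by regressing nuisance time courses out of each voxel's signal before connectivity is computed. The generic regression step (not any one of the named methods specifically) can be sketched as:

```python
import numpy as np

def regress_out(voxel_ts, nuisance):
    """Remove nuisance regressors from voxel time series by ordinary least squares.

    voxel_ts: (n_timepoints, n_voxels); nuisance: (n_timepoints, n_regressors), e.g. cardiac and
    respiratory phase regressors, local white-matter signal, or respiration-volume-per-time traces.
    """
    design = np.column_stack([np.ones(len(nuisance)), nuisance])   # include an intercept
    beta, *_ = np.linalg.lstsq(design, voxel_ts, rcond=None)
    return voxel_ts - design @ beta                                # residuals = cleaned data

# Invented data: 200 volumes, 1000 voxels, 8 physiological regressors
rng = np.random.default_rng(7)
bold = rng.standard_normal((200, 1000))
phys = rng.standard_normal((200, 8))
cleaned = regress_out(bold, phys)
```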

  11. Ionosphere TEC disturbances before strong earthquakes: observations, physics, modeling (Invited)

    Science.gov (United States)

    Namgaladze, A. A.

    2013-12-01

    The phenomenon of pre-earthquake ionospheric disturbances is discussed. A number of typical TEC (Total Electron Content) relative disturbances are presented for several recent strong earthquakes that occurred in different ionospheric conditions. Stable, typical TEC deviations from the quiet background state are observed a few days before strong seismic events in the vicinity of the earthquake epicenter and are treated as ionospheric earthquake precursors. They do not move away from the source, in contrast to disturbances related to geomagnetic activity. Sunlit-ionosphere conditions lead to a reduction of the disturbances up to their full disappearance, and the effects regenerate at night. The TEC disturbances are often observed in the magnetically conjugated areas as well. At low latitudes they are accompanied by modifications of the equatorial anomaly. The hypothesis of an electromagnetic channel for the creation of the pre-earthquake ionospheric disturbances is discussed. The lithosphere and ionosphere are coupled by vertical external electric currents resulting from ionization of the near-Earth air layer and vertical transport of charged particles through the atmosphere over the fault. External electric current densities exceeding the regular fair-weather electric currents by several orders of magnitude are required to produce stable, long-lived seismogenic electric fields such as those observed by onboard measurements of the 'Intercosmos-Bulgaria 1300' satellite over seismically active zones. Numerical calculations using the Upper Atmosphere Model demonstrate the ability of external electric currents with densities of 10⁻⁸-10⁻⁹ A/m² to produce such electric fields. The simulations reproduce the basic features of typical pre-earthquake TEC relative disturbances. It is shown that the plasma ExB drift under the action of the seismogenic electric field leads to changes of the F2-region electron number density and TEC. The upward drift velocity component enhances NmF2 and TEC and
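
    The precursor signal discussed here is a relative deviation of TEC from a quiet-time background. Purely as an illustration (not necessarily the author's exact recipe), one simple definition is the percentage departure from a trailing running median of the preceding days at the same hour:

```python
import numpy as np

def relative_tec_disturbance(tec, window_days=15, samples_per_day=24):
    """Percent deviation of hourly TEC from a trailing running median at the same hour of day."""
    tec = np.asarray(tec, float)
    dtec = np.full_like(tec, np.nan)                     # NaN where the background window is incomplete
    for i in range(window_days * samples_per_day, tec.size):
        background = np.median(tec[i - window_days * samples_per_day:i:samples_per_day])
        dtec[i] = 100.0 * (tec[i] - background) / background
    return dtec

# Invented hourly TEC with a diurnal cycle and a localized positive disturbance
hours = np.arange(24 * 60)
tec = 20.0 + 10.0 * np.cos(2 * np.pi * hours / 24)
tec[24 * 40:24 * 42] *= 1.3                              # +30% anomaly lasting two days
print("max relative disturbance: %.1f %%" % np.nanmax(relative_tec_disturbance(tec)))
```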

  12. Bicycle Rider Control: Observations, Modeling & Experiments

    OpenAIRE

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby develop well handling bicycles for specific user groups in a much shorter time span. The recent benchmarking of the Whipple bicycle model for the balance and steer of a bicycle is an opening enabling t...

  13. Reproducible research in palaeomagnetism

    Science.gov (United States)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined

  14. European cold winter 2009-2010: How unusual in the instrumental record and how reproducible in the ARPEGE-Climat model?

    Science.gov (United States)

    Ouzeau, G.; Cattiaux, J.; Douville, H.; Ribes, A.; Saint-Martin, D.

    2011-06-01

    Boreal winter 2009-2010 made headlines for cold anomalies in many countries of the northern mid-latitudes. Northern Europe was severely hit by this harsh winter in line with a record persistence of the negative phase of the North Atlantic Oscillation (NAO). In the present study, we first provide a wider perspective on how unusual this winter was by using the recent 20th Century Reanalysis. A weather regime analysis shows that the frequency of the negative NAO was unprecedented since winter 1939-1940, which is then used as a dynamical analog of winter 2009-2010 to demonstrate that the latter might have been much colder without the background global warming observed during the twentieth century. We then use an original nudging technique in ensembles of global atmospheric simulations driven by observed sea surface temperature (SST) and radiative forcings to highlight the relevance of the stratosphere for understanding if not predicting such anomalous winter seasons. Our results demonstrate that an improved representation of the lower stratosphere is necessary to reproduce not only the seasonal mean negative NAO signal, but also its intraseasonal distribution and the corresponding increased probability of cold waves over northern Europe.

  15. Towards Reproducibility in Computational Hydrology

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-04-01

    Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, and evolve (or reject) hypotheses and models of how environmental systems function, and move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produce published results are not regularly made available, and even if they are made available, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al. (2016) [1], we argue that a cultural change is required in the computational hydrological community, in order to advance and make more robust the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-usable; (2) create well-documented workflows that combine re-usable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as

  16. Television Advertising and Children's Observational Modeling.

    Science.gov (United States)

    Atkin, Charles K.

    This paper assesses advertising effects on children and adolescents from a social learning theory perspective, emphasizing imitative performance of vicariously reinforced consumption stimuli. The basic elements of social psychologist Albert Bandura's modeling theory are outlined. Then specific derivations from the theory are applied to the problem…

  17. Bicycle Rider Control : Observations, Modeling & Experiments

    NARCIS (Netherlands)

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby

  18. Observational consequences of a dark interaction model

    Energy Technology Data Exchange (ETDEWEB)

    Campos, M. de, E-mail: campos@if.uff.b [Roraima Federal University (UFRR), Paricarana, Boa Vista, RO (Brazil). Physics Dept.

    2010-12-15

    We study a model with decay of dark energy and creation of dark matter particles. We integrate the field equations and find the transition redshift at which the expansion of the universe changes to an accelerated phase, and we discuss the luminosity distance, acoustic oscillations and the statefinder parameters. (author)

  19. Opening Reproducible Research

    Science.gov (United States)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing such that research papers become available to the general public free of charge; it also refers to a trend in science in which the act of doing research becomes more open and transparent. When science transforms to open access we mean not only access to papers, research data being collected, or data being generated, but also access to the data used and the procedures carried out in the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are completely carried out computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or nature, computational experiments lend themselves very well to reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns in the data, fear of embarrassment or of losing a competitive advantage. Other reasons, however, involve technical aspects, and include the lack of standard procedures to publish such information and the lack of benefits after publishing them. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access, by improving the exchange of, by facilitating productive access to, and by simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  20. Spectrophotometric Modeling of MAHLI Goniometer Observations

    Science.gov (United States)

    Liang, W.; Johnson, J. R.; Hayes, A.; Lemmon, M. T.; Bell, J. F., III; Grundy, W. M.; Deen, R. G.

    2017-12-01

    The Mars Hand Lens Imager (MAHLI) on the Curiosity rover's robotic arm was used as a goniometer to acquire a multiple-viewpoint data set on sol 544 [1]. Images were acquired at 20 arm positions, all centered at the same location and from a near-constant distance of 1.0 m from the surface. Although this sequence was acquired at only one time of day (~13:30 LTST), it provided phase angle coverage from 0-110°. Images were converted to radiance from calibrated PDS files (DRXX) using radiance scaling factors and MAHLI focus position counts in an algorithm that rescaled the data to match the Mastcam M-34 calibration via comparison of sky images acquired during the mission. Converted MAHLI radiance values from an image of the Mastcam calibration target compared favorably in the red, green, and blue Bayer filters to M-34 radiance values from an image of the same target taken minutes afterwards. The 20 MAHLI images allowed construction of a digital terrain model (DTM), although images with shadows cast by the rover arm were more challenging to include. Their current absence restricts the lowest phase angles available to about 17°. The DTM enables calculation of surface normals that can be used with sky models to correct for diffuse reflectance on surface facets prior to Hapke modeling [cf. 2-6]. Regions of interest (ROIs) were extracted using one of the low emission-angle images as a template. ROI unit types included soils, light-toned surfaces (5 cm felsic rock "Nita"), dark-toned rocks with variable textures and dust cover, and larger areas representative of the average surface (see attached figure). These ROIs were translated from the template image to the other images through a matching of DTM three-dimensional coordinates. Preliminary phase curves (prior to atmospheric correction) show that soil-dominated surfaces are most backscattering, whereas rocks are least backscattering, and light-toned surfaces exhibit wavelength-dependent scattering. Future work will

  1. Interacting Dark Energy Models and Observations

    Science.gov (United States)

    Shojaei, Hamed; Urioste, Jazmin

    2017-01-01

    Dark energy is one of the mysteries of the twenty-first century. Although there are candidates resembling some features of dark energy, there is no single model describing all the properties of dark energy. Dark energy is believed to be the most dominant component of the cosmic inventory, but many models do not consider any interaction between dark energy and other constituents of the cosmic inventory. Introducing an interaction changes the equations governing the behavior of dark energy and matter and creates new ways to explain the cosmic coincidence problem. In this work we studied how the Hubble parameter and density parameters evolve with time in the presence of certain types of interaction. The interaction serves as a way to convert dark energy into matter to avoid a dark energy-dominated universe by creating new equilibrium points for the differential equations. We then use numerical analysis to predict the values of distance moduli at different redshifts and compare them to the values for the distance moduli obtained by WMAP (Wilkinson Microwave Anisotropy Probe). Undergraduate Student
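
    A minimal numerical sketch of the kind of calculation described: evolve the matter and dark-energy densities with an assumed interaction term Q = 3*delta*H*rho_de (the coupling form and parameter values below are illustrative, not those of the study), then convert comoving distance to a distance modulus for comparison with observed values:

        import numpy as np
        from scipy.integrate import solve_ivp, quad

        # Illustrative parameters (not values from the record).
        H0 = 70.0                    # Hubble constant, km/s/Mpc
        c = 299792.458               # speed of light, km/s
        om0, ode0 = 0.3, 0.7         # present-day matter / dark-energy density parameters
        delta = 0.05                 # strength of the assumed interaction Q = 3*delta*H*rho_de
        w = -1.0                     # dark-energy equation of state

        def derivs(lna, y):
            # Densities in units of the present critical density; H in units of H0.
            rho_m, rho_de = y
            H = np.sqrt(rho_m + rho_de)
            Q = 3.0 * delta * H * rho_de            # energy transfer: dark energy -> matter
            return [-3.0 * rho_m + Q / H,
                    -3.0 * (1.0 + w) * rho_de - Q / H]

        # Integrate backwards from today (ln a = 0) to z = 2.
        sol = solve_ivp(derivs, [0.0, np.log(1.0 / 3.0)], [om0, ode0], dense_output=True)

        def E(z):                                   # dimensionless Hubble rate H(z)/H0
            rho_m, rho_de = sol.sol(-np.log(1.0 + z))
            return np.sqrt(rho_m + rho_de)

        def distance_modulus(z):
            dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)    # comoving distance in units of c/H0
            d_lum = (1.0 + z) * (c / H0) * dc               # luminosity distance, Mpc
            return 5.0 * np.log10(d_lum) + 25.0

        for z in (0.1, 0.5, 1.0):
            print(f"z = {z}: mu = {distance_modulus(z):.2f}")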

  2. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    Science.gov (United States)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model, rather than from the representation of physical processes governing the interactions between aerosols and clouds. Compared to the current observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and there was no clearly superior parameterization for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.
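
    One widely used surface-based ACI metric of the kind referred to here relates cloud droplet number to an aerosol proxy in log-log space, ACI_N = d ln(N_d) / d ln(alpha); the specific metric and proxies used in the study are not given in this record. A minimal sketch on synthetic data:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic illustration: aerosol proxy (e.g., CCN concentration) and cloud
        # droplet number generated with an assumed true ACI_N of 0.6 plus scatter.
        aerosol = rng.uniform(100.0, 2000.0, size=200)              # cm^-3
        droplets = 5.0 * aerosol**0.6 * rng.lognormal(0.0, 0.2, size=200)

        # ACI_N = d ln(N_d) / d ln(aerosol): slope of a least-squares fit in log-log space.
        slope, intercept = np.polyfit(np.log(aerosol), np.log(droplets), 1)
        print(f"estimated ACI_N = {slope:.2f}")                     # ~0.6 for this synthetic data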

  3. Comparison of observed and modeled longwave radiances

    Science.gov (United States)

    Stone, Kenneth; Coakley, J. A., Jr.

    1990-01-01

    Calculated LW radiances based on NMC profiles of temperature and humidity for the month of July 1985 are obtained using standard procedures for performing radiative transfer calculations, and are within 3 percent (against a standard deviation of 4 percent) for global daytime land comparisons and within 1 percent (against a standard deviation of 1.5 percent) for a case study located over North America. The calculated values over the global data set show a slight trend with the surface temperature, and since there is no obvious trend with the column amount of water vapor, it is argued that the trend with temperature is evidence that absorption by other components (i.e., CO2, O3, and other trace gases not included in these calculations) in the model could be improved.

  4. A NANOFLARE-BASED CELLULAR AUTOMATON MODEL AND THE OBSERVED PROPERTIES OF THE CORONAL PLASMA

    Energy Technology Data Exchange (ETDEWEB)

    Fuentes, Marcelo López [Instituto de Astronomía y Física del Espacio, CONICET-UBA, CC. 67, Suc. 28, 1428 Buenos Aires (Argentina); Klimchuk, James A., E-mail: lopezf@iafe.uba.ar [NASA Goddard Space Flight Center, Code 671, Greenbelt, MD 20771 (United States)

    2016-09-10

    We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  5. A Nanoflare-Based Cellular Automaton Model and the Observed Properties of the Coronal Plasma

    Science.gov (United States)

    Lopez-Fuentes, Marcelo; Klimchuk, James Andrew

    2016-01-01

    We use the cellular automaton model described in Lopez Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10 percent - 15 percent both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  6. Charge state evolution in the solar wind. III. Model comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)

    2014-08-01

    We test three theoretical models of the fast solar wind with a set of remote sensing observations and in-situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models are able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Last, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.
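
    The charge-state calculation referred to here follows the standard ionization-recombination balance along the flow, dy_i/dt = n_e [ C_(i-1) y_(i-1) + R_(i+1) y_(i+1) - (C_i + R_i) y_i ], where y_i are ion fractions, C_i ionization rate coefficients and R_i recombination rate coefficients; the fractions "freeze in" once the electron density becomes too low for further change. A toy three-state sketch with made-up, constant rate coefficients and a prescribed density decline (illustrative only; real calculations use temperature-dependent tabulated rates such as those behind CHIANTI):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy three-charge-state example with made-up, constant rate coefficients
        # (cm^3/s); real calculations use temperature-dependent tabulated rates.
        C = np.array([1e-9, 1e-10, 0.0])     # ionization out of state i -> i+1
        R = np.array([0.0, 1e-11, 1e-10])    # recombination out of state i -> i-1

        def ne(t):
            # Electron density decreasing as the parcel moves outward (cm^-3, illustrative).
            return 1e8 * np.exp(-t / 2000.0)

        def dydt(t, y):
            dy = np.zeros_like(y)
            for i in range(len(y)):
                gain = (C[i - 1] * y[i - 1] if i > 0 else 0.0) + \
                       (R[i + 1] * y[i + 1] if i < len(y) - 1 else 0.0)
                loss = (C[i] + R[i]) * y[i]
                dy[i] = ne(t) * (gain - loss)
            return dy

        # Start fully in the lowest charge state; integrate until the density is so
        # low that the ion fractions "freeze in".
        sol = solve_ivp(dydt, [0.0, 5e4], [1.0, 0.0, 0.0], rtol=1e-8)
        print("frozen-in charge-state fractions:", sol.y[:, -1])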

  7. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model which interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and which is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare it to available observations. We focus on stratospheric dynamics and chemistry relevant to describing the stratospheric ozone layer. The current model version ECHAM4.L39(DLR)/CHEM can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to formerly applied model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  8. Correlation between human observer performance and model observer performance in differential phase contrast CT

    International Nuclear Information System (INIS)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-01-01

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD

  9. A right to reproduce?

    Science.gov (United States)

    Emson, H E

    1992-10-31

    Conscious control of the environment by Homo sapiens has brought almost total release from the controls of ecology that limit the population of all other species. After a mere 10,000 years, humans have brought the planet close to collapse, and all the debate in the world seems unlikely to save it. A combination of uncontrolled breeding and rapacity is propelling us down the slippery slope first envisioned by Malthus, dragging the rest of the planet along. Only the conscious, and most likely voluntary, reimposition of controls on breeding will reduce the overgrowth of humans, and we have far to go in that direction. "According to the United Nations Universal Declaration of Human Rights (1948, articles 16[I] and 16 [III]), Men and women of full age without any limitation due to race, nationality or religion have the right to marry and to found a family ... the family is the natural and fundamental group unit of society." The rhetoric of rights without the balancing of responsibilities is wrong in health care, and even more wrong in the context of world population. The mind-set of dominance and exploitation over the rest of creation has meant human reluctance to admit participation in a system where every part is interdependent. We must balance the right to reproduce with its responsible use, valuing interdependence, understanding, and respect with a duty not to unbalance, damage, or destroy. It is long overdue that we discard every statement of right that is unmatched by the equivalent duty and responsibility.

  10. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  11. Results of an interactively coupled atmospheric chemistry – general circulation model: Comparison with observations

    Directory of Open Access Journals (Sweden)

    R. Hein

    2001-04-01

    Full Text Available The coupled climate-chemistry model ECHAM4.L39(DLR)/CHEM is presented which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks. This is the first model which interactively combines a general circulation model with a chemical model, employing most of the important reactions and species necessary to describe the stratospheric and upper tropospheric ozone chemistry, and which is computationally fast enough to allow long-term integrations with currently available computer resources. This is possible as the model time-step used for the chemistry can be chosen as large as the integration time-step for the dynamics. Vertically the atmosphere is discretized by 39 levels from the surface up to the top layer which is centred at 10 hPa, with a relatively high vertical resolution of approximately 700 m near the extra-tropical tropopause. We present the results of a control simulation representing recent conditions (1990) and compare it to available observations. The focus is on investigations of stratospheric dynamics and chemistry relevant to describing the stratospheric ozone layer. ECHAM4.L39(DLR)/CHEM reproduces the main features of stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to earlier model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their inter-hemispheric differences are reproduced. Considering methane oxidation as part of the dynamic-chemistry feedback results in an improved representation of the spatial distribution of stratospheric water vapour concentrations. The current model constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic

  12. Results of an interactively coupled atmospheric chemistry – general circulation model: Comparison with observations

    Directory of Open Access Journals (Sweden)

    R. Hein

    Full Text Available The coupled climate-chemistry model ECHAM4.L39(DLR)/CHEM is presented which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks. This is the first model which interactively combines a general circulation model with a chemical model, employing most of the important reactions and species necessary to describe the stratospheric and upper tropospheric ozone chemistry, and which is computationally fast enough to allow long-term integrations with currently available computer resources. This is possible as the model time-step used for the chemistry can be chosen as large as the integration time-step for the dynamics. Vertically the atmosphere is discretized by 39 levels from the surface up to the top layer which is centred at 10 hPa, with a relatively high vertical resolution of approximately 700 m near the extra-tropical tropopause. We present the results of a control simulation representing recent conditions (1990) and compare it to available observations. The focus is on investigations of stratospheric dynamics and chemistry relevant to describing the stratospheric ozone layer. ECHAM4.L39(DLR)/CHEM reproduces the main features of stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to earlier model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their inter-hemispheric differences are reproduced. Considering methane oxidation as part of the dynamic-chemistry feedback results in an improved representation of the spatial distribution of stratospheric water vapour concentrations. The current model constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic

  13. Evaluation of Land Surface Models in Reproducing Satellite-Derived LAI over the High-Latitude Northern Hemisphere. Part I: Uncoupled DGVMs

    Directory of Open Access Journals (Sweden)

    Ning Zeng

    2013-10-01

    Full Text Available Leaf Area Index (LAI) represents the total surface area of leaves above a unit area of ground and is a key variable in any vegetation model, as well as in climate models. New high-resolution LAI satellite data are now available, covering a period of several decades. This provides a unique opportunity to validate LAI estimates from multiple vegetation models. The objective of this paper is to compare new, satellite-derived LAI measurements with modeled output for the Northern Hemisphere. We compare monthly LAI output from eight land surface models from the TRENDY compendium with satellite data from an Artificial Neural Network (ANN) from the latest version (third generation) of GIMMS AVHRR NDVI data over the period 1986–2005. Our results show that all the models overestimate the mean LAI, particularly over the boreal forest. We also find that seven out of the eight models overestimate the length of the active vegetation-growing season, mostly due to a late dormancy as a result of a late summer phenology. Finally, we find that the models report a much larger positive trend in LAI over this period than the satellite observations suggest, which translates into a higher trend in the growing season length. These results highlight the need to incorporate a larger number of more accurate plant functional types in all models and, in particular, to improve the phenology of deciduous trees.

  14. Observation and modelling of main-sequence star chromospheres - XIX. FIES and FEROS observations of dM1 stars

    Science.gov (United States)

    Houdebine, E. R.; Butler, C. J.; Garcia-Alvarez, D.; Telting, J.

    2012-10-01

    We present 187 high-resolution spectra for 62 different M1 dwarfs from observations obtained with the FIbre-fed Echelle Spectrograph (FIES) on the Nordic Optical Telescope (NOT) and from observations with the Fibre-fed Extended Range Echelle Spectrograph (FEROS) from the European Southern Observatory (ESO) data base. We also compiled other measurements available in the literature. We observed two stars, Gl 745A and Gl 745B, with no Ca II line core emission and Hα line equivalent widths (EWs) of only 0.171 and 0.188 Å, respectively. We also observed another very low activity M1 dwarf, Gl 63, with an Hα line EW of only 0.199 Å. These are the lowest activity M dwarfs ever observed and are of particular interest for the non-local thermodynamic equilibrium radiative transfer modelling of M1 dwarfs. Thanks to the high signal-to-noise ratio of most of our spectra, we were able to measure the Ca II H&K full width at half-maximum (FWHM) for most of our stars. We find good correlations between the FWHM values and the mean Ca II line EW for dM1 stars. The FWHM then seems to saturate for dM1e stars. Our previous models of M1 dwarfs can reproduce the FWHM for dM1e stars and the most active dM1 stars, but fail to reproduce the observations of lower activity M1 dwarfs. We believe this is due to an effect of metallicity. We also investigate the dependence of the Hα line FWHM on its EW. We find that the models globally agree with the observations including subdwarfs, but tend to produce profiles that are too narrow for dM1e stars. We re-investigate the correlation between the Ca II line mean EW and the absolute magnitude. With our new data that notably include several M1 subdwarfs, we find a slightly different and better correlation with a slope of -0.779 instead of -0.909. We also re-investigate the variations of the Hα line EW as a function of radius and find that the EW increases continuously with increasing radius. This confirms our previous finding that the level of

  15. Confronting Weather and Climate Models with Observational Data from Soil Moisture Networks over the United States

    Science.gov (United States)

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean; hide

    2016-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.
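
    Soil moisture memory, one of the three metrics above, is commonly estimated as the lag at which the temporal autocorrelation of the (anomaly) series drops below 1/e; the exact convention used in the study is not given in this record. A minimal sketch on a synthetic daily series:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic daily soil-moisture anomalies: an AR(1) (red-noise) process whose
        # true e-folding time is about 20 days.
        n, phi = 3650, np.exp(-1.0 / 20.0)
        sm = np.zeros(n)
        for t in range(1, n):
            sm[t] = phi * sm[t - 1] + rng.normal(0.0, 1.0)

        def memory_efolding(series, max_lag=120):
            """Lag (days) at which the autocorrelation first drops below 1/e."""
            x = series - series.mean()
            var = np.dot(x, x) / len(x)
            for lag in range(1, max_lag + 1):
                r = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
                if r < 1.0 / np.e:
                    return lag
            return max_lag

        print("soil moisture memory (days):", memory_efolding(sm))   # roughly 20 for this series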

  16. A construction of observables for AKSZ sigma models

    OpenAIRE

    Mnev, Pavel

    2012-01-01

    A construction of gauge-invariant observables is suggested for a class of topological field theories, the AKSZ sigma-models. The observables are associated to extensions of the target Q-manifold of the sigma model to a Q-bundle over it with additional Hamiltonian structure in fibers.

  17. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

    We study the island universe model, in which initially the universe is in a cosmological constant sea, and then local quantum fluctuations violating the null energy condition create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.

  18. modeling, observation and control, a multi-model approach

    OpenAIRE

    Elkhalil, Mansoura

    2011-01-01

    This thesis is devoted to the control of systems whose dynamics can be suitably described by a multimodel approach, motivated by a study of performance enhancement for model reference adaptive control. Four multimodel control approaches have been proposed. The first approach is based on an output reference model control design. A successful experimental validation involving a chemical reactor has been carried out. The second approach is based on a suitable partial state model reference ...

  19. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  20. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

    Given the vast variety of fuzzy model-based observers reported in the literature, which one is the proper choice for fault detection in a class of chemical reactors? In this study four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) a Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
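
    For reference, the simplest of the listed designs, a (non-fuzzy) Luenberger observer, generates a fault-detection residual r = y - C*x_hat from the estimator update x_hat(k+1) = A*x_hat(k) + B*u(k) + L*(y(k) - C*x_hat(k)); a sensor fault shows up as a jump in the residual. A minimal discrete-time sketch with an illustrative two-state system (matrices and fault chosen for demonstration, not the CSTR of the paper):

        import numpy as np

        # Illustrative discrete-time two-state system (not the CSTR from the paper).
        A = np.array([[0.95, 0.05],
                      [0.00, 0.90]])
        B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])
        L = np.array([[0.4], [0.2]])          # observer gain (chosen so A - L*C is stable)

        x = np.array([[0.0], [0.0]])          # true state
        x_hat = np.array([[0.5], [-0.5]])     # observer state, deliberately wrong at start
        u = np.array([[1.0]])

        for k in range(200):
            y = C @ x
            if k >= 100:                       # additive sensor fault injected halfway through
                y = y + 0.5
            residual = y - C @ x_hat
            # Luenberger update: x_hat(k+1) = A x_hat(k) + B u(k) + L (y - C x_hat(k))
            x_hat = A @ x_hat + B @ u + L @ residual
            x = A @ x + B @ u
            if k % 50 == 0 or k == 199:
                print(f"k={k:3d}  residual={residual[0, 0]:+.3f}")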

  1. WDAC Task Team on Observations for Model Evaluation: Facilitating the use of observations for CMIP

    Science.gov (United States)

    Waliser, D. E.; Gleckler, P. J.; Ferraro, R.; Eyring, V.; Bosilovich, M. G.; Schulz, J.; Thepaut, J. N.; Taylor, K. E.; Chepfer, H.; Bony, S.; Lee, T. J.; Joseph, R.; Mathieu, P. P.; Saunders, R.

    2015-12-01

    Observations are essential for the development and evaluation of climate models. Satellite and in-situ measurements as well as reanalysis products provide crucial resources for these purposes. Over the last two decades, the climate modeling community has become adept at developing model intercomparison projects (MIPs) that provide the basis for more systematic comparisons of climate models under common experimental conditions. A prominent example among these is the coupled MIP (CMIP). Due to its growing importance in providing input to the IPCC, the framework for CMIP, now planning CMIP6, has expanded to include a very comprehensive and precise set of experimental protocols, with an advanced data archive and dissemination system. While the number, types and sophistication of observations over the same time period have kept pace, their systematic application to the evaluation of climate models has yet to be fully exploited due to a lack of coordinated protocols for identifying, archiving, documenting and applying observational resources. This presentation will discuss activities and plans of the World Climate Research Program (WCRP) Data Advisory Council's (WDAC) Task Team on Observations for Model Evaluation for facilitating the use of observations for model evaluation. The presentation will include an update on the status of the obs4MIPs and ana4MIPs projects, whose purpose is to provide a limited collection of well-established and documented observation and reanalysis datasets for comparison with Earth system models, targeting CMIP in particular. The presentation will also describe the role these activities and datasets play in the development of a set of community standard observation-based climate model performance metrics by the Working Group on Numerical Experimentation (WGNE)'s Performance Metrics Panel, as well as which CMIP6 experiments these activities are targeting, and where additional community input and contributions to these activities are needed.

  2. A Unimodal Model for Double Observer Distance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Earl F Becker

    Full Text Available Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
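
    A two-piece normal detection function of the kind described has a single apex at some perpendicular distance, with different spreads on either side of it. A minimal sketch (parameter values are illustrative, not estimates from the Alaska survey):

        import numpy as np

        def two_piece_normal_detection(x, mode, sigma_left, sigma_right, p_max=1.0):
            """Unimodal detection probability with one apex at `mode` and different
            spreads on either side (illustrative parameterization)."""
            x = np.asarray(x, dtype=float)
            sigma = np.where(x < mode, sigma_left, sigma_right)
            return p_max * np.exp(-0.5 * ((x - mode) / sigma) ** 2)

        # Example: detection peaks 60 m from the transect line (as can happen in
        # aerial surveys) and falls off more slowly on the far side.
        distances = np.linspace(0.0, 400.0, 9)
        print(np.round(two_piece_normal_detection(distances, mode=60.0,
                                                  sigma_left=40.0, sigma_right=120.0), 3))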

  3. Resolving a long-standing model-observation discrepancy on ozone solar-cycle response

    Science.gov (United States)

    Li, K. F.; Zhang, Q.; Tung, K. K.; Yung, Y. L.

    2016-12-01

    To have the capability for long-term prediction of stratospheric ozone (O3), chemistry-climate models have often been tested against observations on decadal time scales. A model-observation discrepancy in the tropical O3 response to the 11-year solar cycle, first noted in 1993, persists for more than 20 years: While standard photochemical models predict a single-peak response in the stratosphere, satellite observations show an unexpected double-peak structure. Various studies have explored uncertainties in photochemistry and dynamics but there has not been compelling evidence of model biases. Here we suggest that decadal satellite orbital drifts relative to the diurnal cycle could be the primary cause of the discrepancy. We show that the double-peak structure can be reproduced by adding the AM/PM diurnal difference to the single-peak response predicted by the standard photochemistry. This work illustrates the importance of a synergistic and iterative process involving observations and theory in search of robust climate signals.

  4. Assimilating uncertain, dynamic and intermittent streamflow observations in hydrological models

    Science.gov (United States)

    Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon-Hurtado, Juan; Solomatine, Dimitri

    2015-09-01

    Catastrophic floods cause significant socio-economic losses. Non-structural measures, such as real-time flood forecasting, can potentially reduce flood risk. To this end, data assimilation methods have been used to improve flood forecasts by integrating static ground observations, and in some cases also remote sensing observations, within water models. Current hydrologic and hydraulic research works consider assimilation of observations coming from traditional, static sensors. At the same time, low-cost mobile sensors and mobile communication devices are also becoming increasingly available. The main goal and innovation of this study is to demonstrate the usefulness of assimilating uncertain streamflow observations that are dynamic in space and intermittent in time in the context of two different semi-distributed hydrological model structures. The developed method is applied to the Brue basin, where the dynamic observations are imitated by synthetic observations of discharge. The results of this study show how model structures and sensor locations affect the assimilation of streamflow observations in different ways. In addition, it shows that assimilation of such uncertain observations from dynamic sensors can provide model improvements similar to those from streamflow observations coming from a non-optimal network of static physical sensors. This can be a potential application of recent efforts to build citizen observatories of water, which can make citizens an active part in information capturing, evaluation and communication, simultaneously helping to improve model-based flood forecasting.
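
    One common way to assimilate such an uncertain discharge observation is an ensemble Kalman style update of the model states; the record does not specify the exact scheme used, so the sketch below is generic, with an invented storage-to-discharge observation operator and illustrative numbers:

        import numpy as np

        rng = np.random.default_rng(7)

        # Generic ensemble update of a model-state ensemble (e.g., a storage that maps
        # to streamflow) given one uncertain discharge observation. Illustrative only.
        n_ens = 50
        state = rng.normal(100.0, 15.0, size=n_ens)      # prior ensemble of storage (mm)

        def h(s):                                        # toy observation operator: storage -> discharge
            return 0.3 * s

        q_obs, q_err = 36.0, 4.0                         # observed discharge and its uncertainty (std)
        sim = h(state)

        # Kalman gain from ensemble statistics: K = cov(x, Hx) / (var(Hx) + R)
        K = np.cov(state, sim)[0, 1] / (np.var(sim, ddof=1) + q_err**2)
        perturbed_obs = q_obs + rng.normal(0.0, q_err, size=n_ens)
        analysis = state + K * (perturbed_obs - sim)

        print(f"prior mean storage   : {state.mean():6.1f}")
        print(f"analysis mean storage: {analysis.mean():6.1f}")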

  5. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  6. Evaluation of internal noise methods for Hotelling observer models

    International Nuclear Information System (INIS)

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-01-01

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality
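
    A minimal sketch of the second strategy described above, adding zero-mean internal noise to the decision variable with a standard deviation proportional to the decision variable's standard deviation under external noise; the decision variables here are synthetic Gaussians chosen purely for illustration:

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic decision variables from some linear model observer applied to
        # signal-absent and signal-present trials (external noise only), illustration only.
        n_trials = 20000
        t_absent = rng.normal(0.0, 1.0, size=n_trials)
        t_present = rng.normal(1.5, 1.0, size=n_trials)

        def dprime(a, b):
            return (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

        # Internal noise proportional to the external-noise standard deviation of the
        # decision variable (the proportionality constant is chosen arbitrarily here).
        alpha = 0.8
        sigma_int = alpha * t_absent.std()
        t_absent_int = t_absent + rng.normal(0.0, sigma_int, size=n_trials)
        t_present_int = t_present + rng.normal(0.0, sigma_int, size=n_trials)

        print(f"d' without internal noise: {dprime(t_absent, t_present):.2f}")          # ~1.5
        print(f"d' with internal noise   : {dprime(t_absent_int, t_present_int):.2f}")  # ~1.5/sqrt(1+alpha^2)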

  7. Detecting influential observations in nonlinear regression modeling of groundwater flow

    Science.gov (United States)

    Yager, Richard M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
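
    For reference, in the linear (or approximately linear) regression setting these statistics have simple closed forms: Cook's D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2), and DFBETAS_ij is the standardized change in parameter j when observation i is omitted. A minimal sketch on synthetic data (not the groundwater-flow model of the paper):

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic linear model with one deliberately influential observation.
        n, p = 30, 2
        X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, size=n)])
        y = X @ np.array([2.0, 0.5]) + rng.normal(0.0, 0.3, size=n)
        y[-1] += 5.0                                     # make the last point an outlier

        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat matrix
        h = np.diag(H)
        e = y - X @ beta
        s2 = e @ e / (n - p)

        # Cook's D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)
        cooks_d = e**2 * h / (p * s2 * (1.0 - h)**2)

        # DFBETAS: standardized change in each coefficient when observation i is dropped.
        dfbetas = np.zeros((n, p))
        for i in range(n):
            keep = np.ones(n, dtype=bool)
            keep[i] = False
            beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
            e_i = y[keep] - X[keep] @ beta_i
            s2_i = e_i @ e_i / (keep.sum() - p)
            se = np.sqrt(s2_i * np.diag(np.linalg.inv(X.T @ X)))
            dfbetas[i] = (beta - beta_i) / se

        print("most influential point by Cook's D:", int(np.argmax(cooks_d)))   # typically the outlier, index 29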

  8. Production process reproducibility and product quality consistency of transient gene expression in HEK293 cells with anti-PD1 antibody as the model protein.

    Science.gov (United States)

    Ding, Kai; Han, Lei; Zong, Huifang; Chen, Junsheng; Zhang, Baohong; Zhu, Jianwei

    2017-03-01

    Demonstration of reproducibility and consistency of process and product quality is one of the most crucial issues in using transient gene expression (TGE) technology for biopharmaceutical development. In this study, we challenged the production consistency of TGE by expressing nine batches of recombinant IgG antibody in human embryonic kidney 293 cells to evaluate reproducibility including viable cell density, viability, apoptotic status, and antibody yield in cell culture supernatant. Product quality including isoelectric point, binding affinity, secondary structure, and thermal stability was assessed as well. In addition, major glycan forms of antibody from different batches of production were compared to demonstrate glycosylation consistency. Glycan compositions of the antibody harvested at different time periods were also measured to illustrate N-glycan distribution over the culture time. From the results, it has been demonstrated that different TGE batches are reproducible from lot to lot in overall cell growth, product yield, and product qualities including isoelectric point, binding affinity, secondary structure, and thermal stability. Furthermore, major N-glycan compositions are consistent among different TGE batches and conserved during cell culture time.

  9. A Self-consistent Cloud Model for Brown Dwarfs and Young Giant Exoplanets: Comparison with Photometric and Spectroscopic Observations

    Science.gov (United States)

    Charnay, B.; Bézard, B.; Baudino, J.-L.; Bonnefoy, M.; Boccaletti, A.; Galicher, R.

    2018-02-01

    We developed a simple, physical, and self-consistent cloud model for brown dwarfs and young giant exoplanets. We compared different parametrizations for the cloud particle size, by fixing either particle radii or the mixing efficiency (parameter f_sed), or by estimating particle radii from simple microphysics. The cloud scheme with simple microphysics appears to be the best parametrization by successfully reproducing the observed photometry and spectra of brown dwarfs and young giant exoplanets. In particular, it reproduces the L–T transition, due to the condensation of silicate and iron clouds below the visible/near-IR photosphere. It also reproduces the reddening observed for low-gravity objects, due to an increase of cloud optical depth for low gravity. In addition, we found that the cloud greenhouse effect shifts chemical equilibrium, increasing the abundances of species stable at high temperature. This effect should significantly contribute to the strong variation of methane abundance at the L–T transition and to the methane depletion observed on young exoplanets. Finally, we predict the existence of a continuum of brown dwarfs and exoplanets for absolute J magnitude = 15–18 and J-K color = 0–3, due to the evolution of the L–T transition with gravity. This self-consistent model therefore provides a general framework to understand the effects of clouds and appears well-suited for atmospheric retrievals.

  10. COCOA code for creating mock observations of star cluster models

    Science.gov (United States)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2018-04-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.

  11. Southeast Atmosphere Studies: learning from model-observation syntheses

    Data.gov (United States)

    U.S. Environmental Protection Agency — Observed and modeled data shown in figure 2b-c. This dataset is associated with the following publication: Mao, J., A. Carlton, R. Cohen, W. Brune, S. Brown, G....

  12. Meridional Flow Observations: Implications for the current Flux Transport Models

    International Nuclear Information System (INIS)

    Gonzalez Hernandez, Irene; Komm, Rudolf; Kholikov, Shukur; Howe, Rachel; Hill, Frank

    2011-01-01

    Meridional circulation has become a key element in the solar dynamo flux transport models. Available helioseismic observations from several instruments, Taiwan Oscillation Network (TON), Global Oscillation Network Group (GONG) and Michelson Doppler Imager (MDI), have made possible a continuous monitoring of the solar meridional flow in the subphotospheric layers for the last solar cycle, including the recent extended minimum. Here we review some of the meridional circulation observations using local helioseismology techniques and relate them to magnetic flux transport models.

  13. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Energy Technology Data Exchange (ETDEWEB)

    Meloni, Davide [Dipartimento di Matematica e Fisica, Università di Roma Tre, Via della Vasca Navale 84, 00146 Rome (Italy)]; Ohlsson, Tommy; Riad, Stella [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology - AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden)]

    2017-03-08

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10_H, 120_H, and 126_H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M_I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimentally measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with an SM-like evolution, leading to differences in the numerical values up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10_H and 126_H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.

  14. Runoff modeling of the Mara River using satellite observed soil ...

    African Journals Online (AJOL)

    The model is developed based on the relationships found between satellite observed soil moisture and rainfall and the measured runoff. It uses the satellite observed rainfall as the prime forcing, and the soil moisture to separate the fast surface runoff and slow base flow contributions. The soil moisture and rainfall products ...

  15. Observational Data-Driven Modeling and Optimization of Manufacturing Processes

    OpenAIRE

    Sadati, Najibesadat; Chinnam, Ratna Babu; Nezhad, Milad Zafar

    2017-01-01

    The dramatic increase of observational data across industries provides unparalleled opportunities for data-driven decision making and management, including the manufacturing industry. In the context of production, data-driven approaches can exploit observational data to model, control and improve the process performance. When supplied by observational data with adequate coverage to inform the true process performance dynamics, they can overcome the cost associated with intrusive controlled de...

  16. Time-symmetric universe model and its observational implication

    Energy Technology Data Exchange (ETDEWEB)

    Futamase, T.; Matsuda, T.

    1987-08-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. We consider the observational consequences of such advanced waves, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase.

  17. A time-symmetric Universe model and its observational implication

    International Nuclear Information System (INIS)

    Futamase, T.; Matsuda, T.

    1987-01-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. The observational consequences of such advanced waves are considered, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase

  18. Cosmological observables in the quasi-spherical Szekeres model

    Science.gov (United States)

    Buckley, Robert G.

    2014-10-01

    The standard model of cosmology presents a homogeneous universe, and we interpret cosmological data through this framework. However, structure growth creates nonlinear inhomogeneities that may affect observations, and even larger structures may be hidden by our limited vantage point and small number of independent observations. As we determine the universe's parameters with increasing precision, the accuracy is contingent on our understanding of the effects of such structures. For instance, giant void models can explain some observations without dark energy. Because perturbation theory cannot adequately describe nonlinear inhomogeneities, exact solutions to the equations of general relativity are important for these questions. The most general known solution capable of describing inhomogeneous matter distributions is the Szekeres class of models. In this work, we study the quasi-spherical subclass of these models, using numerical simulations to calculate the inhomogeneities' effects on observations. We calculate the large-angle CMB in giant void models and compare with simpler, symmetric void models that have previously been found inadequate to match observations. We extend this by considering models with early-time inhomogeneities as well. Then, we study distance observations, including selection effects, in models which are homogeneous on scales around 100 Mpc, consistent with standard cosmology, but inhomogeneous on smaller scales. Finally, we consider photon polarizations, and show that they are not directly affected by inhomogeneities. Overall, we find that while Szekeres models have some advantages over simpler models, they are still seriously limited in their ability to alter our parameter estimation while remaining within the bounds of current observations.

  19. Contextual sensitivity in scientific reproducibility.

    Science.gov (United States)

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-07

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.
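
    The analysis described above (testing whether coded contextual sensitivity predicts replication success while statistically adjusting for methodological covariates) can be illustrated with a simple logistic regression. The sketch below is not the authors' analysis; the input file and column names are hypothetical placeholders.

        # Illustrative sketch (not the authors' code): logistic regression of
        # replication success on coded contextual sensitivity, adjusting for
        # methodological covariates such as statistical power and effect size.
        # The CSV file and column names are hypothetical placeholders.
        import pandas as pd
        import statsmodels.formula.api as smf

        studies = pd.read_csv("reproducibility_project_codings.csv")
        fit = smf.logit(
            "replicated ~ contextual_sensitivity + statistical_power + original_effect_size",
            data=studies,
        ).fit()
        # The sign and size of the contextual_sensitivity coefficient indicate its
        # association with replication success after the adjustment.
        print(fit.summary())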

  20. Asymptotic behavior of observables in the asymmetric quantum Rabi model

    Science.gov (United States)

    Semple, J.; Kollar, M.

    2018-01-01

    The asymmetric quantum Rabi model with broken parity invariance shows spectral degeneracies in the integer case, that is when the asymmetry parameter equals an integer multiple of half the oscillator frequency, thus hinting at a hidden symmetry and accompanying integrability of the model. We study the expectation values of spin observables for each eigenstate and observe characteristic differences between the integer and noninteger cases for the asymptotics in the deep strong coupling regime, which can be understood from a perturbative expansion in the qubit splitting. We also construct a parent Hamiltonian whose exact eigenstates possess the same symmetries as the perturbative eigenstates of the asymmetric quantum Rabi model in the integer case.
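
    A commonly used form of the asymmetric quantum Rabi Hamiltonian is sketched below in LaTeX. Sign and normalization conventions for the bias term differ between papers, so this should be read as an assumed convention rather than the one used in the study; the "integer case" mentioned above corresponds to the bias equalling an integer multiple of half the oscillator frequency.

        % One common convention (assumed here) for the asymmetric quantum Rabi model:
        H = \omega\, a^{\dagger} a + \frac{\Delta}{2}\,\sigma_z
            + g\,\sigma_x \left( a + a^{\dagger} \right) + \frac{\epsilon}{2}\,\sigma_x
        % spectral degeneracies ("integer case") occur for \epsilon = k\,\omega/2, k integer.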

  1. Tests of Financial Models in the Presence of Overlapping Observations.

    OpenAIRE

    Richardson, Matthew; Smith, Tom

    1991-01-01

    A general approach to testing serial dependence restrictions implied from financial models is developed. In particular, we discuss joint serial dependence restrictions imposed by random walk, market microstructure, and rational expectations models recently examined in the literature. This approach incorporates more information from the data by explicitly modeling dependencies induced by the use of overlapping observations. Because the estimation problem is sufficiently simple in this framewor...

  2. Inverse modeling of interbed storage parameters using land subsidence observations, Antelope Valley, California

    Science.gov (United States)

    Hoffmann, J.; Galloway, D.L.; Zebker, H.A.

    2003-01-01

    We use land-subsidence observations from repeatedly surveyed benchmarks and interferometric synthetic aperture radar (InSAR) in Antelope Valley, California, to estimate spatially varying compaction time constants, τ, and inelastic specific skeletal storage coefficients, Skv*, in a previously calibrated regional groundwater flow and subsidence model. The observed subsidence patterns reflect both the spatial distribution of head declines and the spatially variable inelastic skeletal storage coefficient. Using the nonlinear parameter estimation program UCODE, we estimate compaction time constants between 3.8 and 285 years. The Skv* values are estimated by linear estimation and range from 0 to almost 0.09. We find that subsidence observations over long time periods are necessary to constrain estimates of the large compaction time constants in Antelope Valley. The InSAR data used in this study cover only a three-year period, limiting their usefulness in constraining these time constants. This problem will be alleviated as more SAR data become available in the future or where time constants are small. By incorporating the resulting parameter estimates in the previously calibrated regional model of groundwater flow and land subsidence we can significantly improve the agreement between simulated and observed land subsidence both in terms of magnitude and spatial extent. The sum of weighted squared subsidence residuals, a common measure of model fit, was reduced by 73% with respect to the original model. However, the ability of the model to adequately reproduce the subsidence observed over only a few years is impaired by the fact that the simulated hydraulic heads over small time periods are often not representative of the actual aquifer hydraulic heads. Errors in the simulated hydraulic aquifer heads constitute the primary limitation of the approach presented here.
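
    The study estimates compaction time constants with the nonlinear parameter estimation program UCODE coupled to a full groundwater flow and subsidence model. As a much simpler illustration of what estimating a time constant from subsidence observations involves, the sketch below fits a single-exponential approximation to delayed interbed compaction after a step decline in head; the functional form, observation values, and head decline are illustrative assumptions, not the study's setup.

        # Minimal sketch (not the UCODE workflow of the study): least-squares estimate
        # of a compaction time constant tau, assuming delayed interbed compaction after
        # a step head decline dh is approximated by s(t) = Skv * dh * (1 - exp(-t/tau)).
        # Observation values below are hypothetical placeholders.
        import numpy as np
        from scipy.optimize import curve_fit

        t_years = np.array([1.0, 3.0, 6.0, 10.0, 20.0, 35.0])   # survey epochs (yr)
        s_obs = np.array([0.05, 0.14, 0.24, 0.33, 0.46, 0.55])  # observed subsidence (m)
        dh = 30.0                                               # assumed head decline (m)

        def subsidence(t, skv, tau):
            return skv * dh * (1.0 - np.exp(-t / tau))

        (skv_hat, tau_hat), _ = curve_fit(subsidence, t_years, s_obs, p0=(0.02, 10.0))
        print(f"Skv* ~ {skv_hat:.3f}, tau ~ {tau_hat:.1f} years")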

  3. Consistent negative response of US crops to high temperatures in observations and crop models

    Science.gov (United States)

    Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja

    2017-04-01

    High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
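
    The temperature response being evaluated is usually expressed as exposure above a threshold. The sketch below is a deliberately simplified illustration of the headline number quoted above (up to 6% yield loss per day above 30°C for rainfed maize and soybean); the daily temperatures are hypothetical, and a constant per-day penalty is a crude stand-in for the statistical and process-based models actually compared in the study.

        # Illustrative only: count days with Tmax above 30 deg C and apply the quoted
        # upper-bound penalty of 6% yield loss per such day (rainfed maize/soybean).
        # Daily temperature values are hypothetical placeholders.
        import numpy as np

        tmax = np.array([24.0, 28.5, 31.2, 33.0, 29.9, 34.1, 30.4])  # daily Tmax (deg C)
        days_above_30 = int(np.sum(tmax > 30.0))
        relative_yield = (1.0 - 0.06) ** days_above_30
        print(days_above_30, round(relative_yield, 3))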

  4. Modeling of cryoseismicity observed at the Fimbulisen Ice Shelf, East Antarctica

    Science.gov (United States)

    Hainzl, S.; Pirli, M.; Dahm, T.; Schweitzer, J.; Köhler, A.

    2017-12-01

    A source region of repetitive cryoseismic activity has been identified at the Fimbulisen ice shelf, in Dronning Maud Land, East Antarctica. The specific area is located at the outlet of the Jutulstraumen glacier, near the Kupol Moskovskij ice rise. A unique event catalog extending over 13 years, from 2003 to 2016, has been built based on waveform cross-correlation detectors and Hidden Markov Model classifiers. Phases of low seismicity rates alternate with intense activity intervals that exhibit a strong tidal modulation. We performed a detailed analysis and modeling of the more than 2000 events recorded between July and October 2013. The observations are characterized by a number of very clear signals: (i) the event rate follows both the neap-spring and the semi-diurnal ocean-tide cycle; (ii) recurrences have a characteristic time of approximately 8 minutes; (iii) magnitudes vary systematically both on short and long time scales; and (iv) the events migrate within short-time clusters. We use these observations to constrain the dynamic processes at work at this particular region of the Fimbulisen ice shelf. Our model assumes a local grounding of the ice shelf, where stick-slip motion occurs. We show that the observations can be reproduced by considering the modulation of the Coulomb-Failure stress by ocean tides.

  5. Observations that polar climate modelers use and want

    Science.gov (United States)

    Kay, J. E.; de Boer, G.; Hunke, E. C.; Bailey, D. A.; Schneider, D. P.

    2012-12-01

    Observations are essential for motivating and establishing improvement in the representation of polar processes within climate models. We believe that explicitly documenting the current methods used to develop and evaluate climate models with observations will help inform and improve collaborations between the observational and climate modeling communities. As such, we will present the current strategy of the Polar Climate Working Group (PCWG) to evaluate polar processes within the Community Earth System Model (CESM) using observations. Our presentation will focus primarily on PCWG evaluation of atmospheric, sea ice, and surface oceanic processes. In the future, we hope to expand to include land surface, deep ocean, and biogeochemical observations. We hope our presentation, and a related working document developed by the PCWG (https://docs.google.com/document/d/1zt0xParsFeMYhlihfxVJhS3D5nEcKb8A41JH0G1Ic-E/edit), inspire new and useful interactions that lead to improved climate model representation of polar processes relevant to polar climate.

  6. Confronting Lemaitre–Tolman–Bondi models with observational cosmology

    International Nuclear Information System (INIS)

    Garcia-Bellido, Juan; Haugbølle, Troels

    2008-01-01

    The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a ΛCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre–Tolman–Bondi (LTB) models with a series of observations, from type Ia supernovae to cosmic microwave background and baryon acoustic oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density Ω_M and the Hubble expansion rate H. We find that all observations can be accommodated within 1 sigma, for our models with four or five independent parameters. The best fit models have a χ² very close to that of the ΛCDM model. A general Fortran program for comparing LTB models with cosmological observations, that has been used to make the parameter scan in this paper, has been made public, and can be downloaded at http://www.phys.au.dk/~haugboel/software.shtml together with IDL routines for creating the likelihood plots. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein–de Sitter model.

  7. Observation-based Model of Evolution of the Lyman-Alpha Line Profile During the Solar Cycle

    Science.gov (United States)

    Kowalska-Leszczyńska, I.; Bzowski, M.; Sokol, J. M.; Kubiak, M. A.

    2017-12-01

    Recent studies of interstellar neutral (ISN) hydrogen observed by the Interstellar Boundary Explorer (IBEX) suggested that present understanding of the radiation pressure acting on hydrogen atoms in the heliosphere should be revised. There is a significant discrepancy between theoretical predictions of the ISN H signal based on the currently used model of the solar Lyman-alpha profile and the signal due to interstellar neutral H observed by IBEX-Lo in the energy range from 0.01 to 0.07 keV. We have developed a new model of evolution of the solar Lyman-alpha profile that takes into account all available observations of the full-disk solar Lyman-alpha profiles from SUMER/SOHO, provided by Lemaire et al. 2015, and covering practically the entire solar cycle. The model has three components that reproduce different features of the profile. The main shape of the emission line that is produced in the chromosphere is modelled by the Kappa function; the central reversal due to absorption in the transition region is modelled by the Gauss function; the spectral background is represented by the linear function. We verified that with this model, all of the individual profiles can be reproduced quite accurately. The profile for an arbitrary day is parameterized by just one parameter: either the composite Lyman-alpha flux available from LASP or, alternatively, the F10.7 solar radio flux; for the latter, we have noticed that the agreement of the model with the data is a little worse, but still within the uncertainties of the data. The new model features potentially important differences in comparison with the model by Tarnopolski & Bzowski 2007, which was based on a limited set of observations. In addition to the model itself, we will demonstrate some consequences resulting from this model on predicted distributions of interstellar hydrogen in the inner heliosphere, as well as on probabilities of survival calculated for heliospheric energetic neutral atoms.
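
    The three-component structure described above (a kappa-function emission core, a Gaussian central reversal, and a linear background) can be sketched as follows; the functional form assumed for the kappa component and all parameter values are illustrative, not the published fit.

        # Sketch of the three-component Lyman-alpha profile structure described above.
        # The kappa-function form and all parameter values are illustrative assumptions.
        import numpy as np

        def lya_profile(wav, wav0, amp, sigma, kappa, rev_depth, rev_sigma, b0, b1):
            core = amp * (1.0 + (wav - wav0) ** 2 / (kappa * sigma ** 2)) ** (-kappa)
            reversal = rev_depth * np.exp(-0.5 * ((wav - wav0) / rev_sigma) ** 2)
            background = b0 + b1 * (wav - wav0)
            return core - reversal + background

        wav = np.linspace(121.0, 122.2, 500)  # wavelength grid (nm)
        profile = lya_profile(wav, 121.567, 6.0, 0.03, 2.5, 1.5, 0.01, 0.1, 0.0)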

  8. Energetic protons at Mars: interpretation of SLED/Phobos-2 observations by a kinetic model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    2012-11-01

    Full Text Available Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 R_M). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars–solar wind interaction significantly modulated the Martian energetic particle environment.

  9. Energetic protons at Mars. Interpretation of SLED/Phobos-2 observations by a kinetic model

    Energy Technology Data Exchange (ETDEWEB)

    Kallio, E.; Alho, M.; Jarvinen, R.; Dyadechkin, S. [Finnish Meteorological Institute, Helsinki (Finland); McKenna-Lawlor, S. [Space Technology Ireland, Maynooth, Co. Kildare (Ireland); Afonin, V.V. [Space Research Institute, Moscow (Russian Federation)

    2012-07-01

    Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 R_M). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars-solar wind interaction significantly modulated the Martian energetic particle environment. (orig.)

  10. Observational constraints on models for giant planet formation

    International Nuclear Information System (INIS)

    Gautier, D.; Owen, T.; Arizona Univ., Tucson)

    1985-01-01

    Current information about element abundances and isotope ratios in the atmospheres of Jupiter, Saturn, Uranus, and Neptune is reviewed. The observed enhancement of C/H compared with the solar value favors models for the origin of these bodies that invoke the accretion and degassing of an ice-rock core followed by the accumulation of a solar composition envelope. Titan may represent an example of a core-forming planetesimal. Observations of D/H and other isotope ratios must be accommodated by these models in ways that are not yet completely clear. Some additional tests are suggested

  11. Enhancing reproducibility: Failures from Reproducibility Initiatives underline core challenges.

    Science.gov (United States)

    Mullane, Kevin; Williams, Michael

    2017-08-15

    Efforts to address reproducibility concerns in biomedical research include: initiatives to improve journal publication standards and peer review; increased attention to publishing methodological details that enable experiments to be reconstructed; guidelines on standards for study design, implementation, analysis and execution; meta-analyses of multiple studies within a field to synthesize a common conclusion; and the formation of consortia to adopt uniform protocols and internally reproduce data. Another approach to addressing reproducibility is Reproducibility Initiatives (RIs), well-intended, high-profile, systematically peer-vetted initiatives that are intended to replace the traditional process of scientific self-correction. Outcomes from the RIs reported to date have questioned the usefulness of this approach, particularly when the RI outcome differs from other independent self-correction studies that have reproduced the original finding. As a failed RI attempt is a single outcome distinct from the original study, it cannot provide any definitive conclusions, necessitating additional studies that the RI approach has neither the ability nor the intent of conducting, making it a questionable replacement for self-correction. A failed RI attempt also has the potential to damage the reputation of the author of the original finding. Reproduction is frequently confused with replication, an issue that is more than semantic, with the former denoting "similarity" and the latter an "exact copy" - an impossible outcome in research because of known and unknown technical, environmental and motivational differences between the original and reproduction studies. To date, the RI framework has negatively impacted efforts to improve reproducibility, confounding attempts to determine whether a research finding is real. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. A method to isolate bacterial communities and characterize ecosystems from food products: Validation and utilization as a reproducible chicken meat model.

    Science.gov (United States)

    Rouger, Amélie; Remenant, Benoit; Prévost, Hervé; Zagorec, Monique

    2017-04-17

    Influenced by production and storage processes and by seasonal changes, the diversity of meat product microbiotas can be highly variable. Because microbiotas influence meat quality and safety, characterizing and understanding their dynamics during processing and storage is important for proposing innovative and efficient storage conditions. Challenge tests are usually performed using meat from the same batch, inoculated at high levels with one or a few strains. Such experiments do not reflect the true microbial situation, and the global ecosystem is not taken into account. Our purpose was to constitute live stocks of chicken meat microbiotas to create standard and reproducible ecosystems. We searched for the best method to collect contaminating bacterial communities from chicken cuts and store them as frozen aliquots. We tested several methods to extract DNA from these stored communities for subsequent PCR amplification. We determined the best moment to collect bacteria in sufficient amounts during the product shelf life. Results showed that the rinsing method combined with the Mobio DNA extraction kit was the most reliable way to collect bacteria and obtain DNA for subsequent PCR amplification. Then, 23 different chicken meat microbiotas were collected using this procedure. Microbiota aliquots were stored at -80°C without appreciable loss of viability. Their characterization by culture methods confirmed the large variability (richness and abundance) of the bacterial communities present on chicken cuts. Four of these bacterial communities were used to estimate their ability to regrow on meat matrices. Challenge tests performed on sterile matrices showed that these microbiotas were successfully inoculated and could overgrow the natural microbiota of chicken meat. They can therefore be used to perform reproducible challenge tests that mimic a true meat ecosystem and make it possible to test the influence of various processing or storage conditions on complex meat

  13. Evaluating Climate Models: Should We Use Weather or Climate Observations?

    Science.gov (United States)

    Oglesby, R. J.; Rowe, C. M.; Maasch, K. A.; Erickson, D. J.; Hays, C.

    2009-12-01

    Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, AKA global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically-based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary condition. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real climate statistics'. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their ability to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach; however, as RCMs downscale to increasingly higher resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at a 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but the variability on seasonal and interannual timescales is better described. Furthermore, the value added by using daily weather observations as an evaluation tool increases with the model resolution.

  14. On prognostic models, artificial intelligence and censored observations.

    Science.gov (United States)

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistics or artificial intelligence can produce models that display all of these characteristics. The inability of modelling techniques to provide truly useful models has meant that interest in these models has remained largely academic. This in turn has resulted in only a very small percentage of the models that have been developed being deployed in practice. On the other hand, new modelling paradigms are continuously being proposed within the machine learning and statistical community, and claims, often based on inadequate evaluation, are made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used, and a framework based on evidence theory to obtain a prediction for the target example from the outcomes of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN), reflecting the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.

  15. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
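
    For reference, the rotation curve implied by an NFW halo, against which the mock DensePak velocity fields and tilted-ring fits are compared, has a standard closed form. The sketch below uses the common (V200, c) parametrization; the parameter values are illustrative only.

        # Standard NFW circular-velocity curve in the (V200, c) parametrization often
        # used when fitting LSB galaxy rotation curves; parameter values illustrative.
        import numpy as np

        def v_nfw(r_kpc, v200_kms, c, r200_kpc):
            mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)   # dimensionless mass profile
            x = r_kpc / r200_kpc
            return v200_kms * np.sqrt(mu(c * x) / (x * mu(c)))

        r = np.linspace(0.1, 30.0, 100)                      # radii (kpc)
        v = v_nfw(r, v200_kms=100.0, c=8.0, r200_kpc=100.0)  # steeply rising inner curve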

  16. External Influences on Modeled and Observed Cloud Trends

    Science.gov (United States)

    Marvel, Kate; Zelinka, Mark; Klein, Stephen A.; Bonfils, Celine; Caldwell, Peter; Doutriaux, Charles; Santer, Benjamin D.; Taylor, Karl E.

    2015-01-01

    Understanding the cloud response to external forcing is a major challenge for climate science. This crucial goal is complicated by intermodel differences in simulating present and future cloud cover and by observational uncertainty. This is the first formal detection and attribution study of cloud changes over the satellite era. Presented herein are CMIP5 (Coupled Model Intercomparison Project - Phase 5) model-derived fingerprints of externally forced changes to three cloud properties: the latitudes at which the zonally averaged total cloud fraction (CLT) is maximized or minimized, the zonal average CLT at these latitudes, and the height of high clouds at these latitudes. By considering simultaneous changes in all three properties, the authors define a coherent multivariate fingerprint of cloud response to external forcing and use models from phase 5 of CMIP (CMIP5) to calculate the average time to detect these changes. It is found that given perfect satellite cloud observations beginning in 1983, the models indicate that a detectable multivariate signal should have already emerged. A search is then made for signals of external forcing in two observational datasets: ISCCP (International Satellite Cloud Climatology Project) and PATMOS-x (Advanced Very High Resolution Radiometer (AVHRR) Pathfinder Atmospheres - Extended). The datasets are both found to show a poleward migration of the zonal CLT pattern that is incompatible with forced CMIP5 models. Nevertheless, a detectable multivariate signal is predicted by models over the PATMOS-x time period and is indeed present in the dataset. Despite persistent observational uncertainties, these results present a strong case for continued efforts to improve these existing satellite observations, in addition to planning for new missions.

  17. Scale-free distribution of Dead Sea sinkholes: Observations and modeling

    Science.gov (United States)

    Yizhaq, H.; Ish-Shalom, C.; Raz, E.; Ashkenazy, Y.

    2017-05-01

    There are currently more than 5500 sinkholes along the Dead Sea in Israel. These were formed due to the dissolution of subsurface salt layers as a result of the replacement of hypersaline groundwater by fresh brackish groundwater. This process has been associated with a sharp decline in the Dead Sea water level, currently more than 1 m/yr, resulting in a lower water table that has allowed the intrusion of fresher brackish water. We studied the distribution of the sinkhole sizes and found that it is scale free with a power law exponent close to 2. We constructed a stochastic cellular automata model to understand the observed scale-free behavior and the growth of the sinkhole area in time. The model consists of a lower salt layer and an upper soil layer in which cavities that develop in the lower layer lead to collapses in the upper layer. The model reproduces the observed power law distribution without involving the threshold behavior commonly associated with criticality.
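
    A scale-free size distribution with an exponent close to 2 is typically checked with the standard maximum-likelihood estimator for a continuous power law, sketched below; the sinkhole areas listed are hypothetical placeholders, not the surveyed data.

        # Sketch: maximum-likelihood estimate of a continuous power-law exponent,
        # alpha_hat = 1 + n / sum(ln(x_i / x_min)). Input areas are placeholders.
        import numpy as np

        areas = np.array([12.0, 35.0, 8.0, 150.0, 22.0, 600.0, 45.0, 90.0])  # m^2
        x_min = areas.min()
        alpha_hat = 1.0 + areas.size / np.sum(np.log(areas / x_min))
        print(round(alpha_hat, 2))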

  18. Additive Manufacturing: Reproducibility of Metallic Parts

    Directory of Open Access Journals (Sweden)

    Konda Gokuldoss Prashanth

    2017-02-01

    Full Text Available The present study deals with the properties of five different metals/alloys (Al-12Si, Cu-10Sn, and 316L: face-centered cubic structure; CoCrMo and commercially pure Ti (CP-Ti): hexagonal close-packed structure) fabricated by selective laser melting. The room temperature tensile properties of Al-12Si samples show good consistency in results within the experimental errors. Similar reproducible results were observed for sliding wear and corrosion experiments. The other metal/alloy systems also show repeatable tensile properties, with the tensile curves overlapping until the yield point. The curves may then follow the same path or show a marginal deviation (~10 MPa) until they reach the ultimate tensile strength, and a negligible difference in ductility levels (of ~0.3%) is observed between the samples. The results show that selective laser melting is a reliable fabrication method to produce metallic materials with consistent and reproducible properties.

  19. Reconciling Simulated and Observed Views of Clouds: MODIS, ISCCP, and the Limits of Instrument Simulators in Climate Models

    Science.gov (United States)

    Pincus, Robert; Platnick, Steven E.; Ackerman, Steve; Hemler, Richard; Hofmann, Patrick

    2011-01-01

    The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This work examines the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. We focus on the stark differences between MODIS and ISCCP observations of total cloudiness and the distribution of cloud optical thickness, and show that these differences can be traced to different approaches to marginal pixels, which MODIS excludes and ISCCP treats as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.

  20. THE CENTRAL REGION IN M100 - OBSERVATIONS AND MODELING

    NARCIS (Netherlands)

    KNAPEN, JH; BECKMAN, JE; HELLER, CH; SHLOSMAN; DEJONG, RS

    1995-01-01

    We present new high-resolution observations of the central region in the late-type spiral galaxy M100 (NGC 4321) supplemented by three-dimensional numerical modeling of stellar and gas dynamics, including star formation (SF). Near-infrared imaging has revealed a small bulge of 4'' effective

  1. The middle atmospheric response to short and long term solar UV variations: analysis of observations and 2D model results

    Science.gov (United States)

    Fleming, Eric L.; Chandra, Sushil; Jackman, Charles H.; Considine, David B.; Douglass, Anne R.

    1995-01-01

    We have investigated the middle atmospheric response to the 27-day and 11-yr solar UV flux variations at low to middle latitudes using a two-dimensional photochemical model. The model reproduced most features of the observed 27-day sensitivity and phase lag of the profile ozone response in the upper stratosphere and lower mesosphere, with a maximum sensitivity of +0.51% per 1% change in 205 nm flux. The model also reproduced the observed transition to a negative phase lag above 2 mb, reflecting the increasing importance with height of the solar modulated HO(x) chemistry on the ozone response above 45 km. The model revealed the general anti-correlation of ozone and solar UV at 65-75 km, and simulated strong UV responses of water vapor and HO(x) species in the mesosphere. Consistent with previous 1D model studies, the observed upper mesospheric positive ozone response averaged over ±40° was simulated only when the model water vapor concentrations above 75 km were significantly reduced relative to current observations. In agreement with observations, the model computed a low to middle latitude total ozone phase lag of +3 days and a sensitivity of +0.077% per 1% change in 205 nm flux for the 27-day solar variation, and a total ozone sensitivity of +0.27% for the 11-yr solar cycle. This factor of 3 sensitivity difference is indicative of the photochemical time constant for ozone in the lower stratosphere which is comparable to the 27-day solar rotation period but is much shorter than the 11-yr solar cycle.
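
    Sensitivities and phase lags of this kind are commonly estimated by lagged regression of relative ozone anomalies against relative 205 nm flux anomalies. The sketch below illustrates that generic procedure; it is not the analysis method of the paper, and the input series are synthetic placeholders.

        # Generic illustration: find the lag maximizing |correlation| between percent
        # ozone and percent 205 nm flux anomalies, and report the regression slope
        # (sensitivity, % ozone per % flux) at that lag. Input series are synthetic.
        import numpy as np

        def sensitivity_and_lag(ozone_pct, flux_pct, max_lag=10):
            best_r, best_lag, best_slope = 0.0, 0, 0.0
            for lag in range(max_lag + 1):
                o = ozone_pct[lag:]
                f = flux_pct[:len(flux_pct) - lag] if lag else flux_pct
                r = np.corrcoef(o, f)[0, 1]
                if abs(r) > abs(best_r):
                    best_r, best_lag = r, lag
                    best_slope = np.polyfit(f, o, 1)[0]
            return best_slope, best_lag

        rng = np.random.default_rng(0)
        days = np.arange(200)
        flux = 1.5 * np.sin(2 * np.pi * days / 27.0)                  # % flux anomaly
        ozone = 0.5 * np.sin(2 * np.pi * (days - 3) / 27.0) + 0.1 * rng.normal(size=200)
        print(sensitivity_and_lag(ozone, flux))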

  2. Made-to-measure modelling of observed galaxy dynamics

    Science.gov (United States)

    Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.

    2018-01-01

    Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is amongst the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
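
    At the core of any M2M scheme is a force-of-change equation that slowly adjusts particle weights until the model's observables match the data. The sketch below applies a Syer-Tremaine-style weight update to a binned density observable of a one-dimensional harmonic oscillator, in the spirit of the test problem mentioned above; it is an illustrative toy, not the authors' implementation, and all numerical choices are assumptions.

        # Toy M2M sketch: adapt particle weights so a binned density matches a target,
        # dw_i/dt = -eps * w_i * sum_j K_j(x_i) * (Y_j - Y_j_obs) / Y_j_obs,
        # where K_j is an indicator for bin j and orbits are a 1D harmonic oscillator.
        import numpy as np

        rng = np.random.default_rng(0)
        n_part, n_bins, eps, dt = 2000, 20, 1.0, 0.05
        amp = rng.uniform(0.1, 1.0, n_part)          # oscillation amplitudes
        phase = rng.uniform(0.0, 2 * np.pi, n_part)  # initial phases
        w = np.full(n_part, 1.0 / n_part)            # particle weights
        edges = np.linspace(-1.0, 1.0, n_bins + 1)

        # Target observable: binned density of a narrower "observed" system (placeholder).
        target, _ = np.histogram(0.5 * amp * np.cos(phase), bins=edges)
        target = np.maximum(target / n_part, 1e-6)

        for step in range(2000):
            x = amp * np.cos(phase + step * dt)                 # positions now
            idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
            model = np.bincount(idx, weights=w, minlength=n_bins)
            delta = (model - target) / target                   # relative residuals
            w *= np.exp(-eps * dt * delta[idx])                 # force of change
            w /= w.sum()                                        # keep weights normalized

        print(np.abs(np.bincount(idx, weights=w, minlength=n_bins) - target).max())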

  3. June 13, 2013 U.S. East Coast Meteotsunami: Comparing a Numerical Model With Observations

    Science.gov (United States)

    Wang, D.; Becker, N. C.; Weinstein, S.; Whitmore, P.; Knight, W.; Kim, Y.; Bouchard, R. H.; Grissom, K.

    2013-12-01

    On June 13, 2013, a tsunami struck the U.S. East Coast and caused several reported injuries. This tsunami occurred after a derecho moved offshore from North America into the Atlantic Ocean. The presence of this storm, the lack of a seismic source, and the fact that tsunami arrival times at tide stations and deep ocean-bottom pressure sensors cannot be attributed to a 'point-source' suggest this tsunami was caused by atmospheric forces, i.e., a meteotsunami. In this study we attempt to reproduce the observed phenomenon using a numerical model with idealized atmospheric pressure forcing resembling the propagation of the observed barometric anomaly. The numerical model was able to capture some observed features of the tsunami at some tide stations, including the time-lag between the time of pressure jump and the time of tsunami arrival. The model also captures the response at a deep ocean-bottom pressure gauge (DART 44402), including the primary wave and the reflected wave. There are two components of the oceanic response to the propagating pressure anomaly: the inverted barometer response and the dynamic response. We find the dynamic response over the deep ocean to be much smaller than the inverted barometer response. The time lag between the pressure jump and tsunami arrival at tide stations is due to the dynamic response: waves generated and/or reflected at the shelf-break propagate shoreward and amplify due to the shoaling effect. The evolution of the derecho over the deep ocean (propagation direction and intensity) is not well defined, however, because of the lack of data, so the forcing used for this study is somewhat speculative. Better definition of the pressure anomaly through increased observation or high resolution atmospheric models would improve meteotsunami forecast capabilities.
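
    The static (inverted barometer) component referred to above has a simple closed form, roughly one centimetre of sea-level rise per hectopascal of pressure drop; the dynamic component is what the numerical model adds on top. A minimal sketch:

        # Inverted barometer response: eta = -dP / (rho * g), about +1 cm per 1 hPa drop.
        RHO_SEAWATER = 1025.0   # kg/m^3
        G = 9.81                # m/s^2

        def inverted_barometer(dp_hpa):
            """Static sea-level change (m) for a surface-pressure anomaly in hPa."""
            return -dp_hpa * 100.0 / (RHO_SEAWATER * G)

        print(inverted_barometer(-3.0))  # a 3 hPa pressure drop raises sea level ~0.03 m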

  4. The synergistic use of models and observations: understanding the mechanisms behind observed biomass dynamics at 14 Amazonian field sites and the implications for future biomass change

    Science.gov (United States)

    Levine, N. M.; Galbraith, D.; Christoffersen, B. J.; Imbuzeiro, H. A.; Restrepo-Coupe, N.; Malhi, Y.; Saleska, S. R.; Costa, M. H.; Phillips, O.; Andrade, A.; Moorcroft, P. R.

    2011-12-01

    The Amazonian rainforests play a vital role in global water, energy and carbon cycling. The sensitivity of this system to natural and anthropogenic disturbances therefore has important implications for the global climate. Some global models have predicted large-scale forest dieback and the savannization of Amazonia over the next century [Meehl et al., 2007]. While several studies have demonstrated the sensitivity of dynamic global vegetation models to changes in temperature, precipitation, and dry season length [e.g. Galbraith et al., 2010; Good et al., 2011], the ability of these models to accurately reproduce ecosystem dynamics of present-day transitional or low biomass tropical forests has not been demonstrated. A model-data intercomparison was conducted with four state-of-the-art terrestrial ecosystem models to evaluate the ability of these models to accurately represent structure, function, and long-term biomass dynamics over a range of Amazonian ecosystems. Each modeling group conducted a series of simulations for 14 sites including mature forest, transitional forest, savannah, and agricultural/pasture sites. All models were run using standard physical parameters and the same initialization procedure. Model results were compared against forest inventory and dendrometer data in addition to flux tower measurements. While the models compared well against field observations for the mature forest sites, significant differences were observed between predicted and measured ecosystem structure and dynamics for the transitional forest and savannah sites. The length of the dry season and soil sand content were good predictors of model performance. In addition, for the big leaf models, model performance was highest for sites dominated by late successional trees and lowest for sites with predominantly early and mid-successional trees. This study provides insight into tropical forest function and sensitivity to environmental conditions that will aid in predictions of the

  5. Testing Reproducibility in Earth Sciences

    Science.gov (United States)

    Church, M. A.; Dudill, A. R.; Frey, P.; Venditti, J. G.

    2017-12-01

    Reproducibility represents how closely the results of independent tests agree when undertaken using the same materials but different conditions of measurement, such as operator, equipment or laboratory. The concept of reproducibility is fundamental to the scientific method as it prevents the persistence of incorrect or biased results. Yet currently the production of scientific knowledge emphasizes rapid publication of previously unreported findings, a culture that has emerged from pressures related to hiring, publication criteria and funding requirements. Awareness and critique of the disconnect between how scientific research should be undertaken, and how it actually is conducted, have been prominent in biomedicine for over a decade, with the fields of economics and psychology more recently joining the conversation. The purpose of this presentation is to stimulate the conversation in earth sciences where, despite implicit evidence in widely accepted classifications, formal testing of reproducibility is rare. As a formal test of reproducibility, two sets of experiments were undertaken with the same experimental procedure, at the same scale, but in different laboratories. Using narrow, steep flumes and spherical glass beads, grain size sorting was examined by introducing fine sediment of varying size and quantity into a mobile coarse bed. The general setup was identical, including flume width and slope; however, there were some variations in the materials, construction and lab environment. Comparison of the results includes examination of the infiltration profiles, sediment mobility and transport characteristics. The physical phenomena were qualitatively reproduced but not quantitatively replicated. Reproduction of results encourages more robust research and reporting, and facilitates exploration of possible variations in data in various specific contexts. Following the lead of other fields, testing of reproducibility can be incentivized through changes to journal

  6. Three-dimensional Kinetic Pulsar Magnetosphere Models: Connecting to Gamma-Ray Observations

    Science.gov (United States)

    Kalapotharakos, Constantinos; Brambilla, Gabriele; Timokhin, Andrey; Harding, Alice K.; Kazanas, Demosthenes

    2018-04-01

    We present three-dimensional (3D) global kinetic pulsar magnetosphere models, where the charged particle trajectories and the corresponding electromagnetic fields are treated self-consistently. For our study, we have developed a Cartesian 3D relativistic particle-in-cell code that incorporates radiation reaction forces. We describe our code and discuss the related technical issues, treatments, and assumptions. Injecting particles up to large distances in the magnetosphere, we apply arbitrarily low to high particle injection rates, and obtain an entire spectrum of solutions from close to the vacuum-retarded dipole to close to the force-free (FF) solution, respectively. For high particle injection rates (close to FF solutions), significant accelerating electric field components are confined only near the equatorial current sheet outside the light cylinder. A judicious interpretation of our models allows the particle emission to be calculated, and consequently, the corresponding realistic high-energy sky maps and spectra to be derived. Using model parameters that cover the entire range of spin-down powers of Fermi young and millisecond pulsars, we compare the corresponding model γ-ray light curves, cutoff energies, and total γ-ray luminosities with those observed by Fermi to discover a dependence of the particle injection rate, ℱ, on the spin-down power, Ė, indicating an increase of ℱ with Ė. Our models, guided by Fermi observations, provide field structures and particle distributions that are not only consistent with each other but also able to reproduce a broad range of the observed γ-ray phenomenologies of both young and millisecond pulsars.

  7. Obs4MIPS: Satellite Observations for Model Evaluation

    Science.gov (United States)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available, and being developed for the CMIP6 experiments.

  8. An observer model for quantifying panning artifacts in digital pathology

    Science.gov (United States)

    Avanaki, Ali R. N.; Espig, Kathryn S.; Xthona, Albert; Lanciault, Christian; Kimpe, Tom R. L.

    2017-03-01

    Typically, pathologists pan from one region of a slide to another, choosing areas of interest for closer inspection. Due to finite frame rate and imperfect zero-order hold reconstruction (i.e., the non-zero time to reach the target brightness after a change in pixel drive), panning in whole slide images (WSI) causes visual artifacts. It is important to study the impact of such artifacts since research suggests that 49% of navigation is conducted in low-power/overview with digital pathology (Molin et al., Histopathology 2015). In this paper, we explain what types of medical information may be harmed by panning artifacts, propose a method to simulate panning artifacts, and design an observer model to predict the impact of panning artifacts on typical human observers' performance in basic diagnostically relevant visual tasks. The proposed observer model is based on derivation of perceived object border maps from luminance and chrominance information and may be tuned to account for visual acuity of the human observer to be modeled. Our results suggest that increasing the contrast (e.g., using a wide gamut display) with a slow response panel may not mitigate the panning artifacts, which mostly affect visual tasks involving spatial discrimination of objects (e.g., normal vs abnormal structure, cell type and spatial relationships between them, and low-power nuclear morphology), and that the panning artifacts worsen with increasing panning speed. The proposed methods may be used as building blocks in an automatic WSI quality assessment framework.

  9. Laguerre-Gauss basis functions in observer models

    Science.gov (United States)

    Burgess, Arthur E.

    2003-05-01

    Observer models based on linear classifiers with basis functions (channels) are useful for evaluation of detection performance with medical images. They allow spatial domain calculations with a covariance matrix of tractable size. The term "channelized Fisher-Hotelling observer" will be used here. It is also called the "channelized Hotelling observer" model. There are an infinite number of basis function (channel) sets that could be employed. Examples of channel sets that have been used include: difference of Gaussian (DOG) filters, difference of Mesa (DOM) filters and Laguerre-Gauss (LG) basis functions. Another option, sums of LG functions (LGS), will also be presented here. This set has the advantage of having no DC response. The effect of the number of images used to estimate model observer performance will be described, for both filtered 1/f³ noise and GE digital mammogram backgrounds. Finite sample image sets introduce both bias and variance to the estimate. The results presented here agree with previous work on linear classifiers. The LGS basis set gives a small but statistically significant reduction in bias. However, this may not be of much practical benefit. Finally, the effect of varying the number of basis functions included in the set will be addressed. It was found that four LG bases or three LGS bases are adequate.
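
    As a concrete illustration of such a channel set, the sketch below builds Laguerre-Gauss channel templates of the form u_n(r) ∝ exp(-π r²/a²) L_n(2π r²/a²), the form commonly quoted in the model-observer literature. The image size, channel width a and number of channels are illustrative assumptions, not the values used in this study.

    ```python
    import numpy as np
    from scipy.special import eval_laguerre

    def laguerre_gauss_channels(size=64, a=14.0, n_channels=4):
        """Return an array of shape (n_channels, size*size) of LG channel templates."""
        y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
        r2 = x**2 + y**2
        gauss = np.exp(-np.pi * r2 / a**2)
        channels = []
        for n in range(n_channels):
            u = (np.sqrt(2.0) / a) * gauss * eval_laguerre(n, 2.0 * np.pi * r2 / a**2)
            channels.append(u.ravel())
        return np.array(channels)

    # Channelized data: v = T @ g, where g is a vectorized image and T the channel matrix;
    # the channel outputs v are what the Hotelling observer operates on.
    T = laguerre_gauss_channels()
    g = np.random.rand(64 * 64)      # stand-in for an image with a structured background
    v = T @ g
    print(v)
    ```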

  10. Southeast Atmosphere Studies: learning from model-observation syntheses

    Science.gov (United States)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-02-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  11. Southeast Atmosphere Studies: learning from model-observation syntheses

    Directory of Open Access Journals (Sweden)

    J. Mao

    2018-02-01

    Full Text Available Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and

  12. Modelling and observing urban climate in the Netherlands

    International Nuclear Information System (INIS)

    Van Hove, B.; Steeneveld, G.J.; Heusinkveld, B.; Holtslag, B.; Jacobs, C.; Ter Maat, H.; Elbers, J.; Moors, E.

    2011-06-01

    The main aims of the present study are: (1) to evaluate the performance of two well-known mesoscale NWP (numerical weather prediction) models coupled to a UCM (Urban Canopy Model), and (2) to develop a proper measurement strategy for obtaining meteorological data that can be used in model evaluation studies. We choose the mesoscale models WRF (Weather Research and Forecasting Model) and RAMS (Regional Atmospheric Modeling System), respectively, because the partners in the present project have extensive expertise with these models. In addition WRF and RAMS have been successfully used in the meteorology and climate research communities for various purposes, including weather prediction and land-atmosphere interaction research. Recently, state-of-the-art UCMs were embedded within the land surface scheme of the respective models, in order to better represent the exchange of heat, momentum, and water vapour in the urban environment. Key questions addressed here are: What is the general model performance with respect to the urban environment?; How can useful observational data be obtained that allow sensible validation and further parameterization of the models?; and Can the models be easily modified to simulate the urban climate under Dutch climatic conditions, urban configuration and morphology? Chapter 2 reviews the available Urban Canopy Models; we discuss their theoretical basis, the different representations of the urban environment, the required input and the output. Much of the information was obtained from the Urban Surface Energy Balance: Land Surface Scheme Comparison project (PILPS URBAN, PILPS stands for Project for Inter-comparison of Land-Surface Parameterization Schemes). This project started in March 2008 and was coordinated by the Department of Geography, King's College London. In order to test the performance of our models we participated in this project. Chapter 3 discusses the main results of the first phase of PILPS URBAN. A first

  13. ITK: enabling reproducible research and open science

    Science.gov (United States)

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitates its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, 0.46. PMID:24600387

  14. ITK: Enabling Reproducible Research and Open Science

    Directory of Open Access Journals (Sweden)

    Matthew Michael McCormick

    2014-02-01

    Full Text Available Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitates its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, 0.46.

  15. CrowdWater - Can people observe what models need?

    Science.gov (United States)

    van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.

    2017-12-01

    CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven rivers of different sizes in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain
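
    The conversion step described above (streamflow to stream level classes) can be sketched as a simple thresholding of the flow series; the percentile-based class boundaries and the synthetic data below are assumptions for illustration, not the CrowdWater project's actual processing.

    ```python
    import numpy as np

    def to_level_classes(streamflow, percentiles=(20, 40, 60, 80)):
        """Map a streamflow series to integer stream level classes 0..len(percentiles)."""
        bounds = np.percentile(streamflow, percentiles)
        return np.digitize(streamflow, bounds)

    rng = np.random.default_rng(1)
    q = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # synthetic daily streamflow
    classes = to_level_classes(q)                      # e.g. calibrate HBV against these classes
    print(np.bincount(classes))                        # how often each class occurs
    ```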

  16. Observations and Models of Highly Intermittent Phytoplankton Distributions

    Science.gov (United States)

    Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu

    2014-01-01

    The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow phytoplankton data to be measured at ever greater resolution, revealing high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misrepresent the reality of planktonic ecosystems, especially at the microscale level. To account for the intermittency of variables, this work applies a new modelling approach to the planktonic ecosystem, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. We have also found a critical value of the variance of the overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea of the importance of the fluctuating parts of model variables and of when to use the closure approach. Comparisons of plots of mean versus standard deviation of phytoplankton at different depths, obtained using this new approach, with real observations show good agreement. PMID:24787740
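
    The closure idea can be made concrete with a generic Reynolds-type decomposition of a bilinear uptake term; this is a textbook illustration under stated assumptions, not the paper's specific equations. Writing each variable as a mean plus a fluctuation,

    \[
    N = \bar{N} + N', \qquad P = \bar{P} + P', \qquad
    \langle NP \rangle = \bar{N}\,\bar{P} + \langle N'P' \rangle ,
    \]

    so a mean-field equation driven by an uptake term proportional to \(\langle NP \rangle\) differs from the conventional model by the covariance \(\langle N'P' \rangle\); setting this covariance to zero recovers the non-closure result, which is consistent with the critical-variance behaviour described above.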

  17. Solar spectral irradiance variability in cycle 24: observations and models

    Science.gov (United States)

    Marchenko, Sergey V.; DeLand, Matthew T.; Lean, Judith L.

    2016-12-01

    Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the ongoing cycle 24. We supplement the OMI data with concurrent observations from the Global Ozone Monitoring Experiment-2 (GOME-2) and Solar Radiation and Climate Experiment (SORCE) instruments and find fair-to-excellent, depending on wavelength, agreement among the observations, and predictions of the Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models.

  18. Solar spectral irradiance variability in cycle 24: observations and models

    Directory of Open Access Journals (Sweden)

    Marchenko Sergey V.

    2016-01-01

    Full Text Available Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the ongoing cycle 24. We supplement the OMI data with concurrent observations from the Global Ozone Monitoring Experiment-2 (GOME-2) and Solar Radiation and Climate Experiment (SORCE) instruments and find fair-to-excellent, depending on wavelength, agreement among the observations, and predictions of the Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models.

  19. Observational constraints from models of close binary evolution

    International Nuclear Information System (INIS)

    Greve, J.P. de; Packet, W.

    1984-01-01

    The evolution of a system of 9 solar masses + 5.4 solar masses is computed from Zero Age Main Sequence through an early case B of mass exchange, up to the second phase of mass transfer after core helium burning. Both components are calculated simultaneously. The evolution is divided into several physically different phases. The characteristics of the models in each of these phases are transformed into corresponding 'observable' quantities. The outlook of the system for photometric observations is discussed, for an idealized case. The influence of the mass of the loser and the initial mass ratio is considered. (Auth.)

  20. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2012-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m∙min-1 cutting speed and 0....

  1. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2014-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m•min−1 cutting speed and 0....

  2. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  3. Efficient and reproducible identification of mismatch repair deficient colon cancer

    DEFF Research Database (Denmark)

    Joost, Patrick; Bendahl, Pär-Ola; Halvarsson, Britta

    2013-01-01

    BACKGROUND: The identification of mismatch-repair (MMR) defective colon cancer is clinically relevant for diagnostic, prognostic and potentially also for treatment predictive purposes. Preselection of tumors for MMR analysis can be obtained with predictive models, which need to demonstrate ease...... of application and favorable reproducibility. METHODS: We validated the MMR index for the identification of prognostically favorable MMR deficient colon cancers and compared performance to 5 other prediction models. In total, 474 colon cancers diagnosed ≥ age 50 were evaluated with correlation between...... and efficiently identifies MMR defective colon cancers with high sensitivity and specificity. The model shows stable performance with low inter-observer variability and favorable performance when compared to other MMR predictive models....

  4. Bad Behavior: Improving Reproducibility in Behavior Testing.

    Science.gov (United States)

    Andrews, Anne M; Cheng, Xinyi; Altieri, Stefanie C; Yang, Hongyan

    2018-01-24

    Systems neuroscience research is increasingly possible through the use of integrated molecular and circuit-level analyses. These studies depend on the use of animal models and, in many cases, molecular and circuit-level analyses associated with genetic, pharmacologic, epigenetic, and other types of environmental manipulations. We illustrate typical pitfalls resulting from poor validation of behavior tests. We describe experimental designs and enumerate controls needed to improve reproducibility in the investigation and reporting of behavioral phenotypes.

  5. Multi-observation integrated model of troposphere - current status

    Science.gov (United States)

    Wilgan, Karina; Rohm, Witold; Bosy, Jarosław; Sierny, Jan; Kapłon, Jan; Hadaś, Tomasz; Hordyniec, Paweł

    2014-05-01

    The Global Navigation Satellite Systems (GNSS) and meteorological observation systems were developed in the past decades to address separate challenges and were used by different communities. Currently, the interdependence between meteorology and GNSS processing is growing, providing both communities with incentives, data and research challenges. The GNSS community uses meteorological observations as well as Numerical Weather Prediction (NWP) models to reduce the troposphere impact on signal propagation (i.e. to eliminate the tropospheric delay). On the other hand, the meteorology community is assimilating GNSS observations into weather forecasting, nowcasting and climate studies. To seamlessly use observations from both sides of the GNSS and meteorology spectra, the data have to be interoperable. In this study we present the current status of establishing an integrated model of the troposphere. We investigated and compared a number of meteorological and GNSS data sources that are going to be integrated into the troposphere model with high temporal and spatial resolution. The integrated model will provide values of meteorological and GNSS parameters at any point and any time with known accuracy. The first step in building this model is to inter-compare all available data sources and to establish the accuracy of the parameters. Three main data sources were compared: ground-based GNSS products on ASG-EUPOS stations, the NWP model COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) and meteorological parameters from three kinds of stations - EUREF Permanent Network (EPN) stations, meteorological sensors at airports and synoptic stations of the Institute of Meteorology and Water Management. Data were provided with different temporal and spatial resolutions, so they had to be interpolated prior to inter-comparison. Afterwards, the quality of the data was established. The results show that the NWP model data quality is 4 hPa in terms of air pressure, 2 hPa in terms of water vapor partial pressure, and 6 K in

  6. New Cosmological Model and Its Implications on Observational Data Interpretation

    Directory of Open Access Journals (Sweden)

    Vlahovic Branislav

    2013-09-01

    Full Text Available The paradigm of ΛCDM cosmology works impressively well and, with the concept of inflation, it explains the universe after the time of decoupling. However, there are still a few concerns: after much effort there has been no detection of dark matter, and there are significant problems in the theoretical description of dark energy. We will consider a variant of the cosmological spherical shell model within the FRW formalism and will compare it with the standard ΛCDM model. We will show that our new topological model satisfies cosmological principles and is consistent with all observable data, but that it may require a new interpretation of some data. Constraints imposed on the model by the supernova luminosity distance and CMB data will be considered, for instance the range for the size and the allowed thickness of the shell. In this model the propagation of light is confined along the shell, which implies that the observed CMB originated from a single point or a limited region of space. This allows the uniformity of the CMB to be interpreted without an inflation scenario. In addition, this removes any constraints on the uniformity of the universe at the early stage and opens the possibility that the universe was not uniform and that the creation of galaxies and large structures is due to inhomogeneities that originated in the Big Bang.

  7. A sliding mode observer for hemodynamic characterization under modeling uncertainties

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    This paper addresses the case of physiological state reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to partial knowledge of the so-called balloon model describing the hemodynamic behavior of the brain. To overcome this difficulty, a High Order Sliding Mode observer is applied to the balloon system, where the unknown coupling is considered as an internal perturbation. The effectiveness of the proposed method is illustrated through a set of synthetic data that mimic fMRI experiments.
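
    For readers unfamiliar with this class of estimators, the sketch below shows a generic second-order (super-twisting style) sliding mode observer for a toy system x1' = x2, x2' = f(x) + d(t) with only x1 measured. The gains, nominal dynamics and perturbation are illustrative assumptions and do not reproduce the balloon-model observer of the paper.

    ```python
    import numpy as np

    dt, T = 1e-3, 10.0
    k1, k2 = 4.0, 8.0                         # observer gains (assumed, not from the paper)

    x = np.array([1.0, 0.0])                  # true state of the toy plant
    xh = np.array([0.0, 0.0])                 # observer estimate
    for i in range(int(T / dt)):
        t = i * dt
        d = 0.5 * np.sin(2.0 * t)             # unknown bounded perturbation
        f = -x[0] - 0.5 * x[1]                # known nominal dynamics
        x = x + dt * np.array([x[1], f + d])  # plant integration (explicit Euler)

        e1 = x[0] - xh[0]                     # only x1 is assumed measured
        fh = -xh[0] - 0.5 * xh[1]
        xh = xh + dt * np.array([xh[1] + k1 * np.sqrt(abs(e1)) * np.sign(e1),
                                 fh + k2 * np.sign(e1)])

    print("final estimation error:", x - xh)
    ```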

  8. Modelling shear wave splitting observations from Wellington, New Zealand

    Science.gov (United States)

    Marson-Pidgeon, Katrina; Savage, Martha K.

    2004-05-01

    Frequency-dependent anisotropy was previously observed at the permanent broad-band station SNZO, South Karori, Wellington, New Zealand. This has important implications for the interpretation of measurements in other subduction zones and hence for our understanding of mantle flow. This motivated us to make further splitting measurements using events recorded since the previous study and to develop a new modelling technique. Thus, in this study we have made 67 high-quality shear wave splitting measurements using events recorded at the SNZO station spanning a 10-yr period. This station is the only one operating in New Zealand for longer than 2 yr. Using a combination of teleseismic SKS and S phases and regional ScS phases provides good azimuthal coverage, allowing us to undertake detailed modelling. The splitting measurements indicate that in addition to the frequency dependence observed previously at this station, there are also variations with propagation and initial polarization directions. The fast polarization directions range between 2° and 103°, and the delay times range between 0.75 s and 3.05 s. These ranges are much larger than observed previously at SNZO or elsewhere in New Zealand. Because of the observed frequency dependence we measure the dominant frequency of the phase used to make the splitting measurement, and take this into account in the modelling. We fit the fast polarization directions fairly well with a two-layer anisotropic model with horizontal axes of symmetry. However, such a model does not fit the delay times or explain the frequency dependence. We have developed a new inversion method which allows for an inclined axis of symmetry in each of the two layers. However, applying this method to SNZO does not significantly improve the fit over a two-layer model with horizontal symmetry axes. We are therefore unable to explain the frequency dependence or large variation in delay time values with multiple horizontal layers of anisotropy, even

  9. Observational constraints on tachyonic chameleon dark energy model

    Science.gov (United States)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has been recently shown that tachyonic chameleon model of dark energy in which tachyon scalar field non-minimally coupled to the matter admits stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernova (SN Ia) and Baryon Acoustic oscillations to place constraints on the model parameters. In our analysis we consider in general exponential and non-exponential forms for the non-minimal coupling function and tachyonic potential and show that the scenario is compatible with observations.

  10. Model dependence of isospin sensitive observables at high densities

    International Nuclear Information System (INIS)

    Guo, Wen-Mei; Yong, Gao-Chan; Wang, Yongjia; Li, Qingfeng; Zhang, Hongfei; Zuo, Wei

    2013-01-01

    Within two different frameworks of isospin-dependent transport models, i.e., the Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron to proton ratio of free nucleons, the π−/π+ ratio, as well as the isospin-sensitive transverse and elliptic flows given by the two transport models with their "best settings" all have obvious differences. The discrepancy in the numerical value of the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, and the discrepancies in the numerical values of the charged π−/π+ ratio and the isospin-sensitive flows mainly originate from different isospin-dependent nucleon–nucleon cross sections. These demonstrations call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists to pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations scientifically

  11. Observations in particle physics: from two neutrinos to standard model

    International Nuclear Information System (INIS)

    Lederman, L.M.

    1990-01-01

    Experiments, which have made their contribution to creation of the standard model, are discussed. Results of observations on the following concepts: long-lived neutral V-particles, violation of preservation of parity and charge invariance in meson decays, reaction with high-energy neutrino and existence of neutrino of two types, partons and dynamic quarks, dimuon resonance at 9.5 GeV in 400 GeV-proton-nucleus collisions, are considered

  12. The link between laboratory/field observations and models

    International Nuclear Information System (INIS)

    Cole, C.R.; Foley, M.G.

    1986-01-01

    The various linkages in system performance assessments that integrate disposal program elements must be understood. The linkage between model development and field/laboratory observations is described as the iterative program of site and system characterization for development of an observational-confirmatory data base. This data base is designed to develop, improve, and support conceptual models for site and system behavior. The program consists of data gathering and experiments to demonstrate understanding at various spatial and time scales and degrees of complexity. Understanding and accounting for the decreasing characterization certainty that arises with increasing space and time scales is an important aspect of the link between models and observations. The performance allocation process for setting performance goals and confidence levels, coupled with a performance assessment approach that provides these performance and confidence estimates, will determine when sufficient characterization has been achieved. At each iteration, performance allocation goals are reviewed and revised as necessary. The updated data base and appropriate performance assessment tools and approaches are utilized to identify and design additional tests and data needs necessary to meet current performance allocation goals

  13. Solar irradiance variability: a six-year comparison between SORCE observations and the SATIRE model

    Science.gov (United States)

    Ball, W. T.; Unruh, Y. C.; Krivova, N. A.; Solanki, S.; Harder, J. W.

    2011-06-01

    Aims: We investigate how well modeled solar irradiances agree with measurements from the SORCE satellite, both for total solar irradiance and broken down into spectral regions on timescales of several years. Methods: We use the SATIRE model and compare modeled total solar irradiance (TSI) with TSI measurements over the period 25 February 2003 to 1 November 2009. Spectral solar irradiance over 200-1630 nm is compared with the SIM instrument on SORCE over the period 21 April 2004 to 1 November 2009. We discuss the overall change in flux and the rotational and long-term trends during this period of decline from moderate activity to the recent solar minimum in ~10 nm bands and for three spectral regions of significant interest: the UV integrated over 200-300 nm, the visible over 400-691 nm and the IR between 972 and 1630 nm. Results: The model captures 97% of the observed TSI variation. This is comparable to the level at which TSI detectors agree with each other during the period considered. In the spectral comparison, rotational variability is well reproduced, especially between 400 and 1200 nm. The magnitude of change in the long-term trends is many times larger in SIM at almost all wavelengths while trends in SIM oppose SATIRE in the visible between 500 and 700 nm and again between 1000 and 1200 nm. We discuss the remaining issues with both SIM data and the identified limits of the model, particularly with the way facular contributions are dealt with, the limit of flux identification in MDI magnetograms during solar minimum and the model atmospheres in the IR employed by SATIRE. However, it is unlikely that improvements in these areas will significantly enhance the agreement in the long-term trends. This disagreement implies that some mechanism other than surface magnetism is causing SSI variations, in particular between 2004 and 2006, if the SIM data are correct. Since SATIRE was able to reproduce UV irradiance between 1991 and 2002 from UARS, either the solar mechanism for SSI

  14. GeoTrust Hub: A Platform For Sharing And Reproducing Geoscience Applications

    Science.gov (United States)

    Malik, T.; Tarboton, D. G.; Goodall, J. L.; Choi, E.; Bhatt, A.; Peckham, S. D.; Foster, I.; Ton That, D. H.; Essawy, B.; Yuan, Z.; Dash, P. K.; Fils, G.; Gan, T.; Fadugba, O. I.; Saxena, A.; Valentic, T. A.

    2017-12-01

    Recent requirements of scholarly communication emphasize the reproducibility of scientific claims. Text-based research papers are considered a poor medium for establishing reproducibility. Papers must be accompanied by "research objects", aggregations of digital artifacts that, together with the paper, provide an authoritative record of a piece of research. We will present GeoTrust Hub (http://geotrusthub.org), a platform for creating, sharing, and reproducing reusable research objects. GeoTrust Hub provides tools for scientists to create 'geounits'--reusable research objects. Geounits are self-contained, annotated, and versioned containers that describe and package computational experiments in an efficient and light-weight manner. Geounits can be shared on public repositories such as HydroShare and FigShare and, using their respective APIs, reproduced on provisioned clouds. The latter feature enables science applications to have a lifetime beyond sharing, wherein they can be independently verified and trust established as they are repeatedly reused. Through research use cases from several geoscience laboratories across the United States, we will demonstrate how the tools provided by GeoTrust Hub, along with HydroShare as its public repository for geounits, are advancing the state of reproducible research in the geosciences. For each use case, we will address different computational reproducibility requirements. Our first use case will be an example of setup reproducibility, which enables a scientist to set up and reproduce an output from a model with complex configuration and development environments. Our second use case will be an example of algorithm/data reproducibility, wherein a shared data science model/dataset can be substituted with an alternate one to verify model output results, and finally an example of interactive reproducibility, in which an experiment is dependent on specific versions of data to produce the result. Toward this we will use software and data

  15. Towards Improving Sea Ice Predictabiity: Evaluating Climate Models Against Satellite Sea Ice Observations

    Science.gov (United States)

    Stroeve, J. C.

    2014-12-01

    The last four decades have seen a remarkable decline in the spatial extent of the Arctic sea ice cover, presenting both challenges and opportunities to Arctic residents, government agencies and industry. After the record low extent in September 2007, efforts have increased to improve seasonal, decadal-scale and longer-term predictions of the sea ice cover. Coupled global climate models (GCMs) consistently project that if greenhouse gas concentrations continue to rise, the eventual outcome will be a complete loss of the multiyear ice cover. However, confidence in these projections depends on the models' ability to reproduce features of the present-day climate. Comparison between models participating in the World Climate Research Programme Coupled Model Intercomparison Project Phase 5 (CMIP5) and observations of sea ice extent and thickness shows that (1) historical trends from 85% of the model ensemble members remain smaller than observed, and (2) spatial patterns of sea ice thickness are poorly represented in most models. Part of the explanation lies with a failure of models to represent details of the mean atmospheric circulation pattern that governs the transport and spatial distribution of sea ice. These results raise concerns regarding the ability of CMIP5 models to realistically represent the processes driving the decline of Arctic sea ice and to project the timing of when a seasonally ice-free Arctic may be realized. On shorter time-scales, seasonal sea ice prediction faces the challenge of predicting the sea ice extent from Arctic conditions a few months to a year in advance. Efforts such as the Sea Ice Outlook (SIO) project, originally organized through the Study of Environmental Arctic Change (SEARCH) and now managed by the Sea Ice Prediction Network project (SIPN), synthesize predictions of the September sea ice extent based on a variety of approaches, including heuristic, statistical and dynamical modeling. Analysis of SIO contributions reveals that when the

  16. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighter fire hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance. The top-performing group was used for the remainder of the modeling. Models were developed which related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using logistic regression, as well as probit regression. Probit regression was found to yield superior fits and showed that models with not only 2nd order parameter interactions, but also 3rd order parameter interactions performed the best.
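
    The fitting step described above can be sketched with an off-the-shelf probit (and, for comparison, logistic) regression; the image-quality predictors, the interaction term and the synthetic responses below are placeholders for illustration, not the NIST perception data.

    ```python
    import numpy as np
    from scipy.stats import norm
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    contrast = rng.uniform(0.2, 1.0, n)
    noise = rng.uniform(0.0, 0.3, n)
    # Design matrix with a constant and a 2nd-order (contrast x noise) interaction term.
    X = sm.add_constant(np.column_stack([contrast, noise, contrast * noise]))
    p_true = norm.cdf(-0.5 + 2.5 * contrast - 4.0 * noise)   # synthetic detection probabilities
    y = rng.binomial(1, p_true)                              # simulated hit/miss responses

    probit_fit = sm.Probit(y, X).fit(disp=False)
    logit_fit = sm.Logit(y, X).fit(disp=False)
    print(probit_fit.params)
    print("log-likelihoods (probit vs logit):", probit_fit.llf, logit_fit.llf)
    ```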

  17. General Description of Fission Observables - JEFF Report 24. GEF Model

    International Nuclear Information System (INIS)

    Schmidt, Karl-Heinz; Jurado, Beatriz; Amouroux, Charlotte

    2014-06-01

    The Joint Evaluated Fission and Fusion (JEFF) Project is a collaborative effort among the member countries of the OECD Nuclear Energy Agency (NEA) Data Bank to develop a reference nuclear data library. The JEFF library contains sets of evaluated nuclear data, mainly for fission and fusion applications; it contains a number of different data types, including neutron and proton interaction data, radioactive decay data, fission yield data and thermal scattering law data. The General fission (GEF) model is based on novel theoretical concepts and ideas developed to model low energy nuclear fission. The GEF code calculates fission-fragment yields and associated quantities (e.g. prompt neutron and gamma) for a large range of nuclei and excitation energy. This opens up the possibility of a qualitative step forward to improve further the JEFF fission yields sub-library. This report describes the GEF model which explains the complex appearance of fission observables by universal principles of theoretical models and considerations on the basis of fundamental laws of physics and mathematics. The approach reveals a high degree of regularity and provides a considerable insight into the physics of the fission process. Fission observables can be calculated with a precision that comply with the needs for applications in nuclear technology. The relevance of the approach for examining the consistency of experimental results and for evaluating nuclear data is demonstrated. (authors)

  18. Photochemistry of an Urban Region using Observations and Numerical Modeling

    Science.gov (United States)

    Cantrell, C. A.; Mauldin, L.; Mukherjee, A. D.; Flocke, F. M.; Pfister, G.; Apel, E. C.; Bahreini, R.; Blake, D. R.; Blake, N. J.; Campos, T. L.; Cohen, R. C.; Farmer, D.; Fried, A.; Guenther, A. B.; Hall, S. R.; Heikes, B.; Hornbrook, R. S.; Huey, L. G.; Karl, T.; Kaser, L.; Nowak, J. B.; Ortega, J. V.; O'Sullivan, D. W.; Richter, D.; Smith, J. N.; Tanner, D.; Townsend-Small, A.; Ullmann, K.; Walega, J.; Weibring, P.; Weinheimer, A. J.

    2015-12-01

    The chemistry of HOx radicals in the troposphere can lead to the production of secondary products such as ozone and aerosols, while volatile organic compounds are degraded. The production rates and identities of secondary products depend on the abundance of NOx and other parameters. The amounts of VOCs and NOx can also affect the concentrations of OH, HO2 and RO2. Comparison of observations and model-derived values of HOx species can provide one way to assess the completeness and accuracy of model mechanisms. The functional dependence of measure-model agreement on various controlling parameters can also reveal details of current understanding of photochemistry in urban regions. During the Front Range Air Pollution and Photochemistry Experiment (FRAPPE), conducted during the summer of 2014, observations from ground-based and airborne platforms were performed to study the evolution of atmospheric composition over the Denver metropolitan area. Of particular interest in FRAPPE was the assessment of the roles of mixing of emissions from oil and gas exploration and extraction, and those from confined animal production operations, with urban emissions (e.g. from transportation, energy production, and industrial processes) on air quality in the metropolitan and surrounding region. Our group made measurements of OH, HO2, and HO2 + RO2 from the NSF/NCAR C-130 aircraft platform using selected ion chemical ionization mass spectrometry. The C-130 was equipped with instrumentation for the observation of a wide variety of photochemical-related species and parameters. These data are used to assess the photochemical regimes encountered during the period of the study, and to quantitatively describe the chemical processes involved in formation of secondary products. One of the tools used is a steady state model for short-lived species such as those that we observed. This presentation summarizes the behavior of species that were measured during FRAPPE and what the observations reveal
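
    The steady-state reasoning mentioned above follows the usual photostationary-state form: for a short-lived radical X whose losses are first order in X, the measured concentrations of the reaction partners Y_i give

    \[
    [\mathrm{X}]_{\mathrm{ss}} \;=\; \frac{P_{\mathrm{X}}}{\sum_i k_i\,[\mathrm{Y}_i]} ,
    \]

    where P_X is the total production rate and the k_i are the rate coefficients of the loss reactions. Comparing such box-model values with the measured OH, HO2 and HO2 + RO2 is one way to test mechanism completeness; the specific production and loss terms used in the FRAPPE analysis are not detailed in this abstract, so the expression is quoted here only in its generic form.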

  19. Updated observational constraints on quintessence dark energy models

    Science.gov (United States)

    Durrive, Jean-Baptiste; Ooba, Junpei; Ichiki, Kiyotomo; Sugiyama, Naoshi

    2018-02-01

    The recent GW170817 measurement favors the simplest dark energy models, such as a single scalar field. Quintessence models can be classified in two classes, freezing and thawing, depending on whether the equation of state decreases towards -1 or departs from it. In this paper, we put observational constraints on the parameters governing the equations of state of tracking freezing, scaling freezing, and thawing models using updated data, from the Planck 2015 release, joint light-curve analysis, and baryonic acoustic oscillations. Because of the current tensions on the value of the Hubble parameter H0, unlike previous authors, we let this parameter vary, which modifies significantly the results. Finally, we also derive constraints on neutrino masses in each of these scenarios.

  20. Realistic modelling of observed seismic motion in complex sedimentary basins

    International Nuclear Information System (INIS)

    Faeh, D.; Panza, G.F.

    1994-03-01

    Three applications of a numerical technique for realistically modelling the seismic ground motion in complex two-dimensional structures are illustrated. First we consider a sedimentary basin in the Friuli region, and we model strong motion records from an aftershock of the 1976 earthquake. Then we simulate the ground motion caused in Rome by the 1915 Fucino (Italy) earthquake, and we compare our modelling with the damage distribution observed in the town. Finally we deal with the interpretation of ground motion recorded in Mexico City, as a consequence of earthquakes in the Mexican subduction zone. The synthetic signals explain the major characteristics (relative amplitudes, spectral amplification, frequency content) of the considered seismograms, and the space distribution of the available macroseismic data. For the sedimentary basin in the Friuli area, parametric studies demonstrate the relevant sensitivity of the computed ground motion to small changes in the subsurface topography of the sedimentary basin, and in the velocity and quality factor of the sediments. The total energy of ground motion, determined from our numerical simulation in Rome, is in very good agreement with the distribution of damage observed during the Fucino earthquake. For epicentral distances in the range 50 km-100 km, the source location and not only the local soil conditions control the local effects. For Mexico City, the observed ground motion can be explained as resonance effects and as excitation of local surface waves, and the theoretical and the observed maximum spectral amplifications are very similar. In general, our numerical simulations permit the estimation of the maximum and average spectral amplification for specific sites, i.e. they are a very powerful tool for accurate micro-zonation. (author). 38 refs, 19 figs, 1 tab

  1. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    Full Text Available The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with

  2. Seasonal patterns of Saharan dust over Cape Verde – a combined approach using observations and modelling

    Directory of Open Access Journals (Sweden)

    Carla Gama

    2015-02-01

    Full Text Available A characterisation of the dust transported from North Africa deserts to the Cape Verde Islands, including particle size distribution, concentrations and optical properties, for a complete annual cycle (the year 2011), is presented and discussed. The present analysis includes annual simulations of the BSC-DREAM8b and the NMMB/BSC-Dust models, 1-yr of surface aerosol measurements performed within the scope of the CV-DUST Project, AERONET direct-sun observations, and back-trajectories. A seasonal intrusion of dust from North West Africa affects Cape Verde at surface levels from October till March, when atmospheric concentrations in Praia are very high (PM10 observed concentrations reach hourly values up to 710 µg/m3). The air masses responsible for the highest aerosol concentrations in Cape Verde describe a path over the central Saharan desert area in Algeria, Mali and Mauritania before reaching the Atlantic Ocean. During summer, dust from North Africa is transported towards the region at higher altitudes, yielding high aerosol optical depths. The BSC-DREAM8b and the NMMB/BSC-Dust models, which are for the first time evaluated for surface concentration and size distribution in Africa for an annual cycle, are able to reproduce the majority of the dust episodes. Results from NMMB/BSC-Dust are in better agreement with observed particulate matter concentrations and aerosol optical depth throughout the year. For this model, the comparison between observed and modelled PM10 daily averaged concentrations yielded a correlation coefficient of 0.77 and a 29.0 µg/m3 ‘bias’, while for BSC-DREAM8b the correlation coefficient was 0.63 and the ‘bias’ 32.9 µg/m3. Of this value, 12–14 µg/m3 is due to the sea salt contribution, which is not considered by the model. In addition, the model does not take into account biomass-burning particles, secondary pollutants and local sources (i.e., resuspension). These results roughly allow for the establishment of a

  3. A Data Quality Information Model for Earth Observation

    Science.gov (United States)

    Yang, X.; Blower, J.; Cornford, D.; Maso, J.; Zabala, A.; Bastin, L.; Lush, V.; Diaz, P.

    2012-04-01

    The question of data quality is a prominent topic of current research in Earth observation. However, different users have different views and visions on data quality. There exists a set of standards and specifications in relation to data quality for Earth observation (e.g. ISO standards, W3C standards, QA4EO), and how to choose an appropriate one for quality information representation also presents a challenge. In order to address this need, we carried out interviews with environmental scientists to elicit their views on matters such as how they choose data for their studies, and what encourages them to trust the accuracy and validity of the data. Interviews were structured around a carefully-designed questionnaire. Face-to-face and telephone interviews were performed in order to gain maximum value from the consultation process. An array of views and visions on Earth observation data has been gathered, which will provide valuable input to the community and other data providers. Informed by the interview findings, we critically review the existing standards and specifications and propose a new, integrated quality information model for Earth observation. This builds upon existing models, notably the ISO standards suite, filling gaps that we have identified in order to encompass other important aspects of data quality. This work has been performed in the context of the EU FP7 GeoViQua project, which aims to augment the Global Earth Observation System of Systems (GEOSS) with information about the quality of data holdings, and to provide visualization capabilities for users to view data together with associated quality information.

  4. Cirrus Cloud Properties from a Cloud-Resolving Model Simulation Compared to Cloud Radar Observations.

    Science.gov (United States)

    Luo, Yali; Krueger, Steven K.; Mace, Gerald G.; Xu, Kuan-Man

    2003-02-01

    Cloud radar data collected at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains site were used to evaluate the properties of cirrus clouds that occurred in a cloud-resolving model (CRM) simulation of the 29-day summer 1997 intensive observation period (IOP). The simulation was "forced" by the large-scale advective temperature and water vapor tendencies, horizontal wind velocity, and turbulent surface fluxes observed at the Southern Great Plains site. The large-scale advective condensate tendency was not observed. The correlation of CRM cirrus amount with Geostationary Operational Environmental Satellite (GOES) high cloud amount was 0.70 for the subperiods during which cirrus formation and decay occurred primarily locally, but only 0.30 for the entire IOP. This suggests that neglecting condensate advection has a detrimental impact on the ability of a model (CRM or single-column model) to properly simulate cirrus cloud occurrence. The occurrence, vertical location, and thickness of cirrus cloud layers, as well as the bulk microphysical properties of thin cirrus cloud layers, were determined from the cloud radar measurements for June, July, and August 1997. The composite characteristics of cirrus clouds derived from this dataset are well suited for evaluating CRMs because of the close correspondence between the timescales and space scales resolved by the cloud radar measurements and by CRMs. The CRM results were sampled at eight grid columns spaced 64 km apart using the same definitions of cirrus and thin cirrus as the cloud radar dataset. The composite characteristics of cirrus clouds obtained from the CRM were then compared to those obtained from the cloud radar. Compared with the cloud radar observations, the CRM cirrus clouds occur at lower heights and with larger physical thicknesses. The ice water paths in the CRM's thin cirrus clouds are similar to those observed. However, the corresponding cloud-layer-mean ice water contents are

  5. Observations and model calculations of trace gas scavenging in a dense Saharan dust plume during MINATROC

    Directory of Open Access Journals (Sweden)

    M. de Reus

    2005-01-01

    Full Text Available An intensive field measurement campaign was performed in July/August 2002 at the Global Atmospheric Watch station Izaña on Tenerife to study the interaction of mineral dust aerosol and tropospheric chemistry (MINATROC). A dense Saharan dust plume, with aerosol masses exceeding 500 µg m⁻³, persisted for three days. During this dust event strongly reduced mixing ratios of ROx (HO2, CH3O2 and higher organic peroxy radicals), H2O2, NOx (NO and NO2) and O3 were observed. A chemistry box model, constrained by the measurements, has been used to study gas phase and heterogeneous chemistry. It appeared to be difficult to reproduce the observed HCHO mixing ratios with the model, possibly related to the representation of precursor gas concentrations or the absence of dry deposition. The model calculations indicate that the reduced H2O2 mixing ratios in the dust plume can be explained by including the heterogeneous removal reaction of HO2 with an uptake coefficient of 0.2, or by assuming heterogeneous removal of H2O2 with an accommodation coefficient of 5×10⁻⁴. However, these heterogeneous reactions cannot explain the low ROx mixing ratios observed during the dust event. Whereas a mean daytime net ozone production rate (NOP) of 1.06 ppbv/hr occurred throughout the campaign, the reduced ROx and NOx mixing ratios in the Saharan dust plume contributed to a reduced NOP of 0.14-0.33 ppbv/hr, which likely explains the relatively low ozone mixing ratios observed during this event.
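
    For orientation, the heterogeneous removal invoked above is commonly parameterized as a first-order loss with rate coefficient k = γ·v̄·A/4, where γ is the uptake coefficient, v̄ the mean molecular speed and A the aerosol surface-area density. The sketch below evaluates this expression for HO2 with the γ = 0.2 tested in the paper; the temperature and dust surface-area density are assumed, illustrative values rather than campaign data.

```python
import numpy as np

R = 8.314  # gas constant, J mol-1 K-1

def mean_molecular_speed(temperature, molar_mass):
    """Mean thermal speed (m/s); molar_mass in kg/mol."""
    return np.sqrt(8.0 * R * temperature / (np.pi * molar_mass))

def heterogeneous_loss_rate(gamma, temperature, molar_mass, surface_area):
    """First-order heterogeneous loss rate (s-1) in the free-molecular limit,
    k = gamma * v_mean * A / 4, with A the aerosol surface-area density
    in m2 per m3 of air (diffusion limitation neglected)."""
    return 0.25 * gamma * mean_molecular_speed(temperature, molar_mass) * surface_area

# Illustrative values (not measured in the campaign): HO2 at 290 K and an
# assumed dust surface-area density of 1e-3 m2 m-3 in the dense plume.
k_ho2 = heterogeneous_loss_rate(gamma=0.2, temperature=290.0,
                                molar_mass=33e-3, surface_area=1.0e-3)
print(f"HO2 heterogeneous loss rate ~ {k_ho2:.2e} s-1, "
      f"lifetime ~ {1.0 / k_ho2:.0f} s")
```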

  6. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes ... from the quantified attributes predict overall preference well. The findings allow for some generalizations within musical program genres regarding the perception of and preference for certain spatial reproduction modes, but for limited generalizations across selections from different musical genres.

  7. Observations and models of centrifugally supported magnetospheres in massive stars

    Science.gov (United States)

    Oksala, Mary Elizabeth

    Magnetic massive stars, via their strong magnetic fields and radiation-driven winds, strongly influence the dynamical and chemical evolution of their surroundings. The interaction between these two intrinsic stellar properties can produce dynamic circumstellar structures, and, in the case of rapidly rotating stars, centrifugally supported magnetospheres. This thesis uses new observations to confront current magnetosphere models, testing their predictive power using photometry and spectropolarimetry of the prototypical magnetic B2Vp star sigma Ori E. In addition, we present the discovery of a magnetic field in a second rapidly rotating massive star. At the time of its discovery, this star was the most rapidly rotating non-degenerate magnetic star. We begin with an overview of magnetism in massive stars and wind-field interactions (Chapter 2) and the observational techniques involved in their study (Chapter 3), and summarize historical studies of sigma Ori E (Chapter 4). Chapter 5 describes the detection of rotational braking in sigma Ori E. We find a 77 ms yr⁻¹ lengthening of the rotational period, corresponding to a spindown time of 1.34 (+0.10/−0.09) Myr. This observed period change agrees well with theoretical predictions for angular momentum loss in a magnetically channeled, line-driven wind. Next we present new spectropolarimetric observations of sigma Ori E (Chapter 6). The observed Hα variability matches the predictions from a rigidly rotating magnetosphere (RRM) model with an offset dipole magnetic field configuration. However, our new, precise longitudinal magnetic field measurements reveal significant discrepancies with respect to the RRM model, challenging the current form as applied to sigma Ori E and suggesting that the field configuration of this star is more complex than a simple dipole. Chapter 7 describes the first detection of a magnetic field in the B2Vn star HR 7355. From analyzing photometric data, we find a 0.5214404(6) d rotational period

  8. Land Surface Microwave Emissivity Dynamics: Observations, Analysis and Modeling

    Science.gov (United States)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Kumar, Sujay; Ringerud, Sarah

    2014-01-01

    Land surface microwave emissivity affects remote sensing of both the atmosphere and the land surface. The dynamical behavior of microwave emissivity over a very diverse sample of land surface types is studied. With seven years of satellite measurements from AMSR-E, we identified various dynamical regimes of the land surface emission. In addition, we used two radiative transfer models (RTMs), the Community Radiative Transfer Model (CRTM) and the Community Microwave Emission Modeling Platform (CMEM), to simulate land surface emissivity dynamics. With both CRTM and CMEM coupled to NASA's Land Information System, global-scale land surface microwave emissivities were simulated for five years, and evaluated against AMSR-E observations. It is found that both models have successes and failures over various types of land surfaces. Among them, the desert shows the most consistent underestimates (by approx. 70-80%), due to limitations of the physical models used, and requires a revision in both systems. Other snow-free surface types exhibit various degrees of success and it is expected that parameter tuning can improve their performances.

  9. Arctic Pacific water dynamics from model intercomparison and observations

    Science.gov (United States)

    Aksenov, Yevgeny; Karcher, Michael; Proshutinsky, Andrey; Gerdes, Ruediger; Bacon, Sheldon; Nurser, George; Coward, Andrew; Golubeva, Elena; Kauker, Frank; Nguyen, An; Platov, Gennady; Wadley, Martin; Watanabe, Eiji

    2016-04-01

    Pacific Water (PW) imports heat and fresh water from the northern Pacific into the Arctic Ocean, impacting upper-ocean mixing and dynamics, as well as Arctic sea ice. Pathways and the circulation of PW in the central Arctic Ocean are not well known due to the lack of observations. This study uses an ensemble of sea ice-ocean models, integrated with a passive tracer released in the Bering Strait, to simulate the spread of Pacific Water. We investigate different branches and modes of Pacific Water and analyse changes in the water mass distribution through the Arctic Ocean due to changes in the wind and ocean potential vorticity. We focus on the seasonal cycle and inter-decadal variations. The first results have been published recently (Aksenov et al., 2015) as part of the Forum for Arctic Ocean Modeling and Observational Synthesis (FAMOS) project. In the present study we extend the examination further and discuss the role of Pacific Water variability in the recent changes in Arctic heat and fresh water storage. We present insights into the projected future changes to Pacific Water dynamics. Reference: Aksenov, Y., et al. (2015), Arctic pathways of Pacific Water: Arctic Ocean Model Intercomparison experiments, J. Geophys. Res. Oceans, 120, doi:10.1002/2015JC011299.

  10. Observational Tests of Magnetospheric Accretion Models in Young Stars

    Directory of Open Access Journals (Sweden)

    Johns–Krull Christopher M.

    2014-01-01

    Full Text Available Magnetically controlled accretion of disk material onto the surface of Classical T Tauri stars is the dominant paradigm in our understanding of how these young stars interact with their surrounding disks. These stars provide a powerful test of magnetically controlled accretion models since all of the relevant parameters, including the magnetic field strength and geometry, are in principle measurable. Both the strength and the field geometry are key for understanding how these stars interact with their disks. This talk will focus on recent advances in magnetic field measurements on a large number of T Tauri stars, as well as very recent studies of the accretion rates onto a sample of young stars in NGC 2264 with known rotation periods. We discuss how these observations provide critical tests of magnetospheric accretion models which predict that a rotational equilibrium is reached. We find good support for the model predictions once the complex geometry of the stellar magnetic field is taken into account. We will also explore how the observations of the accretion properties of the NGC 2264 cluster stars can be used to test emerging ideas on how magnetic fields on young stars are generated and organized as a function of their internal structure (i.e. the presence of a radiative core). We do not find support for the hypothesis that large changes in the magnetic field geometry occur when a radiative core appears in these young stars.

  11. Collision and Break-off : Numerical models and surface observables

    Science.gov (United States)

    Bottrill, Andrew; van Hunen, Jeroen; Allen, Mark

    2013-04-01

    The process of continental collision and slab break-off has been explored by many authors using a number of different numerical models and approaches (Andrews and Billen, 2009; Gerya et al., 2004; van Hunen and Allen, 2011). One of the challenges of using numerical models to explore collision and break-off is relating model predictions to real observables from current collision zones. Part of the reason for this is that collision zones by their nature destroy a lot of potentially useful surface evidence of deep dynamics. One observable that offers the possibility of recording mantle dynamics at collision zones is topography. Here we present topography predictions from numerical models and show how these can be related to actual topography changes recorded in the sedimentary record. Both 2D and 3D numerical simulations of the closure of a small oceanic basin are presented (Bottrill et al., 2012; van Hunen and Allen, 2011). Topography is calculated from the normal stress at the surface applied to an elastic beam, to give a more realistic prediction of topography by accounting for the expected elasticity of the lithosphere. Predicted model topography showed a number of interesting features on the overriding plate. The first is the formation of a basin post collision at around 300 km from the suture. Our models also showed uplift postdating collision between the suture and this basin, caused by subduction of buoyant material. Once break-off has occurred we found that this uplift moved further into the overriding plate due to redistribution of stresses from the subducted plate. With our 3D numerical models we simulate a collision that propagates laterally along a subduction system. These models show that a basin forms, similar to that found in our 2D models, which propagates along the system at the same rate as collision. The apparent link between collision and basin formation leads to the investigation into the stress state in the overriding lithosphere. Preliminary
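
    The conversion of surface normal stress to topography via an elastic plate, as described above, can be sketched with a spectral flexure solution; the sketch below is a generic 1-D version with assumed elastic thickness, density contrast and load, not the authors' implementation.

```python
import numpy as np

def flexural_topography(load, dx, te=30e3, young=70e9, nu=0.25,
                        drho=600.0, g=9.81):
    """Deflection w(x) (m, positive down) of a thin elastic plate under a
    normal-stress load q(x) (Pa), from the spectral solution of
    D w'''' + drho*g*w = q, i.e. w_hat = q_hat / (D k^4 + drho*g).
    te, drho and the load below are assumed, illustrative values."""
    rigidity = young * te**3 / (12.0 * (1.0 - nu**2))
    k = 2.0 * np.pi * np.fft.fftfreq(load.size, d=dx)
    w_hat = np.fft.fft(load) / (rigidity * k**4 + drho * g)
    return np.real(np.fft.ifft(w_hat))

# Example: a 5 MPa Gaussian stress anomaly, ~100 km wide, on a 2000 km profile
x = np.arange(0.0, 2000e3, 5e3)
q = 5e6 * np.exp(-((x - 1000e3) / 50e3) ** 2)
w = flexural_topography(q, dx=5e3)
print(f"maximum deflection ~ {w.max():.0f} m")
```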

  12. How well do environmental archives of atmospheric mercury deposition in the Arctic reproduce rates and trends depicted by atmospheric models and measurements?

    Science.gov (United States)

    Goodsite, M E; Outridge, P M; Christensen, J H; Dastoor, A; Muir, D; Travnikov, O; Wilson, S

    2013-05-01

    This review compares the reconstruction of atmospheric Hg deposition rates and historical trends over recent decades in the Arctic, inferred from Hg profiles in natural archives such as lake and marine sediments, peat bogs and glacial firn (permanent snowpack), against those predicted by three state-of-the-art atmospheric models based on global Hg emission inventories from 1990 onwards. Model veracity was first tested against atmospheric Hg measurements. Most of the natural archive and atmospheric data came from the Canadian-Greenland sectors of the Arctic, whereas spatial coverage was poor in other regions. In general, for the Canadian-Greenland Arctic, models provided good agreement with atmospheric gaseous elemental Hg (GEM) concentrations and trends measured instrumentally. However, there are few instrumented deposition data with which to test the model estimates of Hg deposition, and these data suggest models over-estimated deposition fluxes under Arctic conditions. Reconstructed GEM data from glacial firn on Greenland Summit showed the best agreement with the known decline in global Hg emissions after about 1980, and were corroborated by archived aerosol filter data from Resolute, Nunavut. The relatively stable or slowly declining firn and model GEM trends after 1990 were also corroborated by real-time instrument measurements at Alert, Nunavut, after 1995. However, Hg fluxes and trends in northern Canadian lake sediments and a southern Greenland peat bog did not exhibit good agreement with model predictions of atmospheric deposition since 1990, the Greenland firn GEM record, direct GEM measurements, or trends in global emissions since 1980. Various explanations are proposed to account for these discrepancies between atmosphere and archives, including problems with the accuracy of archive chronologies, climate-driven changes in Hg transfer rates from air to catchments, waters and subsequently into sediments, and post-depositional diagenesis in peat bogs

  13. Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations

    Science.gov (United States)

    Berri, Guillermo J.; Bertossa, Germán

    2018-01-01

    A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations, and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, the level of the inversion base and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008, during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of the relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary condition level, in particular for wind direction, reaching double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.
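
    One plausible reading of the error measures described above is sketched below: the modeled and observed winds are binned into direction sectors, and the root-mean-square of the relative error is taken over the sector frequency distribution and over the sector-mean wind speeds. The sector count and variable names are assumptions, not taken from the paper.

```python
import numpy as np

def sector_errors(obs_dir, obs_spd, mod_dir, mod_spd, n_sectors=16):
    """RMS of the relative error in wind-direction frequency and in mean
    wind speed per sector. Directions in degrees, speeds in m/s."""
    obs_dir, mod_dir = np.asarray(obs_dir) % 360.0, np.asarray(mod_dir) % 360.0
    obs_spd, mod_spd = np.asarray(obs_spd), np.asarray(mod_spd)
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    obs_sec = np.digitize(obs_dir, edges) - 1
    mod_sec = np.digitize(mod_dir, edges) - 1
    freq_err, speed_err = [], []
    for s in range(n_sectors):
        in_obs, in_mod = obs_sec == s, mod_sec == s
        if in_obs.any() and in_mod.any():
            f_obs, f_mod = in_obs.mean(), in_mod.mean()
            freq_err.append((f_mod - f_obs) / f_obs)
            v_obs, v_mod = obs_spd[in_obs].mean(), mod_spd[in_mod].mean()
            speed_err.append((v_mod - v_obs) / v_obs)
    rms = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rms(freq_err), rms(speed_err)
```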

  14. Interannual and low-frequency variability of Upper Indus Basin winter/spring precipitation in observations and CMIP5 models

    Science.gov (United States)

    Greene, Arthur M.; Robertson, Andrew W.

    2017-12-01

    An assessment is made of the ability of general circulation models in the CMIP5 ensemble to reproduce observed modes of low-frequency winter/spring precipitation variability in the region of the Upper Indus basin (UIB) in south-central Asia. This season accounts for about two thirds of annual precipitation totals in the UIB and is characterized by "western disturbances" propagating along the eastward extension of the Mediterranean storm track. Observational data are utilized for spatiotemporal characterization of the precipitation seasonal cycle, to compute seasonalized spectra and, finally, to examine teleconnections in terms of large-scale patterns in sea-surface temperature (SST) and atmospheric circulation. Annual and low-passed variations are found to be associated primarily with SST modes in the tropical and extratropical Pacific. A more obscure link to North Atlantic SST, possibly related to the North Atlantic Oscillation, is also noted. An ensemble of 31 CMIP5 models is then similarly assessed, using unforced preindustrial multi-century control runs. Of these models, eight are found to reproduce well the two leading modes of the observed seasonal cycle. This model subset is then assessed in the spectral domain and with respect to teleconnection patterns, where a range of behaviors is noted. Two model families each account for three members of this subset. The degree of within-family similarity in behavior is shown to reflect underlying model differences. The results provide estimates of unforced regional hydroclimate variability over the UIB on interannual and decadal scales and the corresponding far-field influences, and are of potential relevance for the estimation of uncertainties in future water availability.

  15. Analysis and modeling of tropical convection observed by CYGNSS

    Science.gov (United States)

    Lang, T. J.; Li, X.; Roberts, J. B.; Mecikalski, J. R.

    2017-12-01

    The Cyclone Global Navigation Satellite System (CYGNSS) is a multi-satellite constellation that utilizes Global Positioning System (GPS) reflectometry to retrieve near-surface wind speeds over the ocean. While CYGNSS is primarily aimed at measuring wind speeds in tropical cyclones, our research has established that the mission may also provide valuable insight into the relationships between wind-driven surface fluxes and general tropical oceanic convection. Currently, we are examining organized tropical convection using a mixture of CYGNSS level 1 through level 3 data, IMERG (Integrated Multi-satellite Retrievals for Global Precipitation Measurement), and other ancillary datasets (including buoys, GPM level 1 and 2 data, as well as ground-based radar). In addition, observing system experiments (OSEs) are being performed using hybrid three-dimensional variational assimilation to ingest CYGNSS observations into a limited-domain, convection-resolving model. Our focus for now is on case studies of convective evolution, but we will also report on progress toward statistical analysis of convection sampled by CYGNSS. Our working hypothesis is that the typical mature phase of organized tropical convection is marked by the development of a sharp gust-front boundary from an originally spatially broader but weaker wind speed change associated with precipitation. This increase in the wind gradient, which we demonstrate is observable by CYGNSS, likely helps to focus enhanced turbulent fluxes of convection-sustaining heat and moisture near the leading edge of the convective system where they are more easily ingested by the updraft. Progress on the testing and refinement of this hypothesis, using a mixture of observations and modeling, will be reported.

  16. LEIR impedance model and coherent beam instability observations

    CERN Document Server

    Biancacci, N; Migliorati, M; Rijoff, T L

    2017-01-01

    The LEIR machine is the first synchrotron in the ion acceleration chain at CERN and is responsible for delivering high-intensity ion beams to the LHC. Following the recent progress in the understanding of the intensity limitations, detailed studies of the machine impedance started. In this work we describe the present LEIR impedance model, detailing the contribution to the total longitudinal and transverse impedance of several machine elements. We then compare the machine tune shift versus intensity predictions against measurements at injection energy and summarize the coherent instability observations in the absence of transverse feedback.

  17. Observational and Modeling Studies of Clouds and the Hydrological Cycle

    Science.gov (United States)

    Somerville, Richard C. J.

    1997-01-01

    Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.

  18. S-AMP for non-linear observation models

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2015-01-01

    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement matrix has zero-mean iid Gaussian entries. Our derivation is based upon 1) deriving expectation-propagation (EP)-like equations from the stationary-points equations of the Gibbs free energy under first- and second-moment constraints and 2) applying additive free convolution in free probability theory.

  19. Mismatch between observed and modeled trends in dissolved upper-ocean oxygen over the last 50 yr

    Directory of Open Access Journals (Sweden)

    L. Stramma

    2012-10-01

    Full Text Available Observations and model runs indicate trends in dissolved oxygen (DO associated with current and ongoing global warming. However, a large-scale observation-to-model comparison has been missing and is presented here. This study presents a first global compilation of DO measurements covering the last 50 yr. It shows declining upper-ocean DO levels in many regions, especially the tropical oceans, whereas areas with increasing trends are found in the subtropics and in some subpolar regions. For the Atlantic Ocean south of 20° N, the DO history could even be extended back to about 70 yr, showing decreasing DO in the subtropical South Atlantic. The global mean DO trend between 50° S and 50° N at 300 dbar for the period 1960 to 2010 is –0.066 μmol kg−1 yr−1. Results of a numerical biogeochemical Earth system model reveal that the magnitude of the observed change is consistent with CO2-induced climate change. However, the pattern correlation between simulated and observed patterns of past DO change is negative, indicating that the model does not correctly reproduce the processes responsible for observed regional oxygen changes in the past 50 yr. A negative pattern correlation is also obtained for model configurations with particularly low and particularly high diapycnal mixing, for a configuration that assumes a CO2-induced enhancement of the C : N ratios of exported organic matter and irrespective of whether climatological or realistic winds from reanalysis products are used to force the model. Depending on the model configuration the 300 dbar DO trend between 50° S and 50° N is −0.027 to –0.047 μmol kg−1 yr−1 for climatological wind forcing, with a much larger range of –0.083 to +0.027 μmol kg−1 yr−1 for different initializations of sensitivity runs with reanalysis wind forcing. Although numerical models reproduce the overall sign and, to
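
    The (negative) pattern correlation reported above can be illustrated with a minimal area-weighted calculation over maps of the simulated and observed DO change; grid shapes and variable names below are assumptions for the sketch.

```python
import numpy as np

def pattern_correlation(model_field, obs_field, lat):
    """Centered, area-weighted pattern correlation between two gridded fields
    of shape (nlat, nlon); cells missing in either field are excluded.
    lat is the 1-D latitude vector in degrees."""
    weights = np.broadcast_to(np.cos(np.deg2rad(lat))[:, None], model_field.shape)
    mask = np.isfinite(model_field) & np.isfinite(obs_field)
    w, m, o = weights[mask], model_field[mask], obs_field[mask]
    m = m - np.average(m, weights=w)
    o = o - np.average(o, weights=w)
    return np.sum(w * m * o) / np.sqrt(np.sum(w * m**2) * np.sum(w * o**2))

# Hypothetical usage with 300 dbar DO trend maps (umol kg-1 yr-1):
# r = pattern_correlation(simulated_trend, observed_trend, lat)
# A negative r indicates the simulated pattern is anti-correlated with the
# observed pattern of oxygen change, as reported above.
```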

  20. Modelling 1-minute directional observations of the global irradiance.

    Science.gov (United States)

    Thejll, Peter; Pagh Nielsen, Kristian; Andersen, Elsa; Furbo, Simon

    2016-04-01

    Direct and diffuse irradiances from the sky have been collected at 1-minute intervals for about a year from the experimental station at the Technical University of Denmark for the IEA project "Solar Resource Assessment and Forecasting". These data were gathered by pyrheliometers tracking the Sun, as well as with apertured pyranometers gathering 1/8th and 1/16th of the light from the sky in 45 degree azimuthal ranges pointed around the compass. The data are gathered in order to develop detailed models of the potentially available solar energy and its variations at high temporal resolution, in order to gain a more detailed understanding of the solar resource. This is important for a better understanding of the sub-grid scale cloud variation that cannot be resolved with climate and weather models. It is also important for optimizing the operation of active solar energy systems such as photovoltaic plants and thermal solar collector arrays, and for passive solar energy and lighting of buildings. We present regression-based modelling of the observed data, and focus, here, on the statistical properties of the model fits. Using models based on the one hand on what is found in the literature and on physical expectations, and on the other hand on purely statistical models, we find solutions that can explain up to 90% of the variance in global radiation. The models leaning on physical insights include terms for the direct solar radiation, a term for the circum-solar radiation, a diffuse term and a term for the horizon brightening/darkening. The purely statistical model is found using data- and formula-validation approaches picking model expressions from a general catalogue of possible formulae. The method allows nesting of expressions, and the results found are dependent on and heavily constrained by the cross-validation carried out on statistically independent testing and training data-sets. Slightly better fits -- in terms of variance explained -- are found using the purely
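
    As a minimal sketch of the regression-plus-cross-validation approach described above, the snippet below fits global irradiance against a set of candidate terms by least squares and reports the variance explained on a held-out test set; the particular predictor columns and split are placeholders, not the study's model catalogue.

```python
import numpy as np

def fit_irradiance_model(terms, global_irradiance, train_frac=0.7, seed=0):
    """Least-squares fit of global irradiance against candidate terms
    (columns of `terms`, e.g. direct-beam, circumsolar, isotropic-diffuse and
    horizon terms), with out-of-sample variance explained on a held-out set."""
    rng = np.random.default_rng(seed)
    y = np.asarray(global_irradiance, dtype=float)
    design = np.column_stack([np.ones(len(y)), np.asarray(terms, dtype=float)])
    idx = rng.permutation(len(y))
    n_train = int(train_frac * len(y))
    train, test = idx[:n_train], idx[n_train:]
    coef, *_ = np.linalg.lstsq(design[train], y[train], rcond=None)
    resid = y[test] - design[test] @ coef
    explained = 1.0 - resid.var() / y[test].var()
    return coef, explained
```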

  1. Dispersion Relations for Electroweak Observables in Composite Higgs Models

    CERN Document Server

    Contino, Roberto

    2015-12-14

    We derive dispersion relations for the electroweak oblique observables measured at LEP in the context of $SO(5)/SO(4)$ composite Higgs models. It is shown how these relations can be used and must be modified when modeling the spectral functions through a low-energy effective description of the strong dynamics. The dispersion relation for the parameter $\epsilon_3$ is then used to estimate the contribution from spin-1 resonances at the 1-loop level. Finally, it is shown that the sign of the contribution to the $\hat S$ parameter from the lowest-lying spin-1 states is not necessarily positive definite, but depends on the energy scale at which the asymptotic behavior of current correlators is attained.

  2. Polar cap patches observed during the magnetic storm of November 2003: observations and modeling

    Directory of Open Access Journals (Sweden)

    C. E. Valladares

    2015-09-01

    Full Text Available We present multi-instrumented measurements and multi-technique analysis of polar cap patches observed early during the recovery phase of the major magnetic storm of 20 November 2003 to investigate the origin of the polar cap patches. During this event, the Qaanaaq imager observed elongated polar cap patches, some of which contained variable brightness; the Qaanaaq digisonde detected abrupt NmF2 fluctuations; the Sondrestrom incoherent scatter radar (ISR) measured patches located close to but poleward of the auroral oval–polar cap boundary; and the DMSP-F13 satellite intersected topside density enhancements, corroborating the presence of the patches seen by the imager, the digisonde, and the Sondrestrom ISR. A 2-D cross-correlation analysis was applied to series of two consecutive red-line images, indicating that the magnitude and direction of the patch velocities were in good agreement with the SuperDARN convection patterns. We applied a back-tracing analysis to the patch locations and found that most of the patches seen between 20:41 and 21:29 UT were likely transiting the throat region near 19:41 UT. Inspection of the SuperDARN velocities at this time indicates spatial and temporal collocation of a gap region between patches and large (1.7 km s−1) line-of-sight velocities. The variable airglow brightness of the patches observed between 20:33 and 20:43 UT was investigated using the numerical Global Theoretical Ionospheric Model (GTIM) driven by the SuperDARN convection patterns and a variable upward/downward neutral wind. Our numerical results indicate that variations in the airglow intensity of up to 265 R can be produced by a constant 70 m s−1 downward vertical wind.

  3. Observation and modelling of fog at Cold Lake, Alberta, Canada

    Science.gov (United States)

    Wu, Di; Boudala, Faisal; Weng, Wensong; Taylor, Peter A.; Gultepe, Ismail; Isaac, George A.

    2017-04-01

    Observational data indicate that the surface-based in situ measurements agree well with aviation weather observation (METAR) reports and are comparable with model simulations. Both the HRDPS model and microwave radiometry data indicate low-level fog and cloud formation, but the depths and intensities differ considerably depending on environmental conditions. Causes for this are under investigation with the high-resolution 1-D boundary-layer model.

  4. Chow-Liu trees are sufficient predictive models for reproducing key features of functional networks of periictal EEG time-series.

    Science.gov (United States)

    Steimer, Andreas; Zubler, Frédéric; Schindler, Kaspar

    2015-09-01

    Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20-30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutic brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so-called Chow-Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute value of the Pearson correlation coefficient (CC) matrix. Using various measures, the thus obtained networks are then compared to those which were derived in the classical way from the empirical CC-matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL-trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL-approach for modeling also the temporal features of iEEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
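
    The core of the Chow-Liu construction used above is a maximum spanning tree over pairwise mutual information. A minimal sketch, assuming a Gaussian approximation so that the mutual information follows directly from the Pearson correlation coefficient (I = -0.5 log(1 - r^2)), is given below; it is a generic illustration, not the authors' exact estimator for time-sliced iEEG.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(signals):
    """Chow-Liu tree over signals of shape (n_channels, n_samples): the
    spanning tree that maximizes the summed pairwise mutual information.
    Returns a list of (i, j) channel pairs forming the tree."""
    rho = np.corrcoef(signals)
    np.fill_diagonal(rho, 0.0)                    # ignore self-information
    rho = np.clip(rho, -0.999999, 0.999999)       # numerical safety
    mi = -0.5 * np.log1p(-rho**2)                 # Gaussian mutual information
    # Maximum spanning tree via a minimum spanning tree on inverted,
    # strictly positive weights (scipy treats exact zeros as absent edges).
    weights = (mi.max() + 1.0) - mi
    np.fill_diagonal(weights, 0.0)
    tree = minimum_spanning_tree(weights)
    rows, cols = tree.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

# Hypothetical usage on one time slice of an iEEG recording:
# edges = chow_liu_edges(ieeg_slice)    # yields n_channels - 1 edges
```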

  5. Theoretical Modeling and Computer Simulations for the Origins and Evolution of Reproducing Molecular Systems and Complex Systems with Many Interactive Parts

    Science.gov (United States)

    Liang, Shoudan

    2000-01-01

    Our research effort has produced nine publications in peer-reviewed journals, listed at the end of this report. The work reported here is in the following areas: (1) genetic network modeling; (2) an autocatalytic model of pre-biotic evolution; (3) theoretical and computational studies of strongly correlated electron systems; (4) reducing thermal oscillations in the atomic force microscope; (5) the transcription termination mechanism in prokaryotic cells; and (6) the low glutamine usage in thermophiles obtained by studying completely sequenced genomes. We discuss the main accomplishments of these publications.

  6. Using observations to evaluate biosphere-atmosphere interactions in models

    Science.gov (United States)

    Green, Julia; Konings, Alexandra G.; Alemohammad, Seyed H.; Gentine, Pierre

    2017-04-01

    Biosphere-atmosphere interactions influence the hydrologic cycle by altering climate and weather patterns (Charney, 1975; Koster et al., 2006; Seneviratne et al., 2006), contributing up to 30% of precipitation and radiation variability in certain regions (Green et al., 2017). They have been shown to contribute to the persistence of drought in Europe (Seneviratne et al., 2006), as well as to increase rainfall in the Amazon (Spracklen et al., 2012). Thus, a true representation of these feedbacks in Earth System Models (ESMs) is crucial for accurate forecasting and planning. However, it has been difficult to validate the performance of ESMs since surface and atmospheric flux data are often scarce and/or difficult to observe. In this study, we use the results of a new global observational study (using remotely sensed solar-induced fluorescence to represent the biosphere flux) (Green et al., 2017) to determine how well a suite of 13 ESMs captures biosphere-atmosphere feedbacks. We perform a Conditional Multivariate Granger Causality analysis in the frequency domain with radiation, precipitation and temperature as atmospheric inputs and GPP as the biospheric input. Performing the analysis in the frequency domain allows for separation of feedbacks at different time scales (subseasonal, seasonal or interannual). Our findings can be used to determine whether there is agreement between models, as well as to pinpoint regions or time scales of model bias or inaccuracy, which will provide insight on potential improvement. We demonstrate that, in addition to the well-known problem of convective parameterization over land in models, the main issue in representing feedbacks between the land and the atmosphere is the misrepresentation of water stress. These results provide a direct quantitative assessment of feedbacks in models and how to improve them. References: Charney, J.G. Dynamics of deserts and drought in the Sahel. Quarterly Journal of the Royal Meteorological
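
    As a highly simplified, time-domain stand-in for the conditional, frequency-domain Granger analysis described above, the sketch below measures how much adding lagged values of one driver reduces the residual variance of an autoregressive model for the target series; the lag order and variable names are assumptions.

```python
import numpy as np

def granger_variance_ratio(driver, target, lag=3):
    """Log ratio of residual variances of an AR(lag) model for `target`
    without and with lagged values of `driver`; values > 0 suggest the
    driver adds predictive information (a crude Granger-causality indicator)."""
    x, y = np.asarray(driver, dtype=float), np.asarray(target, dtype=float)
    n = len(y)
    response = y[lag:]
    restricted = np.column_stack([np.ones(n - lag)] +
                                 [y[lag - k:n - k] for k in range(1, lag + 1)])
    full = np.column_stack([restricted] +
                           [x[lag - k:n - k] for k in range(1, lag + 1)])
    res_r = response - restricted @ np.linalg.lstsq(restricted, response, rcond=None)[0]
    res_f = response - full @ np.linalg.lstsq(full, response, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

# Hypothetical usage with anomaly time series from one grid cell:
# score = granger_variance_ratio(gpp_anomalies, precipitation_anomalies)
```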

  7. Parental modelling of eating behaviours: observational validation of the Parental Modelling of Eating Behaviours scale (PARM).

    Science.gov (United States)

    Palfreyman, Zoe; Haycraft, Emma; Meyer, Caroline

    2015-03-01

    Parents are important role models for their children's eating behaviours. This study aimed to further validate the recently developed Parental Modelling of Eating Behaviours Scale (PARM) by examining the relationships between maternal self-reports on the PARM with the modelling practices exhibited by these mothers during three family mealtime observations. Relationships between observed maternal modelling and maternal reports of children's eating behaviours were also explored. Seventeen mothers with children aged between 2 and 6 years were video recorded at home on three separate occasions whilst eating a meal with their child. Mothers also completed the PARM, the Children's Eating Behaviour Questionnaire and provided demographic information about themselves and their child. Findings provided validation for all three PARM subscales, which were positively associated with their observed counterparts on the observational coding scheme (PARM-O). The results also indicate that habituation to observations did not change the feeding behaviours displayed by mothers. In addition, observed maternal modelling was significantly related to children's food responsiveness (i.e., their interest in and desire for foods), enjoyment of food, and food fussiness. This study makes three important contributions to the literature. It provides construct validation for the PARM measure and provides further observational support for maternal modelling being related to lower levels of food fussiness and higher levels of food enjoyment in their children. These findings also suggest that maternal feeding behaviours remain consistent across repeated observations of family mealtimes, providing validation for previous research which has used single observations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Adjustments in the Almod 3W2 code models for reproducing the net load trip test in Angra I nuclear power plant

    International Nuclear Information System (INIS)

    Camargo, C.T.M.; Madeira, A.A.; Pontedeiro, A.C.; Dominguez, L.

    1986-09-01

    The recorded traces obtained from the net load trip test in Angra I NPP yielded the opportunity to make fine adjustments in the ALMOD 3W2 code models. The changes are described and the results are compared against plant real data. (Author)

  9. Reproducing the observed energy-dependent structure of Earth's electron radiation belts during storm recovery with an event-specific diffusion model

    Czech Academy of Sciences Publication Activity Database

    Ripoll, J.-F.; Reeves, G. D.; Cunningham, G. S.; Loridan, V.; Denton, M.; Santolík, Ondřej; Kurth, W. S.; Kletzing, C. A.; Turner, D. L.; Henderson, M. G.; Ukhorskiy, A. Y.

    2016-01-01

    Vol. 43, No. 11 (2016), pp. 5616-5625 ISSN 0094-8276 R&D Projects: GA MŠk(CZ) LH15304 Institutional support: RVO:68378289 Keywords: radiation belts * slot region * electron losses * wave particle interactions * hiss waves * electron lifetimes Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 4.253, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/2016GL068869/full

  10. AirSWOT observations versus hydrodynamic model outputs of water surface elevation and slope in a multichannel river

    Science.gov (United States)

    Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.

    2017-04-01

    Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate AirSWOT errors are significantly lower than model outputs. When compared to field measurements, the RMSE for AirSWOT measurements of WSE is 9.0 cm when averaged over 1 km² areas and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, while model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to try to approach the accuracies achieved by higher-complexity models.

  11. Electron acceleration in solar-flare magnetic traps: Model properties and their observational confirmations

    Science.gov (United States)

    Gritsyk, P. A.; Somov, B. V.

    2017-09-01

    Using an analytical solution of the kinetic equation, we have investigated the model properties of the coronal and chromospheric hard X-ray sources in the limb flare of July 19, 2012. We calculated the emission spectrum at the flare loop footpoints in the thick-target approximation with a reverse current and showed it to be consistent with the observed one. The spectrum of the coronal source located above the flare loop was calculated in the thin-target approximation. In this case, the slope of the hard X-ray spectrum is reproduced very accurately, but the intensity of the coronal emission is several times lower than the observed one. Previously, we showed that this contradiction is completely removed if the additional (relative to the primary acceleration in the reconnecting current layer) electron acceleration in the coronal magnetic trap, which contracts in the transverse direction and decreases in length during the impulsive flare phase, is taken into account. In this paper we study this effect in detail in the context of a more realistic flare scenario, in which a whole ensemble of traps existed during the hard X-ray burst, each of which was at a different stage of its evolution: formation, collapse, destruction. Our results point not only to the existence of first-order Fermi acceleration and betatron electron heating in solar flares but also to their high efficiency. Highly accurate observations of a specific flare are used as an example to show that the previously predicted theoretical features of the model find convincing confirmation.

  12. Land-Surface-Atmosphere Coupling in Observations and Models

    Directory of Open Access Journals (Sweden)

    Alan K Betts

    2009-07-01

    Full Text Available The diurnal cycle and the daily mean at the land surface result from the coupling of many physical processes. The framework of this review is largely conceptual, looking for relationships and information in the coupling of processes in models and observations. Starting from the surface energy balance, the role of the surface and cloud albedos in the shortwave and longwave fluxes is discussed. A longwave radiative scaling of the diurnal temperature range and the night-time boundary layer is summarized. Several aspects of the local surface energy partition are presented: the role of soil water availability and clouds; vector methods for understanding mixed-layer evolution; and the coupling between surface and boundary layer that determines the lifting condensation level. Moving to larger scales, evaporation-precipitation feedback in models is discussed, as is the coupling of column water vapor, clouds and precipitation to vertical motion and moisture convergence over the Amazon. The final topic is a comparison of the ratio of surface shortwave cloud forcing to the diabatic precipitation forcing of the atmosphere in ERA-40 with observations.

  13. CORAL: model for no observed adverse effect level (NOAEL).

    Science.gov (United States)

    Toropov, Andrey A; Toropova, Alla P; Pizzo, Fabiola; Lombardo, Anna; Gadaleta, Domenico; Benfenati, Emilio

    2015-08-01

    The in vivo repeated dose toxicity (RDT) test is intended to provide information on the possible risk caused by repeated exposure to a substance over a limited period of time. The measure of the RDT is the no observed adverse effect level (NOAEL), that is, the dose at which no effects are observed; i.e., this endpoint indicates the safety level for a substance. The need to replace in vivo tests, as required by some European regulations (registration, evaluation, authorization and restriction of chemicals), is leading to the search for reliable alternative methods such as quantitative structure-activity relationships (QSAR). Considering the complexity of the RDT endpoint, for which data quality is limited and depends in any case on the study design, the development of QSAR models for this endpoint is an attractive task. Starting from a dataset of 140 organic compounds with NOAEL values related to oral short-term toxicity in rats, we developed a QSAR model based on optimal descriptors calculated with simplified molecular input-line entry systems and the graph of atomic orbitals by the Monte Carlo method, using the CORAL software. Three different splits into training, calibration, and validation sets are studied. The mechanistic interpretation of these models in terms of molecular fragments with positive or negative contributions to the endpoint is discussed. A probabilistic definition for the domain of applicability is suggested.

  14. Systematic Methodology for Reproducible Optimizing Batch Operation

    DEFF Research Database (Denmark)

    Bonné, Dennis; Jørgensen, Sten Bay

    2006-01-01

    This contribution presents a systematic methodology for rapid acquirement of discrete-time state space model representations of batch processes based on their historical operation data. These state space models are parsimoniously parameterized as a set of local, interdependent models. The present contribution furthermore presents how the asymptotic convergence of Iterative Learning Control is combined with the closed-loop performance of Model Predictive Control to form a robust and asymptotically stable optimal controller for ensuring reliable and reproducible operation of batch processes. This controller may also be used for optimizing control. The modeling and control performance is demonstrated on a fed-batch protein cultivation example. The presented methodologies lend themselves directly for application as Process Analytical Technologies (PAT).
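
    The Iterative Learning Control idea referred to above can be illustrated with the simplest, P-type update, in which the input trajectory of batch k+1 is corrected with the tracking error of batch k; the plant, gain and trajectories below are toy placeholders, not the paper's combined ILC/MPC formulation.

```python
import numpy as np

def ilc_batches(plant, u0, y_ref, learning_gain=0.5, n_batches=20):
    """Minimal P-type Iterative Learning Control: u_{k+1} = u_k + L * e_k,
    where e_k = y_ref - y_k is the tracking error of batch k and `plant`
    maps an input trajectory to an output trajectory."""
    u = np.array(u0, dtype=float)
    for _ in range(n_batches):
        error = np.asarray(y_ref, dtype=float) - np.asarray(plant(u), dtype=float)
        u = u + learning_gain * error
    return u, error

# Toy run: a static "process" with gain 0.8; the error contracts by a factor
# |1 - L*G| = 0.6 per batch, so repeated batches converge to the reference.
u_final, e_final = ilc_batches(lambda u: 0.8 * u,
                               u0=np.zeros(100), y_ref=np.ones(100))
print(f"max tracking error after 20 batches: {np.abs(e_final).max():.3g}")
```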

  15. Ionospheric detection of tsunami earthquakes: observation, modeling and ideas for future early warning

    Science.gov (United States)

    Occhipinti, G.; Manta, F.; Rolland, L.; Watada, S.; Makela, J. J.; Hill, E.; Astafieva, E.; Lognonne, P. H.

    2017-12-01

    Detection of ionospheric anomalies following the Sumatra and Tohoku earthquakes (e.g., Occhipinti 2015) demonstrated that the ionosphere is sensitive to earthquake and tsunami propagation: ground and oceanic vertical displacement induces acoustic-gravity waves propagating within the neutral atmosphere and detectable in the ionosphere. Observations supported by modelling proved that ionospheric anomalies related to tsunamis are deterministic and reproducible by numerical modeling via the ocean/neutral-atmosphere/ionosphere coupling mechanism (Occhipinti et al., 2008). To show that the tsunami signature in the ionosphere is routinely detected, we present here perturbations of total electron content (TEC) measured by GPS following tsunamigenic earthquakes from 2004 to 2011 (Rolland et al. 2010, Occhipinti et al., 2013), nominally Sumatra (26 December 2004 and 12 September 2007), Chile (14 November 2007), Samoa (29 September 2009) and the recent Tohoku-Oki (11 March 2011). Based on the observations close to the epicenter, mainly performed by GPS networks located in Sumatra, Chile and Japan, we highlight the TEC perturbation observed within the first 8 min after the seismic rupture. This perturbation contains information about the ground displacement, as well as the consequent sea surface displacement resulting in the tsunami. In addition to GNSS-TEC observations close to the epicenter, new and exciting measurements in the far field, performed by airglow imaging in Hawaii, show the propagation of the internal gravity waves induced by the Tohoku tsunami (Occhipinti et al., 2011). This revolutionary imaging technique is today supported by two new observations of moderate tsunamis: Queen Charlotte (M: 7.7, 27 October 2013) and Chile (M: 8.2, 16 September 2015). We finally detail here our recent work (Manta et al., 2017) on the case of tsunami alert failure following the Mw7.8 Mentawai event (25 October 2010), and its twin tsunami alert response following the Mw7

  16. Simulation of the hydrodynamic conditions of the eye to better reproduce the drug release from hydrogel contact lenses: experiments and modeling.

    Science.gov (United States)

    Pimenta, A F R; Valente, A; Pereira, J M C; Pereira, J C F; Filipe, H P; Mata, J L G; Colaço, R; Saramago, B; Serro, A P

    2016-12-01

    Currently, most in vitro drug release studies for ophthalmic applications are carried out in static sink conditions. Although this procedure is simple and useful for making comparative studies, it does not adequately describe the drug release kinetics in the eye, considering the small tear volume and flow rates found in vivo. In this work, a microfluidic cell was designed and used to mimic the continuous, volumetric flow rate of tear fluid and its low volume. The suitable operation of the cell, in terms of uniformity and symmetry of flux, was proved using a numerical model based on the Navier-Stokes and continuity equations. The release profile of a model system (a hydroxyethyl methacrylate-based hydrogel (HEMA/PVP) for soft contact lenses (SCLs) loaded with diclofenac) obtained with the microfluidic cell was compared with that obtained in static conditions, showing that the kinetics of release in dynamic conditions is slower. The application of the numerical model demonstrated that the designed cell can be used to simulate the drug release in the whole range of human eye tear film volume and allowed estimation of the drug concentration in the volume of liquid in direct contact with the hydrogel. Knowledge of this concentration, which is significantly different from that measured in the experimental tests during the first hours of release, is critical to predict the toxicity of the drug release system and its in vivo efficacy. In conclusion, the use of the microfluidic cell in conjunction with the numerical model shall be a valuable tool to design and optimize new therapeutic drug-loaded SCLs.

  17. Reproducibility of isotope ratio measurements

    International Nuclear Information System (INIS)

    Elmore, D.

    1981-01-01

    The use of an accelerator as part of a mass spectrometer has improved the sensitivity for measuring low levels of long-lived radionuclides by several orders of magnitude. However, the complexity of a large tandem accelerator and beam transport system has made it difficult to match the precision of low-energy mass spectrometry. Although uncertainties for accelerator-measured isotope ratios as low as 1% have been obtained under favorable conditions, most errors quoted in the literature for natural samples are in the 5 to 20% range. These errors are dominated by statistics, and generally the reproducibility is unknown since the samples are only measured once.

  18. Glider observations and modeling of sediment transport in Hurricane Sandy

    Science.gov (United States)

    Miles, Travis; Seroka, Greg; Kohut, Josh; Schofield, Oscar; Glenn, Scott

    2015-03-01

    Regional sediment resuspension and transport are examined as Hurricane Sandy made landfall on the Mid-Atlantic Bight (MAB) in October 2012. A Teledyne-Webb Slocum glider, equipped with a Nortek Aquadopp current profiler, was deployed on the continental shelf ahead of the storm, and is used to validate sediment transport routines coupled to the Regional Ocean Modeling System (ROMS). The glider was deployed on 25 October, 5 days before Sandy made landfall in southern New Jersey (NJ) and flew along the 40 m isobath south of the Hudson Shelf Valley. We used optical and acoustic backscatter to compare with two modeled size classes along the glider track, 0.1 and 0.4 mm sand, respectively. Observations and modeling revealed full water column resuspension for both size classes for over 24 h during peak waves and currents, with transport oriented along-shelf toward the southwest. Regional model predictions showed over 3 cm of sediment eroded on the northern portion of the NJ shelf where waves and currents were the highest. As the storm passed and winds reversed from onshore to offshore on the southern portion of the domain waves and subsequently orbital velocities necessary for resuspension were reduced leading to over 3 cm of deposition across the entire shelf, just north of Delaware Bay. This study highlights the utility of gliders as a new asset in support of the development and verification of regional sediment resuspension and transport models, particularly during large tropical and extratropical cyclones when in situ data sets are not readily available.

  19. Fracture initiation associated with chemical degradation: observation and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Byoungho Choi; Zhenwen Zhou; Chudnovsky, Alexander [Illinois Univ., Dept. of Civil and Materials Engineering (M/C 246), Chicago, IL (United States); Stivala, Salvatore S. [Stevens Inst. of Technology, Dept. of Chemistry and Chemical Biology, Hoboken, NJ (United States); Sehanobish, Kalyan; Bosnyak, Clive P. [Dow Chemical Co., Freeport, TX (United States)

    2005-01-01

    The fracture initiation in engineering thermoplastics resulting from chemical degradation is usually observed in the form of a microcrack network within a surface layer of degraded polymer exposed to the combined action of mechanical stresses and a chemically aggressive environment. Degradation of polymers is usually manifested in a reduction of molecular weight, an increase of crystallinity in semi-crystalline polymers, an increase of material density, a subtle increase in yield strength, and a dramatic reduction in toughness. The increase in material density, i.e., shrinkage of the degraded layer, which is constrained by the adjacent unchanged material, results in a buildup of tensile stress within the degraded layer and compressive stress in the adjacent unchanged material, due to the increasing incompatibility between the two. These stresses are in addition to preexisting manufacturing and service stresses. At a certain level of degradation, the combination of toughness reduction and increase of tensile stress results in fracture initiation. A quantitative model of the processes described above is presented in this work. For specificity, an internally pressurized plastic pipe that transports a fluid containing a chemically aggressive (oxidizing) agent is used as the model case of fracture initiation. Experimental observations of material density and toughness dependence on degradation reported elsewhere are employed in the model. An equation for determination of the critical level of degradation corresponding to the onset of fracture is constructed. The critical level of degradation for fracture initiation depends on the rates of toughness deterioration and build-up of the degradation-related stresses, as well as on the manufacturing and service stresses. A method for evaluation of the time interval prior to fracture initiation is also formulated. (Author)

  20. Is Grannum grading of the placenta reproducible?

    Science.gov (United States)

    Moran, Mary; Ryan, John; Brennan, Patrick C.; Higgins, Mary; McAuliffe, Fionnuala M.

    2009-02-01

    Current ultrasound assessment of placental calcification relies on Grannum grading. The aim of this study was to assess if this method is reproducible by measuring inter- and intra-observer variation in grading placental images, under strictly controlled viewing conditions. Thirty placental images were acquired and digitally saved. Five experienced sonographers independently graded the images on two separate occasions. In order to eliminate any technological factors which could affect data reliability and consistency all observers reviewed images at the same time. To optimise viewing conditions ambient lighting was maintained between 25-40 lux, with monitors calibrated to the GSDF standard to ensure consistent brightness and contrast. Kappa (κ) analysis of the grades assigned was used to measure inter- and intra-observer reliability. Intra-observer agreement had a moderate mean κ-value of 0.55, with individual comparisons ranging from 0.30 to 0.86. Two images saved from the same patient, during the same scan, were each graded as I, II and III by the same observer. A mean κ-value of 0.30 (range from 0.13 to 0.55) indicated fair inter-observer agreement over the two occasions and only one image was graded consistently the same by all five observers. The study findings confirmed the lack of reproducibility associated with Grannum grading of the placenta despite optimal viewing conditions and highlight the need for new methods of assessing placental health in order to improve neonatal outcomes. Alternative methods for quantifying placental calcification such as a software based technique and 3D ultrasound assessment need to be explored.
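
    The kappa statistic used above for inter- and intra-observer agreement can be computed, for a pair of gradings of the same images, with the standard Cohen's kappa formula; the sketch below is a generic illustration and does not reproduce the study's exact analysis (e.g. any multi-rater extension).

```python
import numpy as np

def cohens_kappa(grades_a, grades_b):
    """Cohen's kappa for agreement between two sets of categorical grades
    assigned to the same images (two observers, or one observer twice)."""
    a, b = np.asarray(grades_a), np.asarray(grades_b)
    observed_agreement = np.mean(a == b)
    categories = np.union1d(a, b)
    chance_agreement = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (observed_agreement - chance_agreement) / (1.0 - chance_agreement)

# e.g. cohens_kappa([1, 2, 2, 3], [1, 2, 3, 3]) is about 0.64
```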

  1. Europlanet/IDIS: Combining Diverse Planetary Observations and Models

    Science.gov (United States)

    Schmidt, Walter; Capria, Maria Teresa; Chanteur, Gerard

    2013-04-01

    Planetary research involves a diversity of research fields from astrophysics and plasma physics to atmospheric physics, climatology, spectroscopy and surface imaging. Data from all these disciplines are collected from various space-borne platforms or telescopes, supported by modelling teams and laboratory work. In order to interpret one set of data, supporting data from different disciplines and other missions are often needed, while the scientist does not always have the detailed expertise to access and utilize these observations. The Integrated and Distributed Information System (IDIS) [1], developed in the framework of the Europlanet-RI project, implements a Virtual Observatory approach ([2] and [3]), where different data sets, stored in archives around the world and in different formats, are accessed, re-formatted and combined to meet the user's requirements without the need of familiarizing oneself with the different technical details. While observational astrophysical data from different observatories could already be accessed via Virtual Observatories, this concept is now extended to diverse planetary data and related model data sets, spectral databases, etc. A dedicated XML-based Europlanet Data Model (EPN-DM) [4] was developed based on data models from the planetary science community and the Virtual Observatory approach. A dedicated editor simplifies the registration of new resources. As the EPN-DM is a super-set of existing data models, existing archives, as well as new spectroscopic or chemical databases for the interpretation of atmospheric or surface observations, or even modelling facilities at research institutes in Europe or Russia, can be easily integrated and accessed via a Table Access Protocol (EPN-TAP) [5] adapted from the corresponding protocol of the International Virtual Observatory Alliance [6] (IVOA-TAP). EPN-TAP allows users to search catalogues, retrieve data and make them available through standard IVOA tools if the access to the archive

  2. Observational constraints on successful model of quintessential Inflation

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Sami, M. [Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi 110025 (India); Saridakis, Emmanuel N. [Physics Division, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Starobinsky, Alexei A., E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: sami@iucaa.ernet.in, E-mail: Emmanuel_Saridakis@baylor.edu, E-mail: alstar@landau.ac.ru [L. D. Landau Institute for Theoretical Physics RAS, Moscow 119334 (Russian Federation)

    2017-06-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λ φⁿ / M_Plⁿ), n > 1. The model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era, with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study the post-inflationary dynamics and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to be log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10⁻² (2.32 × 10⁻²), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses Σ m_ν ≲ 2.5 eV is significantly enhanced compared to that in the standard ΛCDM model.
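
    For this potential the slow-roll parameters follow directly by differentiation: one finds ε = (λ²n²/2)(φ/M_Pl)^(2n−2) and η = λ²n²(φ/M_Pl)^(2n−2) − λn(n−1)(φ/M_Pl)^(n−2). The short symbolic check below is an illustration added here, not part of the paper:

        import sympy as sp

        phi, lam, n, M = sp.symbols('phi lambda n M_Pl', positive=True)
        V = sp.exp(-lam * phi**n / M**n)          # overall normalisation cancels in the ratios below

        eps = sp.simplify(M**2 / 2 * (sp.diff(V, phi) / V)**2)    # epsilon = (M_Pl^2/2)(V'/V)^2
        eta = sp.simplify(M**2 * sp.diff(V, phi, 2) / V)          # eta = M_Pl^2 V''/V
        print(eps)
        print(eta)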

  3. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    Science.gov (United States)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  4. Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model

    Directory of Open Access Journals (Sweden)

    K. E. Pickering

    2010-01-01

    Full Text Available Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and the Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone-enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to the OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that this model reproduces the pattern of convective event occurrence on a daily basis during the summer of 2006 over the USA, even though it does not well represent the relative distribution of lightning intensities. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/flash for lightning occurring in midlatitudes, which better agrees with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.

  5. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    International Nuclear Information System (INIS)

    Béthermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Magdis, Georgios; Hezaveh, Yashar; Le Borgne, Damien; Buat, Véronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-01-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  6. Immortalized keratinocytes derived from patients with epidermolytic ichthyosis reproduce the disease phenotype: a useful in vitro model for testing new treatments.

    Science.gov (United States)

    Chamcheu, J C; Pihl-Lundin, I; Mouyobo, C E; Gester, T; Virtanen, M; Moustakas, A; Navsaria, H; Vahlquist, A; Törmä, H

    2011-02-01

    Epidermolytic ichthyosis (EI) is a skin fragility disorder caused by mutations in genes encoding suprabasal keratins 1 and 10. While the aetiology of EI is known, model systems are needed for pathophysiological studies and development of novel therapies. To generate immortalized keratinocyte lines from patients with EI for studies of EI cell pathology and the effects of chemical chaperones as putative therapies. We derived keratinocytes from three patients with EI and one healthy control and established immortalized keratinocytes using human papillomavirus 16-E6/E7. Growth and differentiation characteristics, ability to regenerate organotypic epidermis, keratin expression, formation of cytoskeletal aggregates, and responses to heat shock and chemical chaperones were assessed. The cell lines EH11 (K1_p.Val176_Lys197del), EH21 (K10_p.156Arg>Gly), EH31 (K10_p.Leu161_Asp162del) and NKc21 (wild-type) currently exceed 160 population doublings and differentiate when exposed to calcium. At resting state, keratin aggregates were detected in 9% of calcium-differentiated EH31 cells, but not in any other cell line. Heat stress further increased this proportion to 30% and also induced aggregates in 3% of EH11 cultures. Treatment with trimethylamine N-oxide and 4-phenylbutyrate (4-PBA) reduced the fraction of aggregate-containing cells and affected the mRNA expression of keratins 1 and 10 while 4-PBA also modified heat shock protein 70 (HSP70) expression. Furthermore, in situ proximity ligation assay suggested a colocalization between HSP70 and keratins 1 and 10. Reconstituted epidermis from EI cells cornified but EH21 and EH31 cells produced suprabasal cytolysis, closely resembling the in vivo phenotype. These immortalized cell lines represent a useful model for studying EI biology and novel therapies. © 2011 The Authors. BJD © 2011 British Association of Dermatologists.

  7. Capacitance Online Estimation Based on Adaptive Model Observer

    Directory of Open Access Journals (Sweden)

    Cen Zhaohui

    2016-01-01

    Full Text Available As a basic component in electrical and electronic devices, capacitors are very popular in electrical circuits. Conventional capacitors such as electrolytic capacitors are prone to degradation, aging and fatigue due to long-time operation and external damage such as mechanical and electrical stresses. In this paper, a novel online capacitance measurement/estimation approach is proposed. Firstly, an Adaptive Model Observer (AMO) is designed based on the capacitor's circuit equations. Secondly, the AMO's stability and convergence are analysed and discussed. Finally, capacitors with different capacitances and different initial voltages in a buck converter topology are tested and validated. Simulation results demonstrate the effectiveness and superiority of our proposed approach.
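
    The abstract does not give the observer equations, so the following is only a generic sketch of the idea rather than the paper's AMO: a gradient-type adaptive observer that estimates C from measured capacitor current and voltage via C dv/dt = i(t), with the gains, excitation waveform and component values invented for illustration.

        import numpy as np

        dt, T = 1e-5, 0.1
        t = np.arange(0.0, T, dt)
        C_true = 470e-6                                  # "unknown" capacitance [F]
        i = 0.6 + 0.5 * np.sin(2 * np.pi * 100 * t)      # persistently exciting current [A]

        v = np.zeros_like(t)                             # simulate the true capacitor voltage
        for k in range(1, t.size):
            v[k] = v[k - 1] + dt * i[k - 1] / C_true

        # Observer: dv_hat/dt = i*theta_hat + k_e*(v - v_hat), with theta_hat estimating 1/C,
        # and the Lyapunov-motivated update dtheta_hat/dt = gamma * i * (v - v_hat).
        k_e, gamma = 500.0, 2e5
        v_hat, theta_hat = 0.0, 1.0 / 100e-6             # deliberately poor initial guess of C
        for k in range(1, t.size):
            e = v[k - 1] - v_hat
            v_hat += dt * (i[k - 1] * theta_hat + k_e * e)
            theta_hat += dt * gamma * i[k - 1] * e

        print("estimated C = %.0f uF (true = %.0f uF)" % (1e6 / theta_hat, 1e6 * C_true))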

  8. Venus Aerosol Properties from Modelling and Akatsuki IR2 Observations

    Science.gov (United States)

    McGouldrick, K.

    2017-09-01

    I am creating computer simulations of the clouds of Venus. In these simulations, I make changes to the properties of the aerosols that affect their ability to form, grow, evaporate, or combine with other particles. I then use the results of these models to predict how bright or dark Venus might appear when viewed at infrared wavelengths. By comparing this calculated brightness with the infrared observations made by the IR2 infrared camera on the Akatsuki spacecraft (currently orbiting Venus since its arrival in December 2015, built by the Japan Aerospace Exploration Agency (JAXA)), I hope to explain the causes for the changes that are seen to occur in the clouds via these and other images.

  9. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    Science.gov (United States)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

    Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. in 2013 for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
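
    The bias-correction step can be illustrated with the simpler empirical quantile mapping that distribution-mapping methods such as KDDM refine; the sketch below is not the NARCCAP/KDDM code, and the temperature samples are synthetic.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_target):
            """Empirical quantile mapping: send each target value through the model CDF
            (built from model_hist) onto the observed quantile function."""
            model_sorted, obs_sorted = np.sort(model_hist), np.sort(obs_hist)
            p = np.interp(model_target, model_sorted, np.linspace(0.0, 1.0, model_sorted.size))
            return np.interp(p, np.linspace(0.0, 1.0, obs_sorted.size), obs_sorted)

        # Synthetic daily maximum temperatures: the "model" runs too cool and too narrow.
        rng = np.random.default_rng(0)
        obs = rng.normal(30.0, 5.0, 3000)
        mod = rng.normal(27.0, 3.5, 3000)
        corrected = quantile_map(mod, obs, mod)
        print("corrected mean/std: %.1f / %.1f" % (corrected.mean(), corrected.std()))  # roughly 30 / 5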

  10. Evaluating climate model performance with various parameter sets using observations over the recent past

    Directory of Open Access Journals (Sweden)

    M. F. Loutre

    2011-05-01

    Full Text Available Many sources of uncertainty limit the accuracy of climate projections. Among them, we focus here on the parameter uncertainty, i.e. the imperfect knowledge of the values of many physical parameters in a climate model. Therefore, we use LOVECLIM, a global three-dimensional Earth system model of intermediate complexity, and vary several parameters within a range based on the expert judgement of model developers. Nine climatic parameter sets and three carbon cycle parameter sets are selected because they yield present-day climate simulations coherent with observations and they cover a wide range of climate responses to doubled atmospheric CO2 concentration and freshwater flux perturbation in the North Atlantic. Moreover, they also lead to a large range of atmospheric CO2 concentrations in response to prescribed emissions. Consequently, we have at our disposal 27 alternative versions of LOVECLIM (each corresponding to one parameter set) that provide very different responses to some climate forcings. The 27 model versions are then used to illustrate the range of responses provided over the recent past, to compare the time evolution of climate variables over the time interval for which they are available (the last few decades up to more than one century) and to identify the outliers and the "best" versions over that particular time span. For example, between 1979 and 2005, the simulated global annual mean surface temperature increase ranges from 0.24 °C to 0.64 °C, while the simulated increase in atmospheric CO2 concentration varies between 40 and 50 ppmv. Measurements over the same period indicate an increase in global annual mean surface temperature of 0.45 °C (Brohan et al., 2006) and an increase in atmospheric CO2 concentration of 44 ppmv (Enting et al., 1994; GLOBALVIEW-CO2, 2006). Only a few parameter sets yield simulations that reproduce the observed key variables of the climate system over the last

  11. Recent trends of high-latitude vegetation activity assessed and explained by contrasting modelling approaches with earth observation data

    Science.gov (United States)

    Forkel, M.; Carvalhais, N.; Reichstein, M.; Thonicke, K.

    2012-04-01

    Satellite observations of Normalized Difference Vegetation Index (NDVI) have shown increasing trends in the arctic tundra and the boreal forests since the 1980s. This greening is related to an increase in photosynthetic activity and is driven by increasing temperatures and a prolongation of the growing season. However, NDVI has decreased in large regions of the boreal forests since the mid-1990s. This browning is related to fire disturbances, temperature-induced summer drought and potentially to insect infestations and diseases. Terrestrial biosphere models (TBM) can be used to assess the impacts of these changes in vegetation productivity on the carbon and water cycles and on the climate system. In general, these models provide descriptions of ecosystem processes and states that are forced by and feed back to the climate system, such as photosynthesis and transpiration, ecosystem respiration, soil carbon and water stocks and vegetation composition. The evaluation of TBMs against observations is a necessary step to assess their suitability to simulate such processes and dynamics. The increasing availability of long-term observations of vegetation activity enables us to evaluate the models' ability to diagnose these vegetation greening and browning trends in arctic and boreal regions. The first aim of this study is to evaluate trends in vegetation activity in high-latitude regions as simulated by TBMs against observed trends in vegetation activity. The second aim is to identify potential drivers of these observed and simulated trends to evaluate the ability of models to reproduce the observed functional relations between climatic and environmental drivers and the vegetation trends. The trends in vegetation activity were estimated for a set of satellite-based remote sensing products: NDVI from AVHRR (Advanced Very High Resolution Radiometer) and MODIS (Moderate Resolution Imaging Spectroradiometer), as well as FAPAR observations (Fraction of Observed Photosynthetically

  12. Modelling of particular phenomena observed in PANDA with Gothic

    International Nuclear Information System (INIS)

    Bandurski, Th.; Putz, F.; Andreani, M.; Analytis, M.

    2000-01-01

    PANDA is a large-scale facility for investigating the long-term decay heat removal from the containment of a next generation 'passive' Advanced Light Water Reactor (ALWR). The first test series was aimed at the investigation of the long-term LOCA response of the Passive Containment Cooling System (PCCS) for the General Electric (GE) Simplified Boiling Water Reactor (SBWR). More recently, the facility has been used in the framework of two European projects for investigating the performance of four passive cooling systems, i.e. the Building Condenser (BC) designed by Siemens for the SWR-1000 long-term containment cooling, the Passive Containment Cooling System for the European Simplified Boiling Water Reactor (ESBWR), the Containment Plate Condenser (CPC) and the Isolation Condenser (IC) for cooling of a BWR core. The PANDA tests have the dual objectives of improving confidence in the performance of the passive heat removal mechanisms underlying the design of the tested safety systems and extending the data base available for containment analysis code qualification. Among others, the containment analysis code Gothic was chosen for the analysis of particular phenomena observed during the PANDA tests. This paper presents selected safety-relevant phenomena observed in the PANDA tests and identified for the analyses and possible approaches for their modeling with Gothic. (author)

  13. Confronting the outflow-regulated cluster formation model with observations

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Fumitaka [National Astronomical Observatory, Mitaka, Tokyo 181-8588 (Japan); Li, Zhi-Yun, E-mail: fumitaka.nakamura@nao.ac.jp, E-mail: zl4h@virginia.edu [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States)

    2014-03-10

    Protostellar outflows have been shown theoretically to be capable of maintaining supersonic turbulence in cluster-forming clumps and keeping the star formation rate per free-fall time as low as a few percent. We aim to test two basic predictions of this outflow-regulated cluster formation model, namely, (1) the clump should be close to virial equilibrium and (2) the turbulence dissipation rate should be balanced by the outflow momentum injection rate, using recent outflow surveys toward eight nearby cluster-forming clumps (B59, L1551, L1641N, Serpens Main Cloud, Serpens South, ρ Oph, IC 348, and NGC 1333). We find, for almost all sources, that the clumps are close to virial equilibrium and the outflow momentum injection rate exceeds the turbulence momentum dissipation rate. In addition, the outflow kinetic energy is significantly smaller than the clump gravitational energy for intermediate and massive clumps with M_cl ≳ a few × 10² M_☉, suggesting that the outflow feedback is not enough to disperse the clump as a whole. The number of observed protostars also indicates that the star formation rate per free-fall time is as small as a few percent for all clumps. These observationally based results strengthen the case for outflow-regulated cluster formation.

  14. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2015-10-01

    Full Text Available Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser (30%) rate, in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  15. Modeling the Ionosphere with GPS and Rotation Measure Observations

    Science.gov (United States)

    Malins, J. B.; Taylor, G. B.; White, S. M.; Dowell, J.

    2017-12-01

    Advances in digital processing have created new tools for looking at and examining the ionosphere. We have combined data from dual-frequency GPS receivers, digital ionosondes and observations from The Long Wavelength Array (LWA), a 256 dipole low frequency radio telescope situated in central New Mexico, in order to examine ionospheric profiles. By studying polarized pulsars, the LWA is able to very accurately determine the Faraday rotation caused by the ionosphere. By combining this data with the international geomagnetic reference field, the LWA can evaluate ionospheric profiles and how well they predict the actual Faraday rotation. Dual frequency GPS measurements of total electron content, as well as measurements from digisonde data, were used to model the ionosphere and to predict the Faraday rotation to within 0.1 rad/m². Additionally, it was discovered that the predicted topside profile of the digisonde data did not accurately predict Faraday rotation measurements, suggesting a need to reexamine the methods for creating the topside predicted profile. I will discuss the methods used to measure rotation measures and ionospheric profiles as well as discuss possible corrections to the topside model.
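
    As background for the 0.1 rad/m² comparison, the thin-shell estimate that links GPS-derived TEC and the geomagnetic field to an ionospheric rotation measure can be sketched as follows; the constant is the standard SI Faraday-rotation coefficient, while the TEC, field strength and wavelength are illustrative values rather than the study's numbers.

        # RM ~ K * B_parallel * TEC, with K = e^3 / (8 * pi^2 * eps0 * m_e^2 * c^3) in SI units.
        K = 2.63e-13            # [rad m^2 per (T * m^-2)]
        TEC = 25.0e16           # slant total electron content [electrons m^-2] (~25 TECU), assumed
        B_par = 4.5e-5          # line-of-sight geomagnetic field component [T], assumed
        wavelength = 4.0        # observing wavelength [m], i.e. ~75 MHz in the LWA band

        rm = K * B_par * TEC                    # rotation measure [rad m^-2]
        chi = rm * wavelength ** 2              # rotation of the polarisation plane [rad]
        print("RM = %.2f rad/m^2, rotation at %.0f MHz = %.1f rad" % (rm, 3e8 / wavelength / 1e6, chi))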

  16. Representation of tropical deep convection in atmospheric models – Part 1: Meteorology and comparison with satellite observations

    Directory of Open Access Journals (Sweden)

    M. R. Russo

    2011-03-01

    Full Text Available Fast convective transport in the tropics can efficiently redistribute water vapour and pollutants up to the upper troposphere. In this study we compare tropical convection characteristics for the year 2005 in a range of atmospheric models, including numerical weather prediction (NWP) models, chemistry transport models (CTMs), and chemistry-climate models (CCMs). The model runs have been performed within the framework of the SCOUT-O3 (Stratospheric-Climate Links with Emphasis on the Upper Troposphere and Lower Stratosphere) project. The characteristics of tropical convection, such as seasonal cycle, land/sea contrast and vertical extent, are analysed using satellite observations as a benchmark for model simulations. The observational datasets used in this work comprise precipitation rates, outgoing longwave radiation, cloud-top pressure, and water vapour from a number of independent sources, including ERA-Interim analyses. Most models are generally able to reproduce the seasonal cycle and strength of precipitation for continental regions but show larger discrepancies with observations for the Maritime Continent region. The frequency distribution of high clouds from models and observations is calculated using highly temporally-resolved (up to 3-hourly) cloud top data. The percentage of clouds above 15 km varies significantly between the models. Vertical profiles of water vapour in the upper troposphere-lower stratosphere (UTLS) show large differences between the models which can only be partly attributed to temperature differences. If a convective plume reaches above the level of zero net radiative heating, which is estimated to be ~15 km in the tropics, the air detrained from it can be transported upwards by radiative heating into the lower stratosphere. In this context, we discuss the role of tropical convection as a precursor for the transport of short-lived species into the lower stratosphere.

  17. Observed and CMIP5 modeled influence of large-scale circulation on summer precipitation and drought in the South-Central United States

    Science.gov (United States)

    Ryu, Jung-Hee; Hayhoe, Katharine

    2017-12-01

    Annual precipitation in the largely agricultural South-Central United States is characterized by a primary wet season in May and June, a mid-summer dry period in July and August, and a second precipitation peak in September and October. Of the 22 CMIP5 global climate models with sufficient output available, 16 are able to reproduce this bimodal distribution (we refer to these as "BM" models), while 6 have trouble simulating the mid-summer dry period, instead producing an extended wet season ("EW" models). In BM models, the timing and amplitude of the mid-summer westward extension of the North Atlantic Subtropical High (NASH) are realistic, while the magnitude of the Great Plains Lower Level Jet (GPLLJ) tends to be overestimated, particularly in July. In EW models, temporal variations and geophysical locations of the NASH and GPLLJ appear reasonable compared to reanalysis but their magnitudes are too weak to suppress mid-summer precipitation. During warm-season droughts, however, both groups of models reproduce the observed tendency towards a stronger NASH that remains over the region through September, and an intensification and northward extension of the GPLLJ. Similarly, future simulations from both model groups under a +1 to +3 °C transient increase in global mean temperature show decreases in summer precipitation concurrent with an enhanced NASH and an intensified GPLLJ, though models differ regarding the months in which these decreases are projected to occur: early summer in the BM models, and late summer in the EW models. Overall, these results suggest that projected future decreases in summer precipitation over the South-Central region appear to be closely related to anomalous patterns of large-scale circulation already observed and modeled during historical dry years, patterns that are consistently reproduced by CMIP5 models.

  18. Are classifications of proximal radius fractures reproducible?

    Directory of Open Access Journals (Sweden)

    dos Santos João BG

    2009-10-01

    Full Text Available Abstract Background Fractures of the proximal radius need to be classified in an appropriate and reproducible manner. The aim of this study was to assess the reliability of the three most widely used classification systems. Methods Elbow radiograph images of patients with proximal radius fractures were classified according to the Mason, Morrey, and Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation (AO/ASIF) classifications by four observers with different experience with this subject to assess their intra- and inter-observer agreement. Each observer analyzed the images on three different occasions on a computer with the numerical sequence randomly altered. Results We found that intra-observer agreement of the Mason and Morrey classifications was satisfactory (κ = 0.582 and 0.554, respectively), while the AO/ASIF classification had poor intra-observer agreement (κ = 0.483). Inter-observer agreement was higher in the Mason (κ = 0.429-0.560) and Morrey (κ = 0.319-0.487) classifications than in the AO/ASIF classification (κ = 0.250-0.478), which showed poor reliability. Conclusion Inter- and intra-observer agreement of the Mason and Morrey classifications showed overall satisfactory reliability when compared to the AO/ASIF system. The Mason classification is the most reliable system.

  19. Observations

    DEFF Research Database (Denmark)

    Rossiter, John R.; Percy, Larry

    2013-01-01

    product or service or to achieve a higher price that consumers are willing to pay than would obtain in the absence of advertising. What has changed in recent years is the notable worsening of the academic-practitioner divide, which has seen academic advertising researchers pursuing increasingly...... as requiring a new model of how advertising communicates and persuades, which, as the authors' textbooks explain, is sheer nonsense and contrary to the goal of integrated marketing. We provide in this article a translation of practitioners' jargon into more scientifically acceptable terminology as well...

  20. Models for the water-ice librational band in cool dust: possible observational test

    Science.gov (United States)

    Robinson, G.

    2014-01-01

    Of all the water-ice (H2O-ice) bands the librational band, occurring at a wavelength of about 12 μm, has proved to be the most difficult to detect observationally and also to reproduce in radiative transfer models. In fact, the case for the positive identification of the feature is strong in only a few astronomical objects. A previously suggested explanation for this is that so-called radiative transfer effects may mask the feature. In this paper, radiative transfer models are produced which unambiguously reveal the presence of the librational band as a separate resolved feature provided that there is no dust present which radiates significantly in the 10-μm region, specifically silicate-type dust. This means that the maximum dust temperature must be ≲50 K. In this case, the models indicate that the librational band may clearly be observed as an absorption feature against the stellar continuum. This suggests that the feature may be best observed by obtaining the 10-μm spectrum of stars either with very cool circumstellar dust shells, with Tmax ≲ 50 K, or those without circumstellar dust shells at all but with interstellar extinction. The first option might, however, require unrealistically large amounts of dust in the circumstellar shell in order to produce measurable absorption. Thus, the best place to look for the water-ice librational band may not be protostars with the remnants of their dust cloud still present, or evolved objects with ejected dust shells, as one might first think, because of the warm dust (Tmax ≫ 50 K) usually present in the shells of these objects. If objects associated with very cool dust exclusively do show the 3.1-μm water-ice band in deep absorption, but the librational band still does not appear, this may imply that it is not radiative transfer effects which suppress the librational band, and that some other mechanism for its suppression is in play. One possibility is that a low water-ice to silicate abundance may mask the

  1. Odessa Tsunami of 27 June 2014: Observations and Numerical Modelling

    Science.gov (United States)

    Šepić, Jadranka; Rabinovich, Alexander B.; Sytov, Victor N.

    2017-11-01

    On 27 June, a 1-2-m high wave struck the beaches of Odessa, the third largest Ukrainian city, and the neighbouring port-town Illichevsk (northwestern Black Sea). Throughout the day, prominent seiche oscillations were observed in several other ports of the Black Sea. Tsunamigenic synoptic conditions were found over the Black Sea, stretching from Romania in the west to the Crimean Peninsula in the east. Intense air pressure disturbances and convective thunderstorm clouds were associated with these conditions; right at the time of the event, a 1.5-hPa air pressure jump was recorded at Odessa and a few hours earlier in Romania. We have utilized a barotropic ocean numerical model to test two hypotheses: (1) a tsunami-like wave was generated by an air pressure disturbance propagating directly over Odessa ("Experiment 1"); (2) a tsunami-like wave was generated by an air pressure disturbance propagating offshore, approximately 200 km to the south of Odessa, and along the shelf break ("Experiment 2"). Both experiments decisively confirm the meteorological origin of the tsunami-like waves on the coast of Odessa and imply that intensified long ocean waves in this region were generated via the Proudman resonance mechanism while propagating over the northwestern Black Sea shelf. The "Odessa tsunami" of 27 June 2014 was identified as a "beach meteotsunami", similar to events regularly observed on the beaches of Florida, USA, but different from the "harbour meteotsunamis", which occurred 1-3 days earlier in Ciutadella (Baleares, Spain), Mazara del Vallo (Sicily, Italy) and Vela Luka (Croatia) in the Mediterranean Sea, even though they were associated with the same atmospheric system moving over the Mediterranean/Black Sea region on 23-27 June 2014.
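
    The Proudman resonance invoked above occurs when the air-pressure disturbance travels at the local shallow-water wave speed sqrt(g*h). The short sketch below uses illustrative disturbance speeds (not values from the study) to show the shelf depths at which that happens, and why resonance is needed to turn a ~1.5 hPa pressure jump into a metre-scale wave.

        g = 9.81
        for speed in (20.0, 25.0, 30.0):                 # candidate disturbance speeds [m/s], assumed
            print("U = %4.1f m/s -> resonant depth h = U^2/g ~ %5.1f m" % (speed, speed ** 2 / g))

        # A 1.5 hPa pressure jump alone gives only the static inverse-barometer response,
        # dP/(rho*g) ~ 1.5 cm; Proudman resonance over the shallow shelf must amplify this
        # by roughly two orders of magnitude to reach the observed 1-2 m wave heights.
        print("inverse-barometer response: %.3f m" % (150.0 / (1025.0 * 9.81)))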

  2. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect of losing streams, i.e., the loss of river water to groundwater. Through the analysis of observed data of soil moisture obtained from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that the regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  3. Evaluation of the agonist PET radioligand [¹¹C]GR103545 to image kappa opioid receptor in humans: kinetic model selection, test-retest reproducibility and receptor occupancy by the antagonist PF-04455242.

    Science.gov (United States)

    Naganawa, Mika; Jacobsen, Leslie K; Zheng, Ming-Qiang; Lin, Shu-Fei; Banerjee, Anindita; Byon, Wonkyung; Weinzimmer, David; Tomasi, Giampaolo; Nabulsi, Nabeel; Grimwood, Sarah; Badura, Lori L; Carson, Richard E; McCarthy, Timothy J; Huang, Yiyun

    2014-10-01

    Kappa opioid receptors (KOR) are implicated in several brain disorders. In this report, a first-in-human positron emission tomography (PET) study was conducted with the potent and selective KOR agonist tracer, [(11)C]GR103545, to determine an appropriate kinetic model for analysis of PET imaging data and assess the test-retest reproducibility of model-derived binding parameters. The non-displaceable distribution volume (V(ND)) was estimated from a blocking study with naltrexone. In addition, KOR occupancy of PF-04455242, a selective KOR antagonist that is active in preclinical models of depression, was also investigated. For determination of a kinetic model and evaluation of test-retest reproducibility, 11 subjects were scanned twice with [(11)C]GR103545. Seven subjects were scanned before and 75 min after oral administration of naltrexone (150 mg). For the KOR occupancy study, six subjects were scanned at baseline and 1.5 h and 8 h after an oral dose of PF-04455242 (15 mg, n=1 and 30 mg, n=5). Metabolite-corrected arterial input functions were measured and all scans were 150 min in duration. Regional time-activity curves (TACs) were analyzed with 1- and 2-tissue compartment models (1TC and 2TC) and the multilinear analysis (MA1) method to derive regional volume of distribution (V(T)). Relative test-retest variability (TRV), absolute test-retest variability (aTRV) and intra-class correlation coefficient (ICC) were calculated to assess test-retest reproducibility of regional VT. Occupancy plots were computed for blocking studies to estimate occupancy and V(ND). The half maximal inhibitory concentration (IC50) of PF-04455242 was determined from occupancies and drug concentrations in plasma. [(11)C]GR103545 in vivo K(D) was also estimated. Regional TACs were well described by the 2TC model and MA1. However, 2TC VT was sometimes estimated with high standard error. Thus MA1 was the model of choice. Test-retest variability was ~15%, depending on the outcome measure. The blocking
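
    The occupancy plots mentioned above are typically Lassen-style plots: regressing the blocking-induced drop in VT on baseline VT across regions gives the occupancy as the slope and V(ND) as the x-intercept. A minimal sketch with invented regional VT values (not the study's data):

        import numpy as np

        VT_base = np.array([3.2, 2.8, 2.4, 2.0, 1.6, 1.2])   # baseline VT per region [mL/cm^3], invented
        VT_drug = np.array([2.0, 1.8, 1.5, 1.3, 1.1, 0.9])   # VT after the blocking dose, invented

        # (VT_base - VT_drug) = occ * VT_base - occ * V_ND, so the regression slope is the
        # occupancy and the x-intercept is V_ND.
        slope, intercept = np.polyfit(VT_base, VT_base - VT_drug, 1)
        occupancy = slope
        V_ND = -intercept / slope
        print("occupancy = %.0f%%, V_ND = %.2f mL/cm^3" % (100 * occupancy, V_ND))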

  4. Observation- and model-based estimates of particulate dry nitrogen deposition to the oceans

    Directory of Open Access Journals (Sweden)

    A. R. Baker

    2017-07-01

    expected to be more robust than TM4, while TM4 gives access to speciated parameters (NO3− and NH4+) that are more relevant to the observed parameters and which are not available in ACCMIP. Dry deposition fluxes (CalDep) were calculated from the observed concentrations using estimates of dry deposition velocities. Model–observation ratios (RA,n), weighted by grid-cell area and number of observations, were used to assess the performance of the models. Comparison in the three study regions suggests that TM4 overestimates NO3− concentrations (RA,n = 1.4–2.9) and underestimates NH4+ concentrations (RA,n = 0.5–0.7), with spatial distributions in the tropical Atlantic and northern Indian Ocean not being reproduced by the model. In the case of NH4+ in the Indian Ocean, this discrepancy was probably due to seasonal biases in the sampling. Similar patterns were observed in the various comparisons of CalDep to ModDep (RA,n = 0.6–2.6 for NO3−, 0.6–3.1 for NH4+). Values of RA,n for NHx CalDep–ModDep comparisons were approximately double the corresponding values for NH4+ CalDep–ModDep comparisons due to the significant fraction of gas-phase NH3 deposition incorporated in the TM4 and ACCMIP NHx model products. All of the comparisons suffered due to the scarcity of observational data and the large uncertainty in dry deposition velocities used to derive deposition fluxes from concentrations. These uncertainties have been a major limitation on estimates of the flux of material to the oceans for several decades. Recommendations are made for improvements in N deposition estimation through changes in observations, modelling and model–observation comparison procedures. Validation of modelled dry deposition requires effective comparisons to observable aerosol-phase species' concentrations, and this cannot be achieved if model products only report dry deposition flux over the ocean.

  5. Observed seasonal cycles in tropospheric ozone at three marine boundary layer locations and their comparison with models

    Science.gov (United States)

    Derwent, Richard

    2016-04-01

    Observational data have been used to define the seasonal cycles in tropospheric ozone at the surface at three marine boundary layer (MBL) locations at Mace Head in Ireland, Trinidad Head in the USA and at Cape Grim in Tasmania. Least-squares fits of a sine function to the observed monthly mean ozone mixing ratios allowed ozone seasonal cycles to be defined quantitatively, as follows: y = Y0 + A1 sin(θ + φ1) + A2 sin(2θ + φ2), where Y0 is the annual average ozone mixing ratio over the entire set of observations or model results, A1 and A2 are amplitudes, φ1 and φ2 are phase angles and θ is a variable that spans one year's time period in radians. The seasonal cycles of fourteen tropospheric ozone models, together with our own STOCHEM-CRI model, at the three MBL stations were then analysed by fitting sine curves and defining the five parameters: Y0, A1, φ1, A2, φ2. Compared to the fundamental term: A1 sin(θ + φ1), all models more accurately reproduced the observed second harmonic terms: A2 sin(2θ + φ2). This accurate agreement both in amplitude and phase angle suggested that the term arose from a cyclic phenomenon that was well predicted by all models, namely, the photochemical destruction of ozone. Model treatments of the fundamental term were in many cases far removed from the observations and it was not clear why there was so much variability across the tropospheric ozone models.
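
    The two-harmonic fit quoted above is linear once written in sin/cos components; a minimal sketch (with invented monthly means, not the Mace Head, Trinidad Head or Cape Grim data) recovering Y0, A1, φ1, A2 and φ2 by least squares:

        import numpy as np

        # Invented monthly mean ozone mixing ratios [ppb] for one station.
        y = np.array([38, 40, 43, 45, 44, 40, 34, 30, 29, 31, 34, 36], dtype=float)
        theta = 2.0 * np.pi * (np.arange(12) + 0.5) / 12.0       # month centres [rad]

        # Linear least squares on the sin/cos expansion of the two-harmonic model.
        X = np.column_stack([np.ones_like(theta),
                             np.sin(theta), np.cos(theta),
                             np.sin(2.0 * theta), np.cos(2.0 * theta)])
        Y0, a1, b1, a2, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

        A1, phi1 = np.hypot(a1, b1), np.arctan2(b1, a1)          # A1*sin(theta+phi1) = a1*sin + b1*cos
        A2, phi2 = np.hypot(a2, b2), np.arctan2(b2, a2)
        print("Y0=%.1f  A1=%.1f phi1=%.2f  A2=%.1f phi2=%.2f" % (Y0, A1, phi1, A2, phi2))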

  6. Mapping urban air quality in near real-time using observations from low-cost sensors and model information.

    Science.gov (United States)

    Schneider, Philipp; Castell, Nuria; Vogt, Matthias; Dauge, Franck R; Lahoz, William A; Bartonova, Alena

    2017-09-01

    The recent emergence of low-cost microsensors measuring various air pollutants has significant potential for carrying out high-resolution mapping of air quality in the urban environment. However, the data obtained by such sensors are generally less reliable than those from standard equipment and they are subject to significant data gaps in both space and time. In order to overcome this issue, we present here a data fusion method based on geostatistics that allows for merging observations of air quality from a network of low-cost sensors with spatial information from an urban-scale air quality model. The performance of the methodology is evaluated for nitrogen dioxide in Oslo, Norway, using both simulated datasets and real-world measurements from a low-cost sensor network for January 2016. The results indicate that the method is capable of producing realistic hourly concentration fields of urban nitrogen dioxide that inherit the spatial patterns from the model and adjust the prior values using the information from the sensor network. The accuracy of the data fusion method is dependent on various factors including the total number of observations, their spatial distribution, their uncertainty (both in terms of systematic biases and random errors), as well as the ability of the model to provide realistic spatial patterns of urban air pollution. A validation against official data from air quality monitoring stations equipped with reference instrumentation indicates that the data fusion method is capable of reproducing city-wide averaged official values with an R² of 0.89 and a root mean squared error of 14.3 μg m⁻³. It is further capable of reproducing the typical daily cycles of nitrogen dioxide. Overall, the results indicate that the method provides a robust way of extracting useful information from uncertain sensor data using only a time-invariant model dataset and the knowledge contained within an entire sensor network. Copyright © 2017 The Authors.
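
    A much simpler stand-in for the paper's geostatistical fusion (not their method) conveys the idea: interpolate observation-minus-model residuals from the sensor locations and add them back onto the model field. All locations and concentrations below are synthetic.

        import numpy as np

        def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-6):
            """Inverse-distance-weighted interpolation of point values onto a grid."""
            d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
            w = 1.0 / (d + eps) ** power
            return (w * values).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(1)
        xy_obs = rng.uniform(0.0, 10.0, size=(20, 2))        # sensor locations [km], synthetic
        model_at_obs = 40.0 + 2.0 * xy_obs[:, 0]             # modelled NO2 at the sensors [ug/m3]
        obs = model_at_obs + rng.normal(5.0, 3.0, 20)        # sensors read ~5 ug/m3 higher

        gx, gy = np.meshgrid(np.linspace(0.0, 10.0, 11), np.linspace(0.0, 10.0, 11))
        xy_grid = np.column_stack([gx.ravel(), gy.ravel()])
        model_grid = 40.0 + 2.0 * xy_grid[:, 0]

        # Fused field = model field + interpolated observation-minus-model residuals.
        fused = model_grid + idw(xy_obs, obs - model_at_obs, xy_grid)
        print("mean adjustment: %+.1f ug/m3" % (fused.mean() - model_grid.mean()))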

  7. Asymmetric distribution of the ionospheric electric potential in the opposite hemispheres as inferred from the SuperDARN observations and FAC-based convection model

    DEFF Research Database (Denmark)

    Lukianova, R.; Hanuise, C.; Christiansen, Freddy

    2008-01-01

    We compare the SuperDARN convection patterns with the predictions of a new numerical model of the global distribution of ionospheric electric potentials. The model utilizes high-precision statistical maps of field-aligned currents (FAC) derived from measurements made by polar-orbiting low...... governed by the IMF clock angle and solar zenith angle. We calculate the convection patterns for specific cases caused by the sign of By and season and demonstrate the capability of the FAC-based model to reproduce the radar observations. The simulation confirms that the solar zenith angle should be linked...

  8. Electroweak Precision Observables in the Minimal Supersymmetric Standard Model

    CERN Document Server

    Heinemeyer, S; Weiglein, Georg

    2006-01-01

    The current status of electroweak precision observables in the Minimal Supersymmetric Standard Model (MSSM) is reviewed. We focus in particular on the $W$ boson mass, M_W, the effective leptonic weak mixing angle, sin^2 theta_eff, the anomalous magnetic moment of the muon, (g-2)_\\mu, and the lightest CP-even MSSM Higgs boson mass, m_h. We summarize the current experimental situation and the status of the theoretical evaluations. An estimate of the current theoretical uncertainties from unknown higher-order corrections and from the experimental errors of the input parameters is given. We discuss future prospects for both the experimental accuracies and the precision of the theoretical predictions. Confronting the precision data with the theory predictions within the unconstrained MSSM and within specific SUSY-breaking scenarios, we analyse how well the data are described by the theory. The mSUGRA scenario with cosmological constraints yields a very good fit to the data, showing a clear preference for a relativ...

  9. Simulation of temperature extremes in the Tibetan Plateau from CMIP5 models and comparison with gridded observations

    Science.gov (United States)

    You, Qinglong; Jiang, Zhihong; Wang, Dai; Pepin, Nick; Kang, Shichang

    2017-09-01

    Understanding changes in temperature extremes in a warmer climate is of great importance for society and for ecosystem functioning due to the potentially severe impacts of such extreme events. In this study, temperature extremes defined by the Expert Team on Climate Change Detection and Indices (ETCCDI) from CMIP5 models are evaluated by comparison with homogenized gridded observations at 0.5° resolution across the Tibetan Plateau (TP) for 1961-2005. Using statistical metrics, the models have been ranked in terms of their ability to reproduce patterns in extreme events similar to the observations. Four CMIP5 models perform well (BNU-ESM, HadGEM2-ES, CCSM4, CanESM2) and are used to create an optimal model ensemble (OME). Most temperature extreme indices in the OME are closer to the observations than in an ensemble using all models. The best performance is obtained for threshold temperature indices, while extreme/absolute value indices are slightly less well modelled. Thus the choice of models in the OME seems to have more influence on temperature extreme indices based on thresholds. There is no significant correlation between elevation and modelled bias of the extreme indices for either the optimal or the all-model ensemble. Furthermore, the minimum temperature (Tmin) shows significantly positive correlations with the longwave radiation and cloud variables, whereas the maximum temperature (Tmax) shows no such correlation with the shortwave radiation and cloud variables. This suggests that cloud-radiation differences influence Tmin in each CMIP5 model to some extent, and thereby affect the temperature extremes based on Tmin.

  10. Interannual variation patterns of total ozone and lower stratospheric temperature in observations and model simulations

    Directory of Open Access Journals (Sweden)

    W. Steinbrecht

    2006-01-01

    Full Text Available We report results from a multiple linear regression analysis of long-term total ozone observations (1979 to 2000, by TOMS/SBUV), of temperature reanalyses (1958 to 2000, NCEP), and of two chemistry-climate model simulations (1960 to 1999, by ECHAM4.L39(DLR)/CHEM (= E39/C) and MAECHAM4-CHEM). The model runs are transient experiments, where observed sea surface temperatures, increasing source gas concentrations (CO2, CFCs, CH4, N2O, NOx), the 11-year solar cycle, volcanic aerosols and the quasi-biennial oscillation (QBO) are all accounted for. MAECHAM4-CHEM covers the atmosphere from the surface up to 0.01 hPa (≈80 km). For a proper representation of middle atmosphere (MA) dynamics, it includes a parametrization for momentum deposition by dissipating gravity wave spectra. E39/C, on the other hand, has its top layer centered at 10 hPa (≈30 km). It is targeted on processes near the tropopause, and has more levels in this region. Despite some problems, both models generally reproduce the observed amplitudes and much of the observed low-latitude patterns of the various modes of interannual variability in total ozone and lower stratospheric temperature. In most aspects MAECHAM4-CHEM performs slightly better than E39/C. MAECHAM4-CHEM overestimates the long-term decline of total ozone, whereas E39/C underestimates the decline over Antarctica and at northern mid-latitudes. The true long-term decline in winter and spring above the Arctic may be underestimated by a lack of TOMS/SBUV observations in winter, particularly in the cold 1990s. Main contributions to the observed interannual variations of total ozone and lower stratospheric temperature at 50 hPa come from a linear trend (up to -10 DU/decade at high northern latitudes, up to -40 DU/decade at high southern latitudes, and around -0.7 K/decade over much of the globe), from the intensity of the polar vortices (more than 40 DU, or 8 K peak to peak), the QBO (up to 20 DU, or 2 K peak to peak), and from

  11. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  12. Polarimetry of Solar System Objects: Observations vs. Models

    Science.gov (United States)

    Yanamandra-Fisher, P. A.

    2014-04-01

    results of main belt comets, asteroids with ring systems, lunar studies, and planned exploration of planetary satellites that may harbour sub-surface oceans, there is an increasing need to include polarimetry (linear, circular and differential) as an integral observing mode of instruments and facilities. For laboratory measurements, there is a need to identify simulants that mimic the polarimetric behaviour of solar system small bodies and to measure that behaviour as a function of the various physical processes the bodies are subject to and of the radiation-driven changes their surfaces have undergone. Therefore, inclusion of polarimetric remote sensing and development of spectropolarimeters for ground-based facilities and for instruments on space missions are needed, together with similar maturation of vector radiative transfer models and related laboratory measurements.

  13. A Multi-model Study on Warm Precipitation Biases in Global Models Compared to Satellite Observations

    Science.gov (United States)

    Jing, X.; Suzuki, K.; Guo, H.; Goto, D.; Ogura, T.; Koshiro, T.; Mulmenstadt, J.

    2017-12-01

    The cloud-to-precipitation transition process in warm clouds simulated by state-of-the-art global climate models (GCMs), including both traditional climate models and a global cloud-resolving model, is evaluated against A-Train satellite observations. The models and satellite observations are compared in the form of the statistics obtained from combined analysis of multiple satellite observables that probe signatures of the cloud-to-precipitation transition process. One common problem identified among these models is the too frequent occurrence of warm precipitation. The precipitation is found to form when the cloud particle size and the liquid water path (LWP) are both much smaller than those in observations. The too efficient formation of precipitation is found to be compensated for by errors in cloud microphysical properties, such as underestimated cloud particle size and LWP, to an extent that varies among the models. However, this does not completely cancel the precipitation formation bias. Robust errors are also found in the evolution of cloud microphysical properties in the precipitation process in some GCMs, implying unrealistic interaction between precipitation and cloud water. Nevertheless, encouraging information is found for the future improvement of warm precipitation representations: the adoption of a more realistic autoconversion scheme or a subgrid variability scheme is shown to improve the triggering of precipitation and the evolution of cloud microphysical properties.
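
    As a concrete illustration of the kind of autoconversion scheme referred to above, the sketch below evaluates a Khairoutdinov–Kogan-style rate, in which the conversion of cloud water to rain depends nonlinearly on cloud water content and droplet number; the coefficients follow the commonly cited 2000 formulation, while the function name and example values are our own illustrative assumptions.

    ```python
    def kk2000_autoconversion(qc, nd):
        """Khairoutdinov-Kogan (2000)-style autoconversion rate.

        qc : cloud liquid water mixing ratio [kg/kg]
        nd : cloud droplet number concentration [cm^-3]
        Returns the rain-water production rate dqr/dt [kg/kg/s].
        """
        return 1350.0 * qc**2.47 * nd**(-1.79)

    # Example: the same cloud water content with clean vs. polluted droplet numbers.
    for nd in (50.0, 300.0):
        rate = kk2000_autoconversion(qc=5e-4, nd=nd)
        print(f"Nd = {nd:5.0f} cm^-3 -> autoconversion = {rate:.3e} kg/kg/s")
    ```

    The strong negative exponent on the droplet number is what makes precipitation onset sensitive to assumed (or parameterized) subgrid variability in cloud properties.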

  14. Inverse modelling of European N2O emissions. Assimilating observations from different networks

    Energy Technology Data Exchange (ETDEWEB)

    Corazza, M.; Bergamaschi, P.; Dentener, F. [European Commission Joint Research Centre, Institute for Environment and Sustainability, 21027 Ispra (Italy); Vermeulen, A.T.; Popa, E. [Energy research Centre of the Netherlands ECN, Petten (Netherlands); Aalto, T. [Finnish Meteorological Institute FMI, Helsinki (Finland); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Umweltbundesamt UBA, Messstelle Schauinsland, Kirchzarten (Germany); O'Doherty, S. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Thompson, R. [Laboratoire des Sciences du Climat et de l'Environnement LSCE, Gif sur Yvette (France); Moncrieff, J. [Edinburgh University, Edinburgh (United Kingdom); Steinbacher, M. [Swiss Federal Laboratories for Materials Science and Technology Empa, Duebendorf (Switzerland); Jordan, A. [Max Planck Institute for Biogeochemistry, Jena (Germany); Dlugokencky, E. [NOAA Earth System Research Laboratory, Global Monitoring Division, Boulder, CO (United States); Bruehl, C. [Max Planck Institute for Chemistry, Mainz (Germany); Krol, M. [Wageningen University and Research Centre WUR, Wageningen (Netherlands)

    2010-07-01

    We describe the setup and first results of an inverse modelling system for atmospheric N2O, based on a four-dimensional variational (4DVAR) technique and the atmospheric transport zoom model TM5. We focus in this study on the European domain, utilizing a comprehensive set of quasi-continuous measurements over Europe, complemented by N2O measurements from the Earth System Research Laboratory of the National Oceanic and Atmospheric Administration (NOAA/ESRL) cooperative global air sampling network. Despite ongoing measurement comparisons among networks, parallel measurements at a limited number of stations show that significant offsets exist among the different laboratories. Since the spatial gradients of N2O mixing ratios are of the same order of magnitude as these biases, the direct use of these biased datasets would lead to significant errors in the derived emissions. Therefore, in order to also use measurements with unknown offsets, a new bias correction scheme has been implemented within the TM5-4DVAR inverse modelling system, thus allowing the simultaneous assimilation of observations from different networks. The N2O bias corrections determined in the TM5-4DVAR system agree within 0.1 ppb (dry-air mole fraction) with the bias derived from the measurements at monitoring stations where parallel NOAA discrete air samples are available. The N2O emissions derived for the northwest European countries for 2006 show good agreement with the bottom-up emission inventories reported to the United Nations Framework Convention on Climate Change (UNFCCC). Moreover, the inverse model can significantly narrow the uncertainty range reported in N2O emission inventories, while the lack of measurements does not allow for better emission estimates in southern Europe. Several sensitivity experiments were performed to test the robustness of the results. It is shown that even inversions without detailed a priori spatio-temporal emission distributions are capable of reproducing major
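
    The bias-correction approach outlined above can be written schematically by adding one offset parameter per laboratory or network to the variational cost function; in our notation (not necessarily that of the TM5-4DVAR implementation) this reads

    $$ J(\mathbf{x},\mathbf{b}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\sum_k \big(\mathbf{y}_k - H_k(\mathbf{x}) - b_k\big)^{\mathrm{T}}\mathbf{R}_k^{-1}\big(\mathbf{y}_k - H_k(\mathbf{x}) - b_k\big) + \tfrac{1}{2}\sum_k \frac{b_k^2}{\sigma_{b,k}^2}, $$

    where x holds the emission parameters, y_k and H_k are the observations and model-sampling operator for network k, b_k is that network's offset, and the last term keeps the estimated biases within a prior uncertainty σ_b,k. Minimizing J then adjusts emissions and network offsets simultaneously.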

  15. InSAR Observations and Finite Element Modeling of Crustal Deformation Around a Surging Glacier, Iceland

    Science.gov (United States)

    Spaans, K.; Auriac, A.; Sigmundsson, F.; Hooper, A. J.; Bjornsson, H.; Pálsson, F.; Pinel, V.; Feigl, K. L.

    2014-12-01

    Icelandic ice caps, covering ~11% of the country, are known to host surging glaciers. Such a process implies an important local crustal subsidence, because a large ice mass is transported to the ice edge during the surge within only a few months. In 1993-1995, a glacial surge occurred at four neighboring outlet glaciers in the southwestern part of Vatnajökull ice cap, the largest ice cap in Iceland. We estimated that ~16±1 km3 of ice was moved during this event, while the fronts of some of the outlet glaciers advanced by ~1 km. Surface deformation associated with this surge has been surveyed using Interferometric Synthetic Aperture Radar (InSAR) acquisitions from 1992-2002, providing high-resolution ground observations of the study area. The data show about 75 mm of subsidence at the ice edge of the outlet glaciers following the transport of the large volume of ice during the surge (Fig. 1). The long time span covered by the InSAR images enabled us to remove ~12 mm/yr of uplift occurring in this area due to glacial isostatic adjustment from the retreat of Vatnajökull ice cap since the end of the Little Ice Age in Iceland. We then used finite element modeling to investigate the elastic Earth response to the surge, as well as to confirm that no significant viscoelastic deformation occurred as a consequence of the surge. A statistical approach based on Bayes' rule was used to compare the models to the observations and obtain an estimate of the Young's modulus (E) and Poisson's ratio (v) in Iceland. The best-fitting models are those using a one-kilometer-thick top layer with v=0.17 and E between 12.9 and 15.3 GPa, underlain by a layer with v=0.25 and E from 67.3 to 81.9 GPa. Results demonstrate that InSAR data and finite element models can be used successfully to reproduce crustal deformation induced by ice mass variations at Icelandic ice caps. Fig. 1: Interferograms spanning 1993 July 31 to 1995 June 19, showing the surge at Tungnaárjökull (Tu.), Skaftárjökull (Sk.) and S
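
    A minimal sketch of the kind of Bayes'-rule model comparison described above, assuming each forward finite-element run has been reduced to residuals between predicted and observed line-of-sight displacements (the function names, Gaussian error model and parameter grid are illustrative assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def log_likelihood(residuals_mm, sigma_mm=2.0):
        """Gaussian log-likelihood for a vector of model-minus-InSAR residuals [mm]."""
        return -0.5 * np.sum((residuals_mm / sigma_mm) ** 2)

    def residuals(E, v):
        """Placeholder for an FEM prediction minus observed displacements [mm]."""
        base = np.random.default_rng(0).normal(0.0, 2.0, size=100)
        return base * (1.0 + abs(E - 14.0) / 14.0 + abs(v - 0.17))

    # Hypothetical grid over Young's modulus E [GPa] and Poisson's ratio v of the top layer.
    E_grid = np.linspace(10.0, 20.0, 21)
    v_grid = np.linspace(0.10, 0.30, 21)

    logp = np.array([[log_likelihood(residuals(E, v)) for v in v_grid] for E in E_grid])
    posterior = np.exp(logp - logp.max())
    posterior /= posterior.sum()                       # normalized posterior over the (E, v) grid
    iE, iv = np.unravel_index(posterior.argmax(), posterior.shape)
    print(f"MAP estimate: E = {E_grid[iE]:.1f} GPa, v = {v_grid[iv]:.2f}")
    ```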

  16. A comprehensive study on rotation reversal in KSTAR: experimental observations and modelling

    Science.gov (United States)

    Na, D. H.; Na, Yong-Su; Angioni, C.; Yang, S. M.; Kwon, J. M.; Jhang, Hogun; Camenen, Y.; Lee, S. G.; Shi, Y. J.; Ko, W. H.; Lee, J. A.; Hahm, T. S.; KSTAR Team

    2017-12-01

    Dedicated experiments have been performed in KSTAR Ohmic plasmas to investigate the detailed physics of the rotation reversal phenomena. Here we adopt the more general definition of rotation reversal, a large change of the intrinsic toroidal rotation gradient produced by minor changes in the control parameters (Camenen et al 2017 Plasma Phys. Control. Fusion 59 034001), which is commonly observed in KSTAR regardless of the operating conditions. The two main phenomenological features of the rotation reversal are the change of the normalized toroidal rotation gradient u' in the gradient region and the existence of an anchor point. Both features were investigated for the KSTAR Ohmic plasma database, which includes experimental results up to the 2016 campaign. First, the observations show that the locations of the gradient region and of the anchor point depend on q95. Second, a strong dependence of u' on the effective collisionality νeff is clearly observed in the gradient region, whereas the dependence on R/L_Ti, R/L_Te and R/L_ne is unclear given the usual variation of the normalized gradient lengths in KSTAR. The experimental observations were compared against several theoretical models. The rotation reversal in KSTAR does not appear to be caused by the transition of the dominant turbulence from the trapped electron mode to the ion temperature gradient mode, nor by the neoclassical equilibrium effect. Instead, profile shearing effects associated with a finite ballooning tilting seem to reproduce the experimental observations of both the gradient region and the anchor point; the remaining difference seems to be related to the magnetic shear and the q value. Further analysis implies that the increase of u' in the gradient region with increasing collisionality would occur when the reduction of the momentum diffusivity is comparatively larger than the reduction of the residual stress. It is supported by the perturbative

  17. Reproducibility in Research: Systems, Infrastructure, Culture

    Directory of Open Access Journals (Sweden)

    Tom Crick

    2017-11-01

    Full Text Available The reproduction and replication of research results has become a major issue for a number of scientific disciplines. In computer science and related computational disciplines such as systems biology, the challenges closely revolve around the ability to implement (and exploit) novel algorithms and models. Taking a new approach from the literature and applying it to a new codebase frequently requires local knowledge missing from the published manuscripts and transient project websites. Alongside this issue, benchmarking and the lack of open, transparent and fair benchmark sets present another barrier to the verification and validation of claimed results. In this paper, we outline several recommendations to address these issues, driven by specific examples from a range of scientific domains. Based on these recommendations, we propose a high-level prototype open automated platform for scientific software development which effectively abstracts specific dependencies from the individual researcher and their workstation, allowing easy sharing and reproduction of results. This new e-infrastructure for reproducible computational science offers the potential to incentivise a culture change and drive the adoption of new techniques to improve the quality and efficiency – and thus reproducibility – of scientific exploration.

  18. A model based approach in observing the activity of neuronal populations for the prediction of epileptic seizures

    International Nuclear Information System (INIS)

    Chong, M.S.; Nesic, D.; Kuhlmann, L.; Postoyan, R.; Varsavsky, A.; Cook, M.

    2010-01-01

    Full text: Epilepsy is a common neurological disease that affects 0.5-1% of the world's population. In cases where known treatments cannot achieve complete recovery, seizure prediction is essential so that preventive measures can be undertaken against resultant injury. The electroencephalogram (EEG) is a widely used diagnostic tool for epilepsy. However, the EEG does not provide a detailed view of the underlying seizure-causing neuronal mechanisms. Knowing the dynamics of the neuronal population is useful because tracking the evolution of the neuronal mechanisms allows us to follow the brain's progression from the interictal to the ictal state. Wendling and colleagues proposed a parameterised mathematical model that represents the activity of interconnected neuronal populations. By modifying the parameters, this model is able to reproduce signals that are very similar to the real EEG, depicting commonly observed patterns during interictal and ictal periods. The transition from non-seizure to seizure activity, as seen in the EEG, is hypothesised to be due to the impairment of inhibition. Using Wendling's model, we designed a deterministic nonlinear estimator to recover the average membrane potential of the neuronal populations from a single-channel EEG signal, for any fixed and known parameter values. Our nonlinear estimator is analytically proven to asymptotically converge to the true state of the model and is illustrated in simulations. We were able to computationally observe the dynamics of the three neuronal populations described in the model: the excitatory, fast inhibitory and slow inhibitory populations. This forms a first step towards the prediction of epileptic seizures. (author)
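
    The estimator described above belongs to the general family of state observers; schematically (generic notation, not the specific design in the study), for a model with dynamics ẋ = f(x, u) and measured output y = h(x), an observer evolves an estimate x̂ according to

    $$ \dot{\hat{x}} = f(\hat{x}, u) + L\,\big(y - h(\hat{x})\big), $$

    where y is the recorded EEG channel, x̂ contains the estimated membrane potentials of the neuronal populations, and the injection gain L is designed so that the estimation error x − x̂ converges to zero.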

  19. Impacts of bromine and iodine chemistry on tropospheric OH and HO2: comparing observations with box and global model perspectives

    Science.gov (United States)

    Stone, Daniel; Sherwen, Tomás; Evans, Mathew J.; Vaughan, Stewart; Ingham, Trevor; Whalley, Lisa K.; Edwards, Peter M.; Read, Katie A.; Lee, James D.; Moller, Sarah J.; Carpenter, Lucy J.; Lewis, Alastair C.; Heard, Dwayne E.

    2018-03-01

    The chemistry of the halogen species bromine and iodine has a range of impacts on tropospheric composition, and can affect oxidising capacity in a number of ways. However, recent studies disagree on the overall sign of the impacts of halogens on the oxidising capacity of the troposphere. We present simulations of OH and HO2 radicals for comparison with observations made in the remote tropical ocean boundary layer during the Seasonal Oxidant Study at the Cape Verde Atmospheric Observatory in 2009. We use both a constrained box model, using detailed chemistry derived from the Master Chemical Mechanism (v3.2), and the three-dimensional global chemistry transport model GEOS-Chem. Both model approaches reproduce the diurnal trends in OH and HO2. Absolute observed concentrations are well reproduced by the box model but are overpredicted by the global model, potentially owing to incomplete consideration of oceanic sourced radical sinks. The two models, however, differ in the impacts of halogen chemistry. In the box model, halogen chemistry acts to increase OH concentrations (by 9.8 % at midday at the Cape Verde Atmospheric Observatory), while the global model exhibits a small increase in OH at the Cape Verde Atmospheric Observatory (by 0.6 % at midday) but overall shows a decrease in the global annual mass-weighted mean OH of 4.5 %. These differences reflect the variety of timescales through which the halogens impact the chemical system. On short timescales, photolysis of HOBr and HOI, produced by reactions of HO2 with BrO and IO, respectively, increases the OH concentration. On longer timescales, halogen-catalysed ozone destruction cycles lead to lower primary production of OH radicals through ozone photolysis, and thus to lower OH concentrations. The global model includes more of the longer timescale responses than the constrained box model, and overall the global impact of the longer timescale response (reduced primary production due to lower O3 concentrations

  20. Comparison of model-observer and human-observer performance for breast tomosynthesis: effect of reconstruction and acquisition parameters

    Science.gov (United States)

    Das, Mini; Gifford, Howard C.

    2011-03-01

    The problem of optimizing the acquisition and reconstruction parameters for breast-cancer detection with digital breast tomosynthesis (DBT) is becoming increasingly important due to the potential of DBT for clinical screening. Ideally, one wants a set of parameters suitable for both microcalcification (MC) and mass detection that specifies the lowest possible radiation dose to the patient. Attacking this multiparametric optimization problem using human-observer studies (which are the gold standard) would be very expensive. On the other hand, there are numerous limitations to having existing mathematical model observers as replacements. Our aim is to develop a model observer that can reliably mimic human observers at clinically realistic DBT detection tasks. In this paper, we present a novel visual-search (VS) model observer for MC detection and localization. Validation of this observer against human data was carried out in a study with simulated DBT test images. Radiation dose was a study parameter, with tested acquisition levels of 0.7, 1.0 and 1.5 mGy. All test images were reconstructed with a penalized-maximum-likelihood reconstruction method. Good agreement at all three dose levels was obtained between the VS and human observers. We believe that this new model observer has the potential to take the field of image-quality research in a new direction with a number of practical clinical ramifications.

  1. Evaluation of a terrestrial carbon cycle submodel in an earth system model using networks of eddy covariance observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.

    2010-12-01

    Improvement of terrestrial submodels in earth system models (ESMs) is important to reduce uncertainties in future projections of the global carbon cycle and climate. Since these submodels lack detailed validation, evaluation of terrestrial submodels using networks of field observations is necessary. The purpose of this study is to improve an ESM by refining a terrestrial submodel using eddy covariance observations. To evaluate and improve the terrestrial submodel included in the UVic-ESCM, we conducted two experiments: an off-line experiment and an ESM experiment. In the off-line experiment, we used the terrestrial submodel forced by observed climate inputs (off-line model run); we evaluated and refined the model at the point scale using eddy covariance observations, first testing the default terrestrial submodel against the observations and then refining it. In the ESM experiment (carbon cycle-climate coupled model run), we used the default and refined terrestrial submodels as the terrestrial component of the ESM and tested the effects of the submodel improvements on the ESM simulations at both site and global scales. First, we evaluated the terrestrial submodel in off-line mode (the terrestrial submodel was extracted from the ESM) at point scale using 48 eddy covariance observation data sets, and improved it by adjusting model parameters and structures. The modifications were made to reproduce the seasonal cycle of the terrestrial carbon fluxes by removing several biases in the terrestrial submodel, adjusting parameters and process representations such as the snow melting process, rooting depth, and photosynthesis efficiency. The terrestrial submodel was improved, with a reduced root mean square error and a closer match to the seasonal carbon fluxes. Second, using the UVic-ESCM with the improved terrestrial submodel, we confirmed model improvement at most
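
    A minimal sketch of the point-scale skill metric mentioned above, comparing simulated and eddy-covariance carbon fluxes (the array names and values are illustrative, not data from the study):

    ```python
    import numpy as np

    def rmse(simulated, observed):
        """Root mean square error between simulated and observed flux series."""
        simulated, observed = np.asarray(simulated), np.asarray(observed)
        return float(np.sqrt(np.mean((simulated - observed) ** 2)))

    # Hypothetical monthly net ecosystem exchange [gC m-2 d-1] at one flux-tower site.
    obs = [1.2, 1.0, 0.4, -0.8, -2.5, -3.1, -2.9, -2.0, -0.6, 0.5, 1.1, 1.3]
    default_model = [1.5, 1.4, 0.9, -0.2, -1.6, -2.2, -2.1, -1.4, -0.1, 0.9, 1.4, 1.6]
    refined_model = [1.3, 1.1, 0.5, -0.7, -2.3, -3.0, -2.8, -1.9, -0.5, 0.6, 1.2, 1.4]

    print("default RMSE:", rmse(default_model, obs))
    print("refined RMSE:", rmse(refined_model, obs))
    ```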

  2. A novel opinion dynamics model based on expanded observation ranges and individuals’ social influences in social networks

    Science.gov (United States)

    Diao, Su-Meng; Liu, Yun; Zeng, Qing-An; Luo, Gui-Xun; Xiong, Fei

    2014-12-01

    In this paper, we propose an opinion dynamics model in order to investigate opinion evolution, interactions, and the behavior of individuals. By introducing social influence and its feedback mechanism, the proposed model can highlight the heterogeneity of individuals and reproduce realistic online opinion interactions. It can also expand the observation range of affected individuals. Drawing on psychological studies of the social impact of majorities and minorities, affected individuals update their opinions by balancing the social impact from supporters and opponents. Complete consensus is not always obtained: when the initial density of either side is greater than 0.8, the enormous imbalance leads to complete consensus; otherwise, opinion clusters appear, consisting of sets of tightly connected individuals who hold similar opinions. Moreover, a tradeoff is discovered between high interaction intensity and low stability with regard to observation ranges. The intensity of each interaction is negatively correlated with observation range, while the stability of each individual's opinion positively affects the correlation. Furthermore, the distribution of individuals' social influences in the proposed model exhibits power-law properties, in agreement with everyday experience. Additionally, it is shown that the initial distribution of individuals' social influences has little effect on the evolution.
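
    The update rule is only summarized above, so the sketch below is a deliberately simplified stand-in rather than the authors' equations: each individual observes supporters and opponents within its observation range and shifts its opinion according to the influence-weighted balance of the two camps (all parameter names and the functional form are assumptions).

    ```python
    import random

    def update_opinion(own, neighbours, influences, mu=0.3):
        """Shift a continuous opinion in [-1, 1] toward the influence-weighted
        balance of supporters (same sign) and opponents (opposite sign).

        own        : current opinion of the focal individual
        neighbours : opinions of individuals within the observation range
        influences : social influence weight of each neighbour
        mu         : convergence rate (assumed)
        """
        support = sum(w for o, w in zip(neighbours, influences) if o * own >= 0)
        oppose = sum(w for o, w in zip(neighbours, influences) if o * own < 0)
        total = support + oppose
        if total == 0:
            return own
        impact = (support - oppose) / total          # net social impact in [-1, 1]
        new = own + mu * impact * (1 if own >= 0 else -1)
        return max(-1.0, min(1.0, new))

    random.seed(1)
    opinions = [random.uniform(-1, 1) for _ in range(10)]
    weights = [random.random() for _ in range(10)]
    print(update_opinion(0.2, opinions, weights))
    ```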

  3. Tidal Movement of Nioghalvfjerdsfjorden Glacier, Northeast Greenland: Observations and Modelling

    DEFF Research Database (Denmark)

    Reeh, Niels; Mayer, C.; Olesen, O. B.

    2000-01-01

    , 1997 and 1998. As part of this work, tidal-movement observations were carried out by simultaneous differential global positioning system (GPS) measurements at several locations distributed on the glacier surface. The GPS observations were performed continuously over several tidal cycles. At the same...

  4. Modelling dust polarization observations of molecular clouds through MHD simulations

    Science.gov (United States)

    King, Patrick K.; Fissel, Laura M.; Chen, Che-Yu; Li, Zhi-Yun

    2018-03-01

    The BLASTPol observations of Vela C have provided the most detailed characterization of the polarization fraction p and dispersion in polarization angles S for a molecular cloud. We compare the observed distributions of p and S with those obtained in synthetic observations of simulations of molecular clouds, assuming homogeneous grain alignment. We find that the orientation of the mean magnetic field relative to the observer has a significant effect on the p and S distributions. These distributions for Vela C are most consistent with synthetic observations where the mean magnetic field is close to the line of sight. Our results point to apparent magnetic disorder in the Vela C molecular cloud, although it can be due to either an inclination effect (i.e. observing close to the mean field direction) or significant field tangling from strong turbulence/low magnetization. The joint correlations of p with column density and of S with column density for the synthetic observations generally agree poorly with the Vela C joint correlations, suggesting that understanding these correlations requires a more sophisticated treatment of grain alignment physics.
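
    For reference, the two quantities compared above are usually built from the Stokes parameters I, Q and U; with the standard conventions (the exact estimator and lag δ used for Vela C may differ),

    $$ p = \frac{\sqrt{Q^2 + U^2}}{I}, \qquad \psi = \tfrac{1}{2}\arctan\frac{U}{Q}, \qquad S^2(\mathbf{x},\delta) = \frac{1}{N}\sum_{i=1}^{N}\big[\psi(\mathbf{x}) - \psi(\mathbf{x}+\boldsymbol{\delta}_i)\big]^2, $$

    so that S measures the local dispersion of polarization angles ψ around each sky position x.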

  5. Reproducing an extreme flood with uncertain post-event information

    Directory of Open Access Journals (Sweden)

    D. Fuentes-Andino

    2017-07-01

    Full Text Available Studies for the prevention and mitigation of floods require information on discharge and extent of inundation, commonly unavailable or uncertain, especially during extreme events. This study was initiated by the devastating flood in Tegucigalpa, the capital of Honduras, when Hurricane Mitch struck the city. In this study we hypothesized that it is possible to estimate, in a trustworthy way considering large data uncertainties, this extreme 1998 flood discharge and the extent of the inundations that followed from a combination of models and post-event measured data. Post-event data collected in 2000 and 2001 were used to estimate discharge peaks, times of peak, and high-water marks. These data were used in combination with rain data from two gauges to drive and constrain a combination of well-known modelling tools: TOPMODEL, Muskingum–Cunge–Todini routing, and the LISFLOOD-FP hydraulic model. Simulations were performed within the generalized likelihood uncertainty estimation (GLUE) uncertainty-analysis framework. The model combination predicted peak discharge, times of peaks, and more than 90 % of the observed high-water marks within the uncertainty bounds of the evaluation data. This allowed an inundation likelihood map to be produced. Observed high-water marks could not be reproduced at a few locations on the floodplain. Identification of these locations is useful to improve model set-up, model structure, or post-event data-estimation methods. Rainfall data were of central importance in simulating the times of peak, and results would be improved by a better spatial assessment of rainfall, e.g. from radar data or a denser rain-gauge network. Our study demonstrated that it was possible, considering the uncertainty in the post-event data, to reasonably reproduce the extreme Mitch flood in Tegucigalpa in spite of no hydrometric gauging during the event. The method proposed here can be part of a Bayesian framework in which more events
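
    A minimal sketch of the GLUE weighting step referred to above, assuming an ensemble of Monte Carlo model runs has already been scored against the post-event data (the efficiency measure, threshold and arrays are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical efficiency scores (e.g. Nash-Sutcliffe) for 1000 Monte Carlo runs,
    # and the peak discharge each run simulated [m3/s].
    scores = rng.uniform(-1.0, 1.0, size=1000)
    peak_q = rng.normal(6000.0, 1500.0, size=1000)

    threshold = 0.3                      # runs below this are rejected as non-behavioural
    behavioural = scores > threshold
    weights = scores[behavioural] - threshold
    weights /= weights.sum()             # likelihood weights for the behavioural runs

    # Weighted 5-95% uncertainty bounds on the simulated peak discharge.
    order = np.argsort(peak_q[behavioural])
    cdf = np.cumsum(weights[order])
    q_sorted = peak_q[behavioural][order]
    lower = q_sorted[np.searchsorted(cdf, 0.05)]
    upper = q_sorted[np.searchsorted(cdf, 0.95)]
    print(f"behavioural runs: {behavioural.sum()}, peak Q 5-95% bounds: {lower:.0f}-{upper:.0f} m3/s")
    ```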

  6. Ecosystem function in complex mountain terrain: Combining models and long-term observations to advance process-based understanding

    Science.gov (United States)

    Wieder, William R.; Knowles, John F.; Blanken, Peter D.; Swenson, Sean C.; Suding, Katharine N.

    2017-04-01

    Abiotic factors structure plant community composition and ecosystem function across many different spatial scales. Often, such variation is considered at regional or global scales, but here we ask whether ecosystem-scale simulations can be used to better understand landscape-level variation that might be particularly important in complex terrain, such as high-elevation mountains. We performed ecosystem-scale simulations by using the Community Land Model (CLM) version 4.5 to better understand how the increased length of growing seasons may impact carbon, water, and energy fluxes in an alpine tundra landscape. The model was forced with meteorological data and validated with observations from the Niwot Ridge Long Term Ecological Research Program site. Our results demonstrate that CLM is capable of reproducing the observed carbon, water, and energy fluxes for discrete vegetation patches across this heterogeneous ecosystem. We subsequently accelerated snowmelt and increased spring and summer air temperatures in order to simulate potential effects of climate change in this region. We found that vegetation communities that were characterized by different snow accumulation dynamics showed divergent biogeochemical responses to a longer growing season. Contrary to expectations, wet meadow ecosystems showed the strongest decreases in plant productivity under extended summer scenarios because of disruptions in hydrologic connectivity. These findings illustrate how Earth system models such as CLM can be used to generate testable hypotheses about the shifting nature of energy, water, and nutrient limitations across space and through time in heterogeneous landscapes; these hypotheses may ultimately guide further experimental work and model development.

  7. Modelling spatial and temporal dynamics of gross primary production in the Sahel from earth-observation-based photosynthetic capacity and quantum efficiency

    DEFF Research Database (Denmark)

    Tagesson, Håkan Torbern; Ardoe, Jonas; Cappelaere, Bernard

    2017-01-01

    based on earth observation (EO) (normalized difference vegetation index (NDVI), renormalized difference vegetation index (RDVI), enhanced vegetation index (EVI) and shortwave infrared water stress index (SIWSI)); and (3) to study the applicability of EO upscaled Fopt and α for GPP modelling purposes...... related to RDVI being affected by chlorophyll abundance. Spatial and inter-annual dynamics in Fopt and α were closely coupled to NDVI and RDVI, respectively. Modelled GPP based on Fopt and α upscaled using EO-based indices reproduced in situ GPP well for all except a cropped site that was strongly...
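
    Fopt and α enter models of this type as the light-saturated asymptote and the initial slope of a light-response curve; one common form (quoted here only to illustrate the roles of the two parameters, not necessarily the exact formulation used in the study) is the rectangular hyperbola

    $$ \mathrm{GPP} = \frac{\alpha\,\mathrm{PAR}\;F_{\mathrm{opt}}}{\alpha\,\mathrm{PAR} + F_{\mathrm{opt}}}, $$

    where PAR is the incident photosynthetically active radiation, α the quantum efficiency at low light, and F_opt the photosynthetic capacity approached at light saturation.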

  8. Bayesian estimation of analytical conduit-flow model parameters from magma discharge rate observed during explosive eruptions

    Science.gov (United States)

    Koyaguchi, T.; Anderson, K. R.; Kozono, T.

    2017-12-01

    Recent development of conduit-flow models has revealed that the evolution of a volcanic eruption (e.g., changes in magma discharge rate and chamber pressure) is sensitively dependent on model parameters related to geological and petrological conditions (such as properties of the magma and the volume and depth of the magma chamber). On the other hand, time-varying observations of ground deformation and magma discharge rate are now increasingly available, which allows us to estimate the model parameters through a Bayesian inverse analysis of those observations (Anderson and Segall, 2013); however, this approach has not yet been applied to explosive eruptions because of mathematical and computational difficulties in the conduit-flow models. Here, we perform a Bayesian inversion to estimate the conduit-flow model parameters of explosive eruptions utilizing an approximate time-dependent eruption model. This model is based on the analytical solutions of a steady conduit-flow model (Koyaguchi, 2005; Kozono and Koyaguchi, 2009) coupled to a simple elastic magma chamber. It reproduces diverse features of the evolution of magma discharge rate and chamber pressure during explosive eruptions, and also allows us to analytically derive the mathematical relationships describing those evolutions. The derived relationships show that the mass flow rate just before the cessation of explosive eruptions is expressed by a simple function of dimensionless magma viscosity and dimensionless gas permeability in magma. We are also able to derive a relationship between dimensionless viscosity and mass flow rate, which may be available from field data. Consequently, the posterior probability density functions of the conduit-flow model parameters of explosive eruptions (e.g., the radius of the conduit) are constrained by the intersection of these two relationships. The validity of the present analytical method was tested by a numerical method using a Markov chain Monte Carlo (MCMC) algorithm. We also
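
    A minimal sketch of the Markov chain Monte Carlo step mentioned above, using a generic random-walk Metropolis sampler over one conduit parameter (the pseudo-likelihood, prior bounds and parameter choice are placeholders, not the study's model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(radius_m):
        """Placeholder log-posterior for a conduit radius [m]: flat prior on (1, 100)
        times a Gaussian pseudo-likelihood centred on 30 m."""
        if not 1.0 < radius_m < 100.0:
            return -np.inf
        return -0.5 * ((radius_m - 30.0) / 5.0) ** 2

    samples, current = [], 20.0
    for _ in range(20000):
        proposal = current + rng.normal(0.0, 2.0)      # random-walk step
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
            current = proposal                         # accept with Metropolis probability
        samples.append(current)

    post = np.array(samples[5000:])                    # discard burn-in
    print(f"posterior mean radius: {post.mean():.1f} m, 95% CI: "
          f"[{np.percentile(post, 2.5):.1f}, {np.percentile(post, 97.5):.1f}] m")
    ```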

  9. Observations and modeling of deterministic properties of human ...

    Indian Academy of Sciences (India)

    We show that the properties of both models are different from those obtained for Type-I intermittency in the presence of additive noise. The two models help to explain some of the features seen in the intermittency in human heart rate variability. Keywords. Heart rate variability; intermittency; non-stationary dynamical systems.

  10. Observation-based correction of dynamical models using thermostats

    NARCIS (Netherlands)

    Myerscough, Keith W.; Frank, Jason; Leimkuhler, Benedict

    2017-01-01

    Models used in simulation may give accurate short-term trajectories but distort long-term (statistical) properties. In this work, we augment a given approximate model with a control law (a 'thermostat') that gently perturbs the dynamical system to target a thermodynamic state consistent with a set of

  11. Observations and modeling of deterministic properties of human ...

    Indian Academy of Sciences (India)

    of two classes of models of Type-I intermittency: (a) the control parameter of the logistic map is changed dichotomously from a value within the intermittency range to just below the bifurcation point and back; (b) the control parameter is changed randomly within the same parameter range as in the model class (a). We show ...

  12. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  13. Global-mean BC lifetime as an indicator of model skill? Constraining the vertical aerosol distribution using aircraft observations

    Science.gov (United States)

    Lund, M. T.; Samset, B. H.; Skeie, R. B.; Berntsen, T.

    2017-12-01

    Several recent studies have used observations from the HIPPO flight campaigns to constrain the modeled vertical distribution of black carbon (BC) over the Pacific. Results indicate a relatively linear relationship between global-mean atmospheric BC residence time, or lifetime, and bias in current models. A lifetime of less than 5 days is necessary for models to reasonably reproduce these observations. This is shorter than what many global models predict, which will in turn affect their estimates of BC climate impacts. Here we use the chemistry-transport model OsloCTM to examine whether this relationship between global BC lifetime and model skill also holds for a broader set of flight campaigns from 2009-2013 covering both remote marine and continental regions at a range of latitudes. We perform four sets of simulations with varying scavenging efficiency to obtain a spread in the modeled global BC lifetime and calculate the model error and bias for each campaign and region. Vertical BC profiles are constructed using an online flight simulator, as well as by averaging and interpolating monthly mean model output, allowing us to quantify sampling errors arising when measurements are compared with model output at different spatial and temporal resolutions. Using the OsloCTM coupled with a microphysical aerosol parameterization, we investigate the sensitivity of the modeled BC vertical distribution to uncertainties in the aerosol aging and scavenging processes in more detail. From this, we can quantify how model uncertainties in the BC life cycle propagate into uncertainties in its climate impacts. For most campaigns and regions, a short global-mean BC lifetime corresponds with the lowest model error and bias. On an aggregated level, sampling errors appear to be small, but larger differences are seen in individual regions. However, we also find that model-measurement discrepancies in BC vertical profiles cannot be uniquely attributed to uncertainties in a single process or
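
    The global-mean lifetime used as the skill indicator above is, to first order, the ratio of atmospheric burden to total removal rate; a trivial illustrative calculation (the numbers are made up, not model output):

    ```python
    # Global-mean BC lifetime as burden / removal, assuming approximate steady state.
    burden_tg = 0.12             # hypothetical global atmospheric BC burden [Tg]
    removal_tg_per_day = 0.028   # hypothetical total wet + dry deposition [Tg/day]

    lifetime_days = burden_tg / removal_tg_per_day
    print(f"global-mean BC lifetime = {lifetime_days:.1f} days")  # about 4.3 days, below the ~5-day threshold
    ```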

  14. On the dependence of the OH* Meinel emission altitude on vibrational level: SCIAMACHY observations and model simulations

    Directory of Open Access Journals (Sweden)

    J. P. Burrows

    2012-09-01

    Full Text Available Measurements of the OH Meinel emissions in the terrestrial nightglow are one of the standard ground-based techniques to retrieve upper mesospheric temperatures. It is often assumed that the emission peak altitudes are not strongly dependent on the vibrational level, although this assumption is not based on convincing experimental evidence. In this study we use Envisat/SCIAMACHY (Scanning Imaging Absorption spectroMeter for Atmospheric CHartographY) observations in the near-IR spectral range to retrieve vertical volume emission rate profiles of the OH(3-1), OH(6-2) and OH(8-3) Meinel bands in order to investigate whether systematic differences in emission peak altitudes can be observed between the different OH Meinel bands. The results indicate that the emission peak altitudes are different for the different vibrational levels, with bands originating from higher vibrational levels having higher emission peak altitudes. It is shown that this finding is consistent with the majority of the previously published results. The SCIAMACHY observations yield differences in emission peak altitudes of up to about 4 km between the OH(3-1) and the OH(8-3) band. The observations are complemented by model simulations of the fractional population of the different vibrational levels and of the vibrational level dependence of the emission peak altitude. The model simulations reproduce the observed vibrational level dependence of the emission peak altitude well – both qualitatively and quantitatively – if quenching by atomic oxygen as well as multi-quantum collisional relaxation by O2 is considered. If a linear relationship between emission peak altitude and vibrational level is assumed, then a peak altitude difference of roughly 0.5 km per vibrational level is inferred from both the SCIAMACHY observations and the model simulations.

  15. Understanding Transient Forcing with Plasma Instability Model, Ionospheric Propagation Model and GNSS Observations

    Science.gov (United States)

    Deshpande, K.; Zettergren, M. D.; Datta-Barua, S.

    2017-12-01

    Fluctuations in the Global Navigation Satellite Systems (GNSS) signals observed as amplitude and phase scintillations are produced by plasma density structures in the ionosphere. Phase scintillation events in particular occur due to structures at Fresnel scales, typically about 250 meters at ionospheric heights and GNSS frequency. Likely processes contributing to small-scale density structuring in auroral and polar regions include ionospheric gradient-drift instability (GDI) and Kelvin-Helmholtz instability (KHI), which result, generally, from magnetosphere-ionosphere interactions (e.g. reconnection) associated with cusp and auroral zone regions. Scintillation signals, ostensibly from either GDI or KHI, are frequently observed in the high latitude ionosphere and are potentially useful diagnostics of how energy from the transient forcing in the cusp or polar cap region cascades, via instabilities, to small scales. However, extracting quantitative details of instabilities leading to scintillation using GNSS data drastically benefits from both a model of the irregularities and a model of GNSS signal propagation through irregular media. This work uses a physics-based model of the generation of plasma density irregularities (GEMINI - Geospace Environment Model of Ion-Neutral Interactions) coupled to an ionospheric radio wave propagation model (SIGMA - Satellite-beacon Ionospheric-scintillation Global Model of the upper Atmosphere) to explore the cascade of density structures from medium to small (sub-kilometer) scales. Specifically, GEMINI-SIGMA is used to simulate expected scintillation from different instabilities during various stages of evolution to determine features of the scintillation that may be useful to studying ionospheric density structures. Furthermore we relate the instabilities producing GNSS scintillations to the transient space and time-dependent magnetospheric phenomena and further predict characteristics of scintillation in different geophysical

  16. Observations and models of the decimetric radio emission from Jupiter

    International Nuclear Information System (INIS)

    Pater, I. de.

    1980-01-01

    The high energy electron distribution as a function of energy, pitch angle and spatial coordinates in Jupiter's inner magnetosphere was derived from a comparison of radio data and model calculations of Jupiter's synchrotron radiation. (Auth.)

  17. NACP Regional: Gridded 1-deg Observation Data and Biosphere and Inverse Model Outputs

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set contains standardized gridded observation data, terrestrial biosphere model output data, and inverse model simulations of carbon flux...

  18. NACP Regional: Gridded 1-deg Observation Data and Biosphere and Inverse Model Outputs

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains standardized gridded observation data, terrestrial biosphere model output data, and inverse model simulations of carbon flux parameters that...

  19. NACP Regional: Original Observation Data and Biosphere and Inverse Model Outputs

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains the originally-submitted observation measurement data, terrestrial biosphere model output data, and inverse model simulations that various...

  20. NACP Regional: Original Observation Data and Biosphere and Inverse Model Outputs

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set contains the originally-submitted observation measurement data, terrestrial biosphere model output data, and inverse model simulations that...

  1. Modeling and observational constraints on the sulfur cycle in the marine troposphere: a focus on reactive halogens and multiphase chemistry

    Science.gov (United States)

    Chen, Q.; Breider, T.; Schmidt, J.; Sherwen, T.; Evans, M. J.; Xie, Z.; Quinn, P.; Bates, T. S.; Alexander, B.

    2017-12-01

    The radiative forcing from marine boundary layer clouds is still highly uncertain, which partly stems from our poor understanding of cloud condensation nuclei (CCN) formation. The oxidation of dimethyl sulfide (DMS) and subsequent chemical evolution of its products (e.g. DMSO) are key processes in CCN formation, but are generally very simplified in large-scale models. Recent research has pointed out the importance of reactive halogens (e.g. BrO and Cl) and multiphase chemistry in the tropospheric sulfur cycle. In this study, we implement a series of sulfur oxidation mechanisms into the GEOS-Chem global chemical transport model, involving both gas-phase and multiphase oxidation of DMS, DMSO, MSIA and MSA, to improve our understanding of the sulfur cycle in the marine troposphere. DMS observations from six locations around the globe and MSA/nssSO42- ratio observations from two ship cruises covering a wide range of latitudes and longitudes are used to assess the model. Preliminary results reveal the important role of BrO for DMS oxidation at high latitudes (up to 50% over Southern Ocean). Oxidation of DMS by Cl radicals is small in the model (within 10% in the marine troposphere), probably due to an underrepresentation of Cl sources. Multiphase chemistry (e.g. oxidation by OH and O3 in cloud droplets) is not important for DMS oxidation but is critical for DMSO oxidation and MSA production and removal. In our model, about half of the DMSO is oxidized in clouds, leading to the formation of MSIA, which is further oxidized to form MSA. Overall, with the addition of reactive halogens and multiphase chemistry, the model is able to better reproduce observations of seasonal variations of DMS and MSA/nssSO42- ratios.

  2. Asteroseismic observations and modelling of 70 Ophiuchi AB

    Energy Technology Data Exchange (ETDEWEB)

    Eggenberger, P; Miglio, A [Institut d'Astrophysique et de Géophysique de l'Université de Liège, 17 Allée du 6 Août, B-4000 Liège (Belgium); Carrier, F [Institute of Astronomy, University of Leuven, Celestijnenlaan 200 D, B-3001 Leuven (Belgium); Fernandes, J [Observatório Astronómico da Universidade de Coimbra e Departamento de Matemática, FCTUC (Portugal); Santos, N C [Centro de Astrofísica, Universidade do Porto, Rua das Estrelas, P-4150-762 Porto (Portugal)], E-mail: eggenberger@astro.ulg.ac.be

    2008-10-15

    The analysis of solar-like oscillations for stars belonging to a binary system provides an opportunity to probe the internal stellar structure and to test our knowledge of stellar physics. We present asteroseismic observations of 70 Oph A performed with the HARPS spectrograph together with a comprehensive theoretical calibration of the 70 Ophiuchi system.

  3. Intelligent Cognitive Radio Models for Enhancing Future Radio Astronomy Observations

    Directory of Open Access Journals (Sweden)

    Ayodele Abiola Periola

    2016-01-01

    Full Text Available Radio astronomy organisations desire to optimise the terrestrial radio astronomy observations by mitigating against interference and enhancing angular resolution. Ground telescopes (GTs experience interference from intersatellite links (ISLs. Astronomy source radio signals received by GTs are analysed at the high performance computing (HPC infrastructure. Furthermore, observation limitation conditions prevent GTs from conducting radio astronomy observations all the time, thereby causing low HPC utilisation. This paper proposes mechanisms that protect GTs from ISL interference without permanent prevention of ISL data transmission and enhance angular resolution. The ISL transmits data by taking advantage of similarities in the sequence of observed astronomy sources to increase ISL connection duration. In addition, the paper proposes a mechanism that enhances angular resolution by using reconfigurable earth stations. Furthermore, the paper presents the opportunistic computing scheme (OCS to enhance HPC utilisation. OCS enables the underutilised HPC to be used to train learning algorithms of a cognitive base station. The performances of the three mechanisms are evaluated. Simulations show that the proposed mechanisms protect GTs from ISL interference, enhance angular resolution, and improve HPC utilisation.

  4. Citizen observations contributing to flood modelling: opportunities and challenges

    Science.gov (United States)

    Assumpção, Thaine H.; Popescu, Ioana; Jonoski, Andreja; Solomatine, Dimitri P.

    2018-02-01

    Citizen contributions to science have been successfully implemented in many fields, and water resources is one of them. Through citizens, it is possible to collect data and obtain a more integrated decision-making process. Specifically, data scarcity has always been an issue in flood modelling, which has been addressed in the last decades by remote sensing and is already being discussed in the citizen science context. With this in mind, this article aims to review the literature on the topic and analyse the opportunities and challenges that lie ahead. The literature on monitoring, mapping and modelling was evaluated according to the flood-related variable citizens contributed to. Pros and cons of the collection/analysis methods were summarised. Then, pertinent publications were mapped into the flood modelling cycle, considering how citizen data properties (spatial and temporal coverage, uncertainty and volume) are related to their integration into modelling. It was clear that the number of studies in the area is rising. There are positive experiences reported in collection and analysis methods, for instance with velocity and land cover, and also when modelling is concerned, for example by using social media mining. However, matching the data properties necessary for each part of the modelling cycle with citizen-generated data is still challenging. Nevertheless, the concept that citizen contributions can be used for simulation and forecasting is proven, and further work lies in continuing to develop and improve not only methods for collection and analysis, but certainly for integration into models as well. Finally, in view of recent automated sensors and satellite technologies, it is through studies such as the ones analysed in this article that the value of citizen contributions, complementing such technologies, is demonstrated.

  5. Citizen observations contributing to flood modelling: opportunities and challenges

    Directory of Open Access Journals (Sweden)

    T. H. Assumpção

    2018-02-01

    Full Text Available Citizen contributions to science have been successfully implemented in many fields, and water resources is one of them. Through citizens, it is possible to collect data and obtain a more integrated decision-making process. Specifically, data scarcity has always been an issue in flood modelling, which has been addressed in the last decades by remote sensing and is already being discussed in the citizen science context. With this in mind, this article aims to review the literature on the topic and analyse the opportunities and challenges that lie ahead. The literature on monitoring, mapping and modelling was evaluated according to the flood-related variable citizens contributed to. Pros and cons of the collection/analysis methods were summarised. Then, pertinent publications were mapped into the flood modelling cycle, considering how citizen data properties (spatial and temporal coverage, uncertainty and volume) are related to their integration into modelling. It was clear that the number of studies in the area is rising. There are positive experiences reported in collection and analysis methods, for instance with velocity and land cover, and also when modelling is concerned, for example by using social media mining. However, matching the data properties necessary for each part of the modelling cycle with citizen-generated data is still challenging. Nevertheless, the concept that citizen contributions can be used for simulation and forecasting is proven, and further work lies in continuing to develop and improve not only methods for collection and analysis, but certainly for integration into models as well. Finally, in view of recent automated sensors and satellite technologies, it is through studies such as the ones analysed in this article that the value of citizen contributions, complementing such technologies, is demonstrated.

  6. Occurrence of blowing snow events at an alpine site over a 10-year period: Observations and modelling

    Science.gov (United States)

    Vionnet, V.; Guyomarc'h, G.; Naaim Bouvet, F.; Martin, E.; Durand, Y.; Bellot, H.; Bel, C.; Puglièse, P.

    2013-05-01

    Blowing snow events control the evolution of the snow pack in mountainous areas and cause inhomogeneous snow distribution. The goal of this study is to identify the main features of blowing snow events at an alpine site and assess the ability of the detailed snowpack model Crocus to reproduce the occurrence of these events in a 1D configuration. We created a database of blowing snow events observed over 10 years at our experimental site. Occurrences of blowing snow events were divided into cases with and without concurrent falling snow. Overall, snow transport is observed during 10.5% of the time in winter and occurs with concurrent falling snow 37.3% of the time. Wind speed and snow age control the frequency of occurrence. Model results illustrate the necessity of taking the wind-dependence of falling snow grain characteristics into account to simulate periods of snow transport and mass fluxes satisfactorily during those periods. The high rate of false alarms produced by the model is investigated in detail for winter 2010/2011 using measurements from snow particle counters.

  7. The cosmological Janus model: comparison with observational data

    Science.gov (United States)

    Petit, Jean-Pierre; Dagostini, Gilles

    2017-01-01

    In 2014 we presented a model based on a system of two coupled field equations describing two populations of particles, one with positive mass and the other with negative mass. The analysis of this system in the Newtonian approximation shows that masses of the same sign attract according to Newton's law, while masses of opposite signs repel according to an anti-Newton law. This eliminates the runaway phenomenon. The time-dependent exact solution of this system is used to build the bolometric magnitude versus redshift relation. Comparing the prediction of our model (which requires fitting only a single parameter) with the data from 740 supernovae highlighting the acceleration of the universe gives an excellent agreement. The comparison is then made with the multi-parametric ΛCDM model.

  8. Observing and modeling nonlinear dynamics in an internal combustion engine

    International Nuclear Information System (INIS)

    Daw, C.S.; Kennel, M.B.; Finney, C.E.; Connolly, F.T.

    1998-01-01

    We propose a low-dimensional, physically motivated, nonlinear map as a model for cyclic combustion variation in spark-ignited internal combustion engines. A key feature is the interaction between stochastic, small-scale fluctuations in engine parameters and nonlinear deterministic coupling between successive engine cycles. Residual cylinder gas from each cycle alters the in-cylinder fuel-air ratio and thus the combustion efficiency in succeeding cycles. The model's simplicity allows rapid simulation of thousands of engine cycles, permitting statistical studies of cyclic-variation patterns and providing physical insight into this technologically important phenomenon. Using symbol statistics to characterize the noisy dynamics, we find good quantitative matches between our model and experimental time-series measurements. copyright 1998 The American Physical Society
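
    A deliberately simplified sketch in the spirit of the residual-gas coupling described above (the functional forms, constants and noise level are our own illustrative choices, not the published map): combustion efficiency depends nonlinearly on the in-cylinder equivalence ratio, and unburned fuel carried in the residual gas perturbs the next cycle.

    ```python
    import math
    import random

    def combustion_efficiency(phi, phi_crit=0.64, width=0.04):
        """Sigmoidal burn efficiency versus equivalence ratio phi (illustrative form)."""
        return 1.0 / (1.0 + math.exp(-(phi - phi_crit) / width))

    def simulate_cycles(n=2000, phi_fresh=0.67, residual_frac=0.15, noise=0.01, seed=0):
        """Iterate engine cycles: unburned fuel in the residual gas from cycle k
        enriches the effective mixture of cycle k+1 (deterministic coupling),
        while small Gaussian fluctuations mimic stochastic parameter variations."""
        rng = random.Random(seed)
        carryover, heat_release = 0.0, []
        for _ in range(n):
            phi_eff = phi_fresh + carryover + rng.gauss(0.0, noise)
            eff = combustion_efficiency(phi_eff)
            heat_release.append(eff * phi_eff)                 # proxy for cycle heat release
            carryover = residual_frac * (1.0 - eff) * phi_eff  # unburned fuel carried to next cycle
        return heat_release

    hr = simulate_cycles()
    print(f"mean heat-release proxy: {sum(hr)/len(hr):.3f}, "
          f"min: {min(hr):.3f}, max: {max(hr):.3f}")
    ```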

  9. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
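
    A minimal sketch of a mixture-model fit of this kind, treating each star as a point in (RA, Dec, radial velocity) space and letting the EM algorithm separate two overlapping groups (the synthetic data and the choice of two components are assumptions for illustration):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)

    # Two synthetic "clusters": positions in degrees, radial velocities in km/s.
    cluster_a = np.column_stack([rng.normal(83.8, 0.1, 150),
                                 rng.normal(-5.4, 0.1, 150),
                                 rng.normal(25.0, 1.5, 150)])
    cluster_b = np.column_stack([rng.normal(83.9, 0.1, 100),
                                 rng.normal(-5.3, 0.1, 100),
                                 rng.normal(31.0, 1.5, 100)])
    stars = np.vstack([cluster_a, cluster_b])

    # EM-based maximum likelihood fit of a two-component Gaussian mixture.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(stars)
    labels = gmm.predict(stars)
    print("component means (RA, Dec, RV):\n", gmm.means_)
    print("membership counts:", np.bincount(labels))
    ```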

  10. Evaluation of 11 terrestrial carbon–nitrogen cycle models against observations from two temperate Free-Air CO2 Enrichment studies

    Science.gov (United States)

    Zaehle, Sönke; Medlyn, Belinda E; De Kauwe, Martin G; Walker, Anthony P; Dietze, Michael C; Hickler, Thomas; Luo, Yiqi; Wang, Ying-Ping; El-Masri, Bassil; Thornton, Peter; Jain, Atul; Wang, Shusen; Warlind, David; Weng, Ensheng; Parton, William; Iversen, Colleen M; Gallet-Budynek, Anne; McCarthy, Heather; Finzi, Adrien; Hanson, Paul J; Prentice, I Colin; Oren, Ram; Norby, Richard J

    2014-01-01

    We analysed the responses of 11 ecosystem models to elevated atmospheric [CO2] (eCO2) at two temperate forest ecosystems (Duke and Oak Ridge National Laboratory (ORNL) Free-Air CO2 Enrichment (FACE) experiments) to test alternative representations of carbon (C)–nitrogen (N) cycle processes. We decomposed the model responses into component processes affecting the response to eCO2 and confronted these with observations from the FACE experiments. Most of the models reproduced the observed initial enhancement of net primary production (NPP) at both sites, but none was able to simulate both the sustained 10-yr enhancement at Duke and the declining response at ORNL: models generally showed signs of progressive N limitation as a result of lower than observed plant N uptake. Nonetheless, many models showed qualitative agreement with observed component processes. The results suggest that improved representation of above-ground–below-ground interactions and better constraints on plant stoichiometry are important for a predictive understanding of eCO2 effects. Improved accuracy of soil organic matter inventories is pivotal to reduce uncertainty in the observed C–N budgets. The two FACE experiments are insufficient to fully constrain terrestrial responses to eCO2, given the complexity of factors leading to the observed diverging trends, and the consequential inability of the models to explain these trends. Nevertheless, the ecosystem models were able to capture important features of the experiments, lending some support to their projections. PMID:24467623

  11. Photometric observations and numerical modeling of AW Sge

    Science.gov (United States)

    Montgomery, M. M.; Voloshina, I.; Goel, Amit

    2016-01-01

    In this work, we present R-band photometric light curves of the cataclysmic variable AW Sge, an SU UMa type, near superoutburst maximum. The positive superhump shape changes over three days, from single-peaked on October 11, 2013 to one maximum near phase ϕ ~ 0.3 followed by minor peaks near phases ϕ ~ 0.6 and ϕ ~ 0.9 on October 13, 2013. Using the maxima from October 11-13, 2013 (JD 2456577-2456579), the observed positive superhump period is 0.074293 ± 0.000025 days. For comparison with the observations, we also provide a three-dimensional smoothed particle hydrodynamics simulation near superoutburst maximum, assuming a secondary-to-primary mass ratio q = M2/M1 = 0.132 M⊙/0.6 M⊙ = 0.22. The simulation produces positive superhump shapes that are similar to the observations. The simulated positive superhump has a period of 0.076923 days, which is approximately 6% longer than the orbital period, assuming an orbital period Porb = 0.0724 days. The 3.5% difference from the observed positive superhump period is likely due to the assumptions used in generating the simulations, as the orbital period and masses are not well known. In the simulated positive superhump shape near superoutburst maximum, the maximum occurs near ϕ ~ 0.3, when the disk is highly elliptical and eccentric and at least one of the two density waves is compressing against the disk rim. Based on the simulation, we find that the disk may be tilted and precessing in the retrograde direction at a time just before the next outburst and/or superoutburst.
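
    For reference, the quoted period relationships follow directly from the numbers in the abstract (a quick consistency check, not new data):

    \[
    \epsilon = \frac{P_{\mathrm{SH}} - P_{\mathrm{orb}}}{P_{\mathrm{orb}}}
             = \frac{0.076923 - 0.0724}{0.0724} \approx 0.062,
    \qquad
    \frac{0.076923 - 0.074293}{0.074293} \approx 0.035,
    \]

    i.e. the simulated superhump is about 6% longer than the assumed orbital period and about 3.5% longer than the observed superhump period.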

  12. Deep Orographic Gravity Wave Dynamics over Subantarctic Islands as Observed and Modeled during the Deep Propagating Gravity Wave Experiment (DEEPWAVE)

    Science.gov (United States)

    Eckermann, S. D.; Broutman, D.; Ma, J.; Doyle, J. D.; Pautet, P. D.; Taylor, M. J.; Bossert, K.; Williams, B. P.; Fritts, D. C.; Smith, R. B.; Kuhl, D.; Hoppel, K.; McCormack, J. P.; Ruston, B. C.; Baker, N. L.; Viner, K.; Whitcomb, T.; Hogan, T. F.; Peng, M.

    2016-12-01

    The Deep Propagating Gravity Wave Experiment (DEEPWAVE) was an international aircraft-based field program to observe and study the end-to-end dynamics of atmospheric gravity waves from 0-100 km altitude and their effects on atmospheric circulations. On 14 July 2014, aircraft remote-sensing instruments detected large-amplitude gravity-wave oscillations within mesospheric airglow and sodium layers downstream of the Auckland Islands, located 1000 km south of Christchurch, New Zealand. A high-altitude reanalysis and a three-dimensional Fourier gravity wave model are used to investigate the dynamics of this event from the surface to the mesosphere. At 0700 UTC, when the first observations were made, surface flow across the islands' terrain generated linear three-dimensional wavefields that propagated rapidly to ˜78 km altitude, where intense breaking occurred in a narrow layer beneath a zero-wind region at ˜83 km altitude. In the following hours, the altitude of weak winds descended under the influence of a large-amplitude migrating semidiurnal tide, leading to intense breaking of these wavefields in subsequent observations starting at 1000 UTC. The linear Fourier model constrained by upstream reanalysis reproduces the salient aspects of the observed wavefields, including horizontal wavelengths, phase orientations, temperature and vertical displacement amplitudes, heights and locations of incipient wave breaking, and momentum fluxes. Wave breaking has substantial effects on local circulations, with inferred layer-averaged westward mean-flow accelerations of ˜350 m s-1 hour-1 and dynamical heating rates of ˜8 K hour-1, supporting recent speculation of important impacts of orographic gravity waves from subantarctic islands on the mean circulation and climate of the middle atmosphere during austral winter. We also study deep orographic gravity waves from islands during DEEPWAVE more widely using observations from the Atmospheric Infrared Sounder (AIRS) and high-resolution high

  13. Ionospheric conductance distribution and MHD wave structure: observation and model

    Directory of Open Access Journals (Sweden)

    F. Budnik

    Full Text Available The ionosphere influences magnetohydrodynamic waves in the magnetosphere by damping because of Joule heating and by varying the wave structure itself. There are different eigenvalues and eigensolutions of the three dimensional toroidal wave equation if the height integrated Pedersen conductivity exceeds a critical value, namely the wave conductance of the magnetosphere. As a result a jump in frequency can be observed in ULF pulsation records. This effect mainly occurs in regions with gradients in the Pedersen conductances, as in the auroral oval or the dawn and dusk areas. A pulsation event recorded by the geostationary GOES-6 satellite is presented. We explain the observed change in frequency as a change in the wave structure while crossing the terminator. Furthermore, selected results of numerical simulations in a dipole magnetosphere with realistic ionospheric conditions are discussed. These are in good agreement with the observational data.

    Key words. Ionosphere (ionosphere-magnetosphere interactions) · Magnetospheric physics (magnetosphere-ionosphere interactions; MHD waves and instabilities)

  15. Comparing observed and modelled growth of larval herring (Clupea harengus): Testing individual-based model parameterisations

    Directory of Open Access Journals (Sweden)

    Helena M. Hauss

    2009-10-01

    Full Text Available Experiments that directly test larval fish individual-based model (IBM) growth predictions are uncommon since it is difficult to simultaneously measure all relevant metabolic and behavioural attributes. We compared observed and modelled somatic growth of larval herring (Clupea harengus) in short-term (50 degree-day) laboratory trials conducted at 7 and 13°C in which larvae were either unfed or fed ad libitum on different prey sizes (~100 to 550 µm copepods, Acartia tonsa). The larval specific growth rate (SGR, % DW d-1) was generally overestimated by the model, especially for larvae foraging on large prey items. Model parameterisations were adjusted to explore the effect of (1) temporal variability in foraging of individuals, and (2) reduced assimilation efficiency due to rapid gut evacuation at high feeding rates. With these adjustments, the model described larval growth well across temperatures, prey sizes, and larval sizes. Although the experiments performed verified the growth model, variability in growth and foraging behaviour among larvae shows that it is necessary to measure both the physiology and feeding behaviour of the same individual. This is a challenge for experimentalists but will ultimately yield the most valuable data to adequately model environmental impacts on the survival and growth of marine fish early life stages.

  16. Modelling and mapping tick dynamics using volunteered observations

    NARCIS (Netherlands)

    Garcia-Martí, Irene; Zurita-Milla, Raúl; Vliet, van Arnold J.H.; Takken, Willem

    2017-01-01

    Background: Tick populations and tick-borne infections have steadily increased since the mid-1990s posing an ever-increasing risk to public health. Yet, modelling tick dynamics remains challenging because of the lack of data and knowledge on this complex phenomenon. Here we present an approach to

  17. Blending geological observations and convection models to reconstruct mantle dynamics

    Science.gov (United States)

    Coltice, Nicolas; Bocher, Marie; Fournier, Alexandre; Tackley, Paul

    2015-04-01

    Knowledge of the state of the Earth's mantle and its temporal evolution is fundamental to a variety of disciplines in the Earth Sciences, from internal dynamics to its many expressions in the geological record (postglacial rebound, sea level change, ore deposits, tectonics or geomagnetic reversals). Mantle convection theory is the centerpiece for unravelling the present and past state of the mantle. For the past 40 years considerable efforts have been made to improve the quality of numerical models of mantle convection. However, they are still sparsely used to estimate the convective history of the solid Earth, in comparison to ocean or atmospheric models for weather and climate prediction. The main shortcoming is their inability to successfully produce Earth-like seafloor spreading and continental drift self-consistently. Recent convection models have begun to successfully predict these processes. Such a breakthrough opens the opportunity to retrieve the recent dynamics of the Earth's mantle by blending convection models with advanced geological datasets. A proof of concept will be presented, consisting of a synthetic test based on a sequential data assimilation methodology.

  18. Observations and modeling of deterministic properties of human ...

    Indian Academy of Sciences (India)

    Simple models show that in Type-I intermittency a characteristic U-shaped probability distribution is obtained for the laminar phase length. The laminar phase length distribution characteristic for Type-I intermittency may be obtained in human heart rate variability data for some cases of pathology. The heart and its regulatory ...

  19. Quantification of fungal growth: models, experiment, and observations

    NARCIS (Netherlands)

    Lamour, A.

    2002-01-01

    This thesis is concerned with the growth of microscopic mycelial fungi (Section I), and that of macroscopic fungi, which form specialised hyphal structures such as rhizomorphs (Section II). A growth model is developed in Section I in relation to soil organic

  20. Capturing Characteristics of Atmospheric Refractivity Using Observations and Modeling Approaches

    Science.gov (United States)

    2015-06-01

    ... model uses the Kiefer (1941) equation and doesn't correct for salinity as suggested by Sverdrup et al. (1942). The LKB-based evaporation duct ... [Table fragment: NOAA buoys 1-8 (station IDs 44066, 41048, 42003, 46025, 46029, 46054, 46011, 46042), at locations including Santa Barbara, Santa Maria and Monterey, with valid years 2009-2013.]

  1. Runoff modeling of the Mara River using Satellite Observed Soil ...

    African Journals Online (AJOL)

    ecosystem, famous for the scenic large scale seasonal wildebeest migration. In the south-western ... MATERIALS AND METHODS. 2.1. In-situ measurements. Runoff data was utilized for validation and calibration of the soil moisture-runoff model. The data was obtained for Mara ... In this study we apply a modified version of ...

  2. Aircraft-based Observations and Modeling of Wintertime Submicron Aerosol Composition over the Northeastern U.S.

    Science.gov (United States)

    Shah, V.; Jaegle, L.; Schroder, J. C.; Campuzano-Jost, P.; Jimenez, J. L.; Guo, H.; Sullivan, A.; Weber, R. J.; Green, J. R.; Fiddler, M.; Bililign, S.; Lopez-Hilfiker, F.; Lee, B. H.; Thornton, J. A.

    2017-12-01

    Submicron aerosol particles (PM1) remain a major air pollution concern in the urban areas of northeastern U.S. While SO2 and NOx emission controls have been effective at reducing summertime PM1 concentrations, this has not been the case for wintertime sulfate and nitrate concentrations, suggesting a nonlinear response during winter. During winter, organic aerosol (OA) is also an important contributor to PM1 mass despite low biogenic emissions, suggesting the presence of important urban sources. We use aircraft-based observations collected during the Wintertime INvestigation of Transport, Emissions and Reactivity (WINTER) campaign (Feb-March 2015), together with the GEOS-Chem chemical transport model, to investigate the sources and chemical processes governing wintertime PM1 over the northeastern U.S. The mean observed concentration of PM1 between the surface and 1 km was 4 μg m-3, about 30% of which was composed of sulfate, 20% nitrate, 10% ammonium, and 40% OA. The model reproduces the observed sulfate, nitrate and ammonium concentrations after updates to HNO3 production and loss, SO2 oxidation, and NH3 emissions. We find that 65% of the sulfate formation occurs in the aqueous phase, and 55% of nitrate formation through N2O5 hydrolysis, highlighting the importance of multiphase and heterogeneous processes during winter. Aqueous-phase sulfate production and the gas-particle partitioning of nitrate and ammonium are affected by atmospheric acidity, which in turn depends on the concentration of these species. We examine these couplings with GEOS-Chem, and assess the response of wintertime PM1 concentrations to further emission reductions based on the U.S. EPA projections for the year 2023. For OA, we find that the standard GEOS-Chem simulation underestimates the observed concentrations, but a simple parameterization developed from previous summer field campaigns is able to reproduce the observations and the contribution of primary and secondary OA. We find that

  3. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
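
    As a sketch of the stochastic (MCMC-Bayesian) side of such an inversion, the snippet below implements a generic random-walk Metropolis sampler with a Gaussian likelihood and flat prior bounds. In the study the forward model would be CLM4 and the observations would be flux or runoff time series; here `model`, `obs` and the parameter names are placeholders, not the paper's actual setup.

```python
import numpy as np

def log_posterior(theta, obs, model, sigma_obs, prior_bounds):
    """Gaussian likelihood of obs given model(theta), flat prior within bounds."""
    lo, hi = prior_bounds
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf
    resid = obs - model(theta)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

def metropolis(obs, model, sigma_obs, theta0, prior_bounds, step,
               n_iter=20_000, seed=0):
    """Random-walk Metropolis sampler for the parameter posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta, obs, model, sigma_obs, prior_bounds)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(prop, obs, model, sigma_obs, prior_bounds)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain   # posterior samples; widths give the predictive intervals
```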

  4. Reproducibility of surface roughness in reaming

    DEFF Research Database (Denmark)

    Müller, Pavel; De Chiffre, Leonardo

    An investigation on the reproducibility of surface roughness in reaming was performed to document the applicability of this approach for testing cutting fluids. Austenitic stainless steel was used as a workpiece material and HSS reamers as cutting tools. Reproducibility of the results was evaluat...

  5. Examination of reproducibility in microbiological degredation experiments

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Spliid, Henrik; Holst, Helle

    1998-01-01

    Experimental data indicate that certain microbiological degradation experiments have a limited reproducibility. Nine identical batch experiments were carried out on 3 different days to examine reproducibility. A pure culture, isolated from soil, grew with toluene as the only carbon and energy...

  6. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. It differs from other systems for reproducible analysis in several ways. The two main differences are: (1) Several statistics programs can be used in the same document. (2) Documents can be prepared using OpenOffice or ...

  7. GIS for large-scale watershed observational data model

    Science.gov (United States)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  8. Reduced modeling and state observation of an activated sludge process.

    Science.gov (United States)

    Queinnec, Isabelle; Gómez-Quintero, Claudia-Sophya

    2009-01-01

    This article first proposes a reduction strategy of the activated sludge process model with alternated aeration. Initiated with the standard activated sludge model (ASM1), the reduction is based on some biochemical considerations followed by linear approximations of nonlinear terms. Two submodels are then obtained, one for the aerobic phase and one for the anoxic phase, using four state variables related to the organic substrate concentration, the ammonium and nitrate-nitrite nitrogen, and the oxygen concentration. Then, a two-step robust estimation strategy is used to estimate both the unmeasured state variables and the unknown inflow ammonium nitrogen concentration. Parameter uncertainty is considered in the dynamics and input matrices of the system. 2009 American Institute of Chemical Engineers
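
    To illustrate the state-observation idea in its simplest form (this is not the authors' two-step robust estimator, and the matrices below are not the reduced ASM1 model), here is a minimal discrete-time Luenberger observer that reconstructs an unmeasured state from a measured output:

```python
import numpy as np

# Illustrative 2-state linear system x[k+1] = A x + B u, y = C x;
# only the first state is measured.  Matrices and gain are placeholders.
A = np.array([[0.95, 0.02],
              [0.00, 0.90]])
B = np.array([[0.10],
              [0.05]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.40],
              [0.20]])        # observer gain (normally designed, e.g. by pole placement)

def observer_step(x_hat, u, y):
    """Advance the state estimate one step using the measured output y."""
    innovation = y - C @ x_hat
    return A @ x_hat + B @ u + L @ innovation

# Example: the estimate converges to the true state from a wrong initial guess.
x, x_hat = np.array([[1.0], [0.5]]), np.zeros((2, 1))
for k in range(50):
    u = np.array([[1.0]])
    y = C @ x                      # measurement of the true system
    x_hat = observer_step(x_hat, u, y)
    x = A @ x + B @ u              # true system propagates in parallel
print("true state:", x.ravel(), "estimate:", x_hat.ravel())
```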

  9. Metric versus observable operator representation, higher spin models

    Science.gov (United States)

    Fring, Andreas; Frith, Thomas

    2018-02-01

    We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for a Hermitian and non-Hermitian spin-1/2, 1 and 3/2 model with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduces to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
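
    In the notation commonly used in this line of work (η(t) the time-dependent Dyson map, ρ(t) = η†(t)η(t) the metric, h(t) Hermitian and H(t) non-Hermitian), the two relations referred to above take the form shown here; this is quoted for orientation in the standard conventions rather than copied from the paper's own equations:

    \[
    h(t) = \eta(t)\,H(t)\,\eta^{-1}(t) + i\hbar\,\bigl[\partial_t \eta(t)\bigr]\eta^{-1}(t),
    \qquad
    H^{\dagger}(t)\,\rho(t) - \rho(t)\,H(t) = i\hbar\,\partial_t \rho(t),
    \quad \rho(t) = \eta^{\dagger}(t)\,\eta(t).
    \]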

  10. Uniform relativistic universe models with pressure. Part 2. Observational tests

    International Nuclear Information System (INIS)

    Krempec, J.; Krygier, B.

    1977-01-01

    The magnitude-redshift and angular diameter-redshift relations are discussed for the uniform (homogeneous and isotropic) relativistic Universe models with pressure. The inclusion of pressure in the energy-momentum tensor has given larger values of the deceleration parameter q. An increase of the deceleration parameter has led to the brightening of objects as well as to slightly larger angular diameters. (author)
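
    For orientation, the standard low-redshift forms of the two relations being tested are given below (d_L is the luminosity distance, M the absolute magnitude, ℓ the linear size of the object); these are the generic FRW expressions, not the pressure-modified ones derived in the paper:

    \[
    d_L(z) \simeq \frac{c}{H_0}\Bigl[z + \tfrac{1}{2}\,(1 - q_0)\,z^2 + \dots \Bigr],
    \qquad
    m(z) = M + 5\log_{10}\frac{d_L(z)}{10\ \mathrm{pc}},
    \qquad
    \theta(z) = \frac{\ell\,(1+z)^2}{d_L(z)},
    \]

    so a larger deceleration parameter q_0 reduces d_L at fixed redshift, making objects appear brighter and their angular diameters larger, consistent with the trend described above.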

  11. Observation of the Meissner effect in a lattice Higgs model

    Science.gov (United States)

    Damgaard, Poul H.; Heller, Urs M.

    1988-01-01

    The lattice-regularized U(1) Higgs model in an external electromagnetic field is studied by Monte Carlo techniques. In the Coulomb phase, magnetic flux can flow through uniformly. The Higgs phase splits into a region where magnetic flux can penetrate only in the form of vortices and a region where the magnetic flux is completely expelled, the relativistic analog of the Meissner effect in superconductivity. Evidence is presented for symmetry restoration in strong external fields.

  12. Reproducing (and Disrupting) Heteronormativity: Gendered Sexual Socialization in Preschool Classrooms

    Science.gov (United States)

    Gansen, Heidi M.

    2017-01-01

    Using ethnographic data from 10 months of observations in nine preschool classrooms, I examine gendered sexual socialization children receive from teachers' practices and reproduce through peer interactions. I find heteronormativity permeates preschool classrooms, where teachers construct (and occasionally disrupt) gendered sexuality in a number…

  13. Inter- and intra-laboratory study to determine the reproducibility of toxicogenomics datasets.

    Science.gov (United States)

    Scott, D J; Devonshire, A S; Adeleye, Y A; Schutte, M E; Rodrigues, M R; Wilkes, T M; Sacco, M G; Gribaldo, L; Fabbri, M; Coecke, S; Whelan, M; Skinner, N; Bennett, A; White, A; Foy, C A

    2011-11-28

    The application of toxicogenomics as a predictive tool for chemical risk assessment has been under evaluation by the toxicology community for more than a decade. However, it predominately remains a tool for investigative research rather than for regulatory risk assessment. In this study, we assessed whether the current generation of microarray technology in combination with an in vitro experimental design was capable of generating robust, reproducible data of sufficient quality to show promise as a tool for regulatory risk assessment. To this end, we designed a prospective collaborative study to determine the level of inter- and intra-laboratory reproducibility between three independent laboratories. All test centres (TCs) adopted the same protocols for all aspects of the toxicogenomic experiment including cell culture, chemical exposure, RNA extraction, microarray data generation and analysis. As a case study, the genotoxic carcinogen benzo[a]pyrene (B[a]P) and the human hepatoma cell line HepG2 were used to generate three comparable toxicogenomic data sets. High levels of technical reproducibility were demonstrated using a widely employed gene expression microarray platform. While differences at the global transcriptome level were observed between the TCs, a common subset of B[a]P responsive genes (n=400 gene probes) was identified at all TCs which included many genes previously reported in the literature as B[a]P responsive. These data show promise that the current generation of microarray technology, in combination with a standard in vitro experimental design, can produce robust data that can be generated reproducibly in independent laboratories. Future work will need to determine whether such reproducible in vitro model(s) can be predictive for a range of toxic chemicals with different mechanisms of action and thus be considered as part of future testing regimes for regulatory risk assessment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Science-Grade Observing Systems as Process Observatories: Mapping and Understanding Nonlinearity and Multiscale Memory with Models and Observations

    Science.gov (United States)

    Barros, A. P.; Wilson, A. M.; Miller, D. K.; Tao, J.; Genereux, D. P.; Prat, O.; Petersen, W. A.; Brunsell, N. A.; Petters, M. D.; Duan, Y.

    2015-12-01

    Using the planet as a study domain and collecting observations over unprecedented ranges of spatial and temporal scales, NASA's EOS (Earth Observing System) program was an agent of transformational change in Earth Sciences over the last thirty years. The remarkable space-time organization and variability of atmospheric and terrestrial moist processes that emerged from the analysis of comprehensive satellite observations provided much impetus to expand the scope of land-atmosphere interaction studies in Hydrology and Hydrometeorology. Consequently, input and output terms in the mass and energy balance equations evolved from being treated as fluxes that can be used as boundary conditions, or forcing, to being viewed as dynamic processes of a coupled system interacting at multiple scales. Measurements of states or fluxes are most useful if together they map, reveal and/or constrain the underlying physical processes and their interactions. This can only be accomplished through an integrated observing system designed to capture the coupled physics, including nonlinear feedbacks and tipping points. Here, we first review and synthesize lessons learned from hydrometeorology studies in the Southern Appalachians and in the Southern Great Plains using both ground-based and satellite observations, physical models and data-assimilation systems. We will specifically focus on mapping and understanding nonlinearity and multiscale memory of rainfall-runoff processes in mountainous regions. It will be shown that beyond technical rigor, variety, quantity and duration of measurements, the utility of observing systems is determined by their interpretive value in the context of physical models to describe the linkages among different observations. Second, we propose a framework for designing science-grade and science-minded process-oriented integrated observing and modeling platforms for hydrometeorological studies.

  15. Reproducibility principles, problems, practices, and prospects

    CERN Document Server

    Maasen, Sabine

    2016-01-01

    Featuring peer-reviewed contributions from noted experts in their fields of research, Reproducibility: Principles, Problems, Practices, and Prospects presents state-of-the-art approaches to reproducibility, the gold standard of sound science, from multi- and interdisciplinary perspectives. Including comprehensive coverage for implementing and reflecting the norm of reproducibility in various pertinent fields of research, the book focuses on how the reproducibility of results is applied, how it may be limited, and how such limitations can be understood or even controlled in the natural sciences, computational sciences, life sciences, social sciences, and studies of science and technology. The book presents many chapters devoted to a variety of methods and techniques, as well as their epistemic and ontological underpinnings, which have been developed to safeguard reproducible research and curtail deficits and failures. The book also investigates the political, historical, and social practices that underlie repro...

  16. Direct observation of intermediate states in model membrane fusion

    Science.gov (United States)

    Keidel, Andrea; Bartsch, Tobias F.; Florin, Ernst-Ludwig

    2016-01-01

    We introduce a novel assay for membrane fusion of solid supported membranes on silica beads and on coverslips. Fusion of the lipid bilayers is induced by bringing an optically trapped bead in contact with the coverslip surface while observing the bead’s thermal motion with microsecond temporal and nanometer spatial resolution using a three-dimensional position detector. The probability of fusion is controlled by the membrane tension on the particle. We show that the progression of fusion can be monitored by changes in the three-dimensional position histograms of the bead and in its rate of diffusion. We were able to observe all fusion intermediates including transient fusion, formation of a stalk, hemifusion and the completion of a fusion pore. Fusion intermediates are characterized by axial but not lateral confinement of the motion of the bead and independently by the change of its rate of diffusion due to the additional drag from the stalk-like connection between the two membranes. The detailed information provided by this assay makes it ideally suited for studies of early events in pure lipid bilayer fusion or fusion assisted by fusogenic molecules. PMID:27029285

  17. Committed warming inferred from observations and an energy balance model

    Science.gov (United States)

    Pincus, R.; Mauritsen, T.

    2017-12-01

    Due to the lifetime of CO2 and the thermal inertia of the ocean, the Earth's climate is not equilibrated with anthropogenic forcing. As a result, even if fossil fuel emissions were to suddenly cease, some level of committed warming is expected due to past emissions. Here, we provide an observation-based quantification of this committed warming using the instrument record of global-mean warming, recently improved estimates of Earth's energy imbalance, and estimates of radiative forcing from the fifth IPCC assessment report. Compared to pre-industrial levels, we find a committed warming of 1.5K [0.9-3.6, 5-95 percentile] at equilibrium, and of 1.3K [0.9-2.3] within this century. However, when assuming that ocean carbon uptake cancels remnant greenhouse gas-induced warming on centennial timescales, committed warming is reduced to 1.1K [0.7-1.8]. Conservatively, there is a 32% risk that committed warming already exceeds the 1.5K target set in Paris, and that this target will likely be crossed prior to 2053. Regular updates of these observationally constrained committed warming estimates, though simplistic, can provide transparent guidance as uncertainty regarding transient climate sensitivity inevitably narrows and understanding of the limitations of the framework is advanced.
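
    The zero-dimensional energy-balance relation that underlies this kind of estimate (a sketch of the approach, not necessarily the authors' exact formulation) links the observed warming ΔT_obs, the radiative forcing F and the Earth's energy imbalance N through the feedback parameter λ:

    \[
    \lambda = \frac{F - N}{\Delta T_{\mathrm{obs}}},
    \qquad
    \Delta T_{\mathrm{eq}} = \frac{F}{\lambda} = \Delta T_{\mathrm{obs}}\,\frac{F}{F - N},
    \]

    so the committed equilibrium warming exceeds the observed warming as long as the planet is still taking up heat (N > 0).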

  18. An Exospheric Temperature Model Based On CHAMP Observations and TIEGCM Simulations

    Science.gov (United States)

    Ruan, Haibing; Lei, Jiuhou; Dou, Xiankang; Liu, Siqing; Aa, Ercha

    2018-02-01

    In this work, thermospheric densities from the accelerometer measurements on board the CHAMP satellite during 2002-2009 and the simulations from the National Center for Atmospheric Research Thermosphere Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM) are employed to develop an empirical exospheric temperature model (ETM). The two-dimensional basis functions of the ETM are first provided from the principal component analysis of the TIEGCM simulations. Based on the exospheric temperatures derived from CHAMP thermospheric densities, a global distribution of the exospheric temperatures is reconstructed. A parameterization is conducted for each basis function amplitude as a function of solar-geophysical and seasonal conditions. Thus, the ETM can be utilized to model the thermospheric temperature and mass density under a specified condition. Our results show that the average standard deviation of the ETM is generally less than 10%, compared with approximately 30% for the MSIS model. In addition, the ETM reproduces the global thermospheric evolution, including the equatorial thermosphere anomaly.
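
    A minimal sketch of the two-step construction described (PCA basis functions from the model fields, then a parameterization of the mode amplitudes in terms of solar-geophysical drivers) might look as follows. The file names, the number of modes and the plain linear regression are assumptions; the actual ETM fits its amplitudes to CHAMP-derived temperatures with a more elaborate parameterization.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# T_model: (n_samples, n_lat * n_lon) exospheric temperature maps from the GCM.
# drivers: (n_samples, n_drivers) solar-geophysical/seasonal predictors
# (e.g. F10.7, Kp, day of year).  Both file names are hypothetical.
T_model = np.load("tiegcm_texo_maps.npy")
drivers = np.load("drivers.npy")

# Basis functions from PCA of the simulated fields.
pca = PCA(n_components=5)
amps = pca.fit_transform(T_model)        # amplitude time series of each mode

# Parameterize each mode amplitude as a function of the drivers.
fits = [LinearRegression().fit(drivers, amps[:, k])
        for k in range(pca.n_components_)]

def predict_map(driver_row):
    """Reconstruct a global temperature map for one set of conditions."""
    a = np.array([f.predict(driver_row[None, :])[0] for f in fits])
    return pca.mean_ + a @ pca.components_
```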

  19. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    Science.gov (United States)

    Nadiga, Balasubramanya; Urban, Nathan

    2017-04-01

    First-principles-based Earth System Models (ESMs) are central to both improving our understanding of the climate system and developing climate projections. Nevertheless, given the diversity of climate simulated by the various ESMs and the intense computational burden associated with running such models, simple climate models (SCMs) are key to being able to compare ESMs and the climates they simulate in a dynamically meaningful fashion. We present some preliminary work along these lines. In an application of an SCM to compare different ESMs and observations, we demonstrate a deficiency in the commonly-used upwelling-diffusion (UD) energy balance model (EBM). When we consider the vertical distribution of ocean heat uptake, the lack of representation of processes such as deep water formation and subduction in the UD-EBM precludes a reasonable representation of the vertical distribution of heat uptake in that model. We then demonstrate how the problem can be remedied by introducing a parameterization of such processes in the UD-EBM. With further development, it is anticipated that this approach of ESM inter-comparison using simple physics-based models will lead to further insights into aspects of the climate response such as its stability and sensitivity, uncertainty and predictability, and underlying flow structure and topology.

  20. Observations and modelling of sea level variability in the Bay of Biscay in the framework of the ENIGME project.

    Science.gov (United States)

    Jordà, Gabriel; Marcos, Marta; Pineau-Guillou, Lucia; Vandermeirsch, Frederic; Theetten, Sebastien; Charria, Guillaume

    2017-04-01

    In a climate change context, understanding the variability of physical properties along the coasts and its link to large-scale processes is of paramount importance in order to project how global warming will affect coastal environments. In this framework, the ENIGME project aims to implement and validate a suite of high-resolution numerical models in the Bay of Biscay (NE Atlantic) in order to better represent the interannual variations of physical properties. In this presentation we focus on sea level variations at the coast, characterized from observations (tide gauges and altimetry) and models (barotropic and baroclinic, in different configurations). In a first step we characterize the mechanisms behind sea level variations at time scales from hours to decades. Most of the variability is associated with tides, while atmospherically induced variations (meteorological tides) dominate the residuals at all frequencies (except in narrow shelf areas, where the open-sea dynamics have a non-negligible influence on coastal sea level variability). In those cases, the treatment of the open boundaries in regional circulation models and the quality of the information there (i.e. from the OGCM) are crucial for a good representation of variations in coastal areas. Finally, we have also noticed that none of the models is able to correctly reproduce the long-term trends, which are dominated by large-scale processes.

  1. Analysis and Modeling of Jovian Radio Emissions Observed by Galileo

    Science.gov (United States)

    Menietti, J. D.

    2003-01-01

    Our studies of Jovian radio emission have resulted in the publication of five papers in refereed journals, with three additional papers in progress. The topics of these papers include the study of narrow-band kilometric radio emission; the apparent control of radio emission by Callisto; quasi-periodic radio emission; hectometric attenuation lanes and their relationship to Io volcanic activity; and modeling of HOM attenuation lanes using ray tracing. A further study of the control of radio emission by Jovian satellites is currently in progress. Abstracts of each of these papers are contained in the Appendix. A list of the publication titles is also included.

  2. Modelling and mapping tick dynamics using volunteered observations.

    Science.gov (United States)

    Garcia-Martí, Irene; Zurita-Milla, Raúl; van Vliet, Arnold J H; Takken, Willem

    2017-11-14

    Tick populations and tick-borne infections have steadily increased since the mid-1990s, posing an ever-increasing risk to public health. Yet, modelling tick dynamics remains challenging because of the lack of data and knowledge on this complex phenomenon. Here we present an approach to model and map tick dynamics using volunteered data. This approach is illustrated with 9 years of data collected by a group of trained volunteers who sampled active questing ticks (AQT) on a monthly basis at 15 locations in the Netherlands. We aimed to find the main environmental drivers of AQT at multiple time-scales and to devise daily AQT maps at the national level for 2014. Tick dynamics is a complex ecological problem driven by biotic (e.g. pathogens, wildlife, humans) and abiotic (e.g. weather, landscape) factors. We enriched the volunteered AQT collection with six types of weather variables (aggregated at 11 temporal scales), three types of satellite-derived vegetation indices, land cover, and mast years. Then, we applied a feature engineering process to derive a set of 101 features to characterize the conditions that yielded a particular count of AQT on a date and location. To devise models predicting the AQT, we use a time-aware Random Forest regression method, which is suitable for finding non-linear relationships in complex ecological problems, and provides an estimation of the most important features to predict the AQT. We trained a model capable of fitting AQT with reduced statistical metrics. The multi-temporal study on the feature importance indicates that variables linked to water levels in the atmosphere (i.e. evapotranspiration, relative humidity) consistently showed a higher explanatory power than previous works using temperature. As a product of this study, we are able to map daily tick dynamics at the national level. This study paves the way towards the design of new applications in the fields of environmental research, nature management, and public
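
    A compact sketch of the modelling step described (a Random Forest regression on engineered features, with a time-aware split so the model is never evaluated on dates it has already seen) is shown below. The file and column names are hypothetical, and the study's own method uses its full set of 101 features and its own time-aware variant rather than this plain train/test split.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical table: one row per (site, date) with the engineered features
# (weather aggregates, vegetation indices, land cover, mast year) and the
# AQT count sampled by the volunteers.
df = pd.read_csv("aqt_features.csv", parse_dates=["date"]).sort_values("date")
features = [c for c in df.columns if c not in ("site", "date", "aqt")]

# Time-aware split: train on earlier years, evaluate on the final year.
train = df[df.date.dt.year < 2014]
test = df[df.date.dt.year == 2014]

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(train[features], train["aqt"])
print("R^2 on held-out year:", rf.score(test[features], test["aqt"]))

# Feature importances highlight the dominant environmental drivers.
ranking = sorted(zip(rf.feature_importances_, features), reverse=True)
for imp, name in ranking[:10]:
    print(f"{name}: {imp:.3f}")
```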

  3. Modelling and mapping tick dynamics using volunteered observations

    Directory of Open Access Journals (Sweden)

    Irene Garcia-Martí

    2017-11-01

    Full Text Available Abstract Background Tick populations and tick-borne infections have steadily increased since the mid-1990s, posing an ever-increasing risk to public health. Yet, modelling tick dynamics remains challenging because of the lack of data and knowledge on this complex phenomenon. Here we present an approach to model and map tick dynamics using volunteered data. This approach is illustrated with 9 years of data collected by a group of trained volunteers who sampled active questing ticks (AQT) on a monthly basis at 15 locations in the Netherlands. We aimed to find the main environmental drivers of AQT at multiple time-scales and to devise daily AQT maps at the national level for 2014. Method Tick dynamics is a complex ecological problem driven by biotic (e.g. pathogens, wildlife, humans) and abiotic (e.g. weather, landscape) factors. We enriched the volunteered AQT collection with six types of weather variables (aggregated at 11 temporal scales), three types of satellite-derived vegetation indices, land cover, and mast years. Then, we applied a feature engineering process to derive a set of 101 features to characterize the conditions that yielded a particular count of AQT on a date and location. To devise models predicting the AQT, we use a time-aware Random Forest regression method, which is suitable for finding non-linear relationships in complex ecological problems, and provides an estimation of the most important features to predict the AQT. Results We trained a model capable of fitting AQT with reduced statistical metrics. The multi-temporal study on the feature importance indicates that variables linked to water levels in the atmosphere (i.e. evapotranspiration, relative humidity) consistently showed a higher explanatory power than previous works using temperature. As a product of this study, we are able to map daily tick dynamics at the national level. Conclusions This study paves the way towards the design of new applications in the fields

  4. Modelling the widths of fission observables in GEF

    Directory of Open Access Journals (Sweden)

    Schmidt K.-H.

    2013-03-01

    Full Text Available The widths of the mass distributions of the different fission channels are traced back to the probability distributions of the corresponding quantum oscillators that are coupled to the heat bath, which is formed by the intrinsic degrees of freedom of the fissioning system under the influence of pairing correlations and shell effects. Following conclusions from the stochastic calculations of Adeev and Pashkevich, an early freezing due to dynamical effects is assumed. It is shown that the mass width of the fission channels in low-energy fission is strongly influenced by the zero-point motion of the corresponding quantum oscillator. The observed variation of the mass widths of the asymmetric fission channels with excitation energy is attributed to the energy-dependent properties of the heat bath and not to the population of excited states of the corresponding quantum oscillator.
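
    The picture invoked here is usually summarized by the textbook expression for the coordinate variance of a quantum oscillator of frequency ω and stiffness C coupled to a heat bath at temperature T (quoted for orientation; GEF's actual prescription adds the energy-dependent shell and pairing effects mentioned above):

    \[
    \sigma^2 = \frac{\hbar\omega}{2C}\,\coth\!\left(\frac{\hbar\omega}{2T}\right)
    \;\longrightarrow\;
    \begin{cases}
      \dfrac{\hbar\omega}{2C}, & T \ll \hbar\omega \quad \text{(zero-point motion)},\\[2ex]
      \dfrac{T}{C}, & T \gg \hbar\omega \quad \text{(classical limit)},
    \end{cases}
    \]

    which makes explicit why the zero-point term dominates the mass widths at low excitation energy.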

  5. High-Energy Aspects of Solar Flares: Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wei [Lockheed Martin Solar and Astrophysics Laboratory; Guo, Fan [Los Alamos National Laboratory

    2015-07-21

    The paper begins by describing the structure of the Sun, with emphasis on the corona. The Sun is a unique plasma laboratory, which can be probed by Sun-grazing comets, and is the driver of space weather. Energization and particle acceleration mechanisms in solar flares are presented; magnetic reconnection is key to understanding stochastic acceleration mechanisms. Then the coupling between kinetic and fluid aspects is taken up; the next step is the feedback of the atmospheric response on the acceleration process, i.e. rapid quenching of acceleration. Future challenges include applications of stochastic acceleration to solar energetic particles (SEPs), Fermi γ-ray observations, fast-mode magnetosonic wave trains in a funnel-shaped wave guide associated with flare pulsations, and the new SMEX mission IRIS (Interface Region Imaging Spectrograph).

  6. Slow Solar Wind: Observable Characteristics for Constraining Modelling

    Science.gov (United States)

    Ofman, L.; Abbo, L.; Antiochos, S. K.; Hansteen, V. H.; Harra, L.; Ko, Y. K.; Lapenta, G.; Li, B.; Riley, P.; Strachan, L.; von Steiger, R.; Wang, Y. M.

    2015-12-01

    The origin of the slow solar wind (SSW) is an open issue in the post-SOHO era and forms a major objective for planned future missions such as Solar Orbiter and Solar Probe Plus. Results from spacecraft data, combined with theoretical modeling, have helped to investigate many aspects of the SSW. Fundamental physical properties of the coronal plasma have been derived from spectroscopic and imaging remote-sensing data and in-situ data, and these results have provided crucial insights for a deeper understanding of the origin and acceleration of the SSW. Advanced models of the SSW in coronal streamers and other structures have been developed using 3D MHD and multi-fluid equations. Nevertheless, several questions are still debated: What are the source regions of the SSW, and what are their contributions to it? What is the role of the coronal magnetic topology in the origin, acceleration and energy deposition of the SSW? What are the possible acceleration and heating mechanisms for the SSW? The aim of this study is to present the insights on SSW origin and formation that arose during the discussions of the International Space Science Institute (ISSI) Team "Slow solar wind sources and acceleration mechanisms in the corona", held in Bern (Switzerland) in March 2014-2015. The attached figure will be presented to summarize the different hypotheses of SSW formation.

  7. Standard Model in multiscale theories and observational constraints

    Science.gov (United States)

    Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David

    2016-08-01

    We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute upper bound t*28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute upper bounds t*35 MeV. For α0 = 1/2, the Lamb shift alone yields t*450 GeV.

  8. Atmospheric methane variability at the Peterhof station (Russia): ground-based observations and modeling

    Science.gov (United States)

    Makarova, Maria; Kirner, Oliver; Poberovskii, Anatoliy; Imhasin, Humud; Timofeyev, Yuriy; Virolainen, Yana; Makarov, Boris

    2014-05-01

    The Peterhof station (59.88 N, 29.83 E, 20 m asl) for atmospheric monitoring was founded by Saint Petersburg State University, Russia. FTIR (Fourier transform IR) observations of the methane total column have been carried out with a Bruker IFS125 HR spectrometer since 2009. The study presents a joint analysis of experimental data and EMAC (ECHAM/MESSy Atmospheric Chemistry model) simulations for Peterhof over the period 2009-2012. CH4 total columns (TC) and column-averaged dry-air mole fractions (MF) obtained from the observations are higher than the model results, with differences of 1.3% and 0.3%, respectively. The correlation coefficients between the FTIR and EMAC data are statistically significant (at the 95% confidence level) and equal to 0.82 ± 0.08 and 0.4 ± 0.1 for the TC and MF of CH4, respectively. The high correlation for TC shows that EMAC adequately reproduces CH4 variability due to meteorological processes in the atmosphere. On the other hand, the relatively low correlation coefficient for CH4 MF probably indicates insufficiently precise knowledge of the sources and sinks of atmospheric methane. The amplitudes of the mean annual cycle of CH4 TC for the experimental and model datasets (2009-2012) are 2.1% and 1.5%, respectively. The corresponding amplitudes for MF are smaller than for TC: 1.1% for FTIR and 0.6% for EMAC. The difference between the FTIR and EMAC annual variations has a pronounced seasonality with a maximum in September-November. This could be attributed to an underestimation of natural methane sources in the emission inventory used for the EMAC simulations or to the relatively coarse horizontal grid of the model (2.8°x2.8°). The analysis of the modeling results also allowed us to estimate the influence of the limited number of sunny days with FTIR measurements (i.e. the specific meteorological conditions that usually prevail during FTIR observations) on the FTIR estimates of the mean levels of TC and MF over 2009-2012. The systematic shifts of FTIR mean levels of TC and

  9. Quantifying reproducibility in computational biology: the case of the tuberculosis drugome.

    Science.gov (United States)

    Garijo, Daniel; Kinnings, Sarah; Xie, Li; Xie, Lei; Zhang, Yinliang; Bourne, Philip E; Gil, Yolanda

    2013-01-01

    How easy is it to reproduce the results found in a typical computational biology paper? Either through experience or intuition the reader will already know that the answer is with difficulty or not at all. In this paper we attempt to quantify this difficulty by reproducing a previously published paper for different classes of users (ranging from users with little expertise to domain experts) and suggest ways in which the situation might be improved. Quantification is achieved by estimating the time required to reproduce each of the steps in the method described in the original paper and make them part of an explicit workflow that reproduces the original results. Reproducing the method took several months of effort, and required using new versions and new software that posed challenges to reconstructing and validating the results. The quantification leads to "reproducibility maps" that reveal that novice researchers would only be able to reproduce a few of the steps in the method, and that only expert researchers with advance knowledge of the domain would be able to reproduce the method in its entirety. The workflow itself is published as an online resource together with supporting software and data. The paper concludes with a brief discussion of the complexities of requiring reproducibility in terms of cost versus benefit, and a desiderata with our observations and guidelines for improving reproducibility. This has implications not only in reproducing the work of others from published papers, but reproducing work from one's own laboratory.

  10. Quantifying reproducibility in computational biology: the case of the tuberculosis drugome.

    Directory of Open Access Journals (Sweden)

    Daniel Garijo

    Full Text Available How easy is it to reproduce the results found in a typical computational biology paper? Either through experience or intuition the reader will already know that the answer is with difficulty or not at all. In this paper we attempt to quantify this difficulty by reproducing a previously published paper for different classes of users (ranging from users with little expertise to domain experts) and suggest ways in which the situation might be improved. Quantification is achieved by estimating the time required to reproduce each of the steps in the method described in the original paper and make them part of an explicit workflow that reproduces the original results. Reproducing the method took several months of effort, and required using new versions and new software that posed challenges to reconstructing and validating the results. The quantification leads to "reproducibility maps" that reveal that novice researchers would only be able to reproduce a few of the steps in the method, and that only expert researchers with advance knowledge of the domain would be able to reproduce the method in its entirety. The workflow itself is published as an online resource together with supporting software and data. The paper concludes with a brief discussion of the complexities of requiring reproducibility in terms of cost versus benefit, and a desiderata with our observations and guidelines for improving reproducibility. This has implications not only in reproducing the work of others from published papers, but reproducing work from one's own laboratory.

  11. The climatology of carbon monoxide and water vapor on Mars as observed by CRISM and modeled by the GEM-Mars general circulation model

    Science.gov (United States)

    Smith, Michael D.; Daerden, Frank; Neary, Lori; Khayat, Alain

    2018-02-01

    Radiative transfer modeling of near-infrared spectra taken by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument onboard Mars Reconnaissance Orbiter (MRO) enables the column-integrated abundance of carbon monoxide (CO) and water vapor (H2O) to be retrieved. These results provide a detailed global description of the seasonal and spatial distribution of CO in the Mars atmosphere and new information about the interannual variability of H2O. The CRISM retrievals show the seasonally and globally averaged carbon monoxide mixing ratio to be near 800 ppm, but with strong seasonal variations, especially at high latitudes. At low latitudes, the carbon monoxide mixing ratio varies in response to the mean seasonal cycle of surface pressure and shows little variation with topography. At high latitudes, carbon monoxide is depleted in the summer hemisphere by a factor of two or more, while in the winter hemisphere there is relatively higher mixing ratio in regions with low-lying topography. Water vapor shows only modest interannual variations, with the largest observed difference being unusually dry conditions in the wake of the Mars Year 28 global dust storm. Modeling results from the GEM-Mars general circulation model generally reproduce the observed seasonal and spatial trends and provide insight into the underlying physical processes.

  12. Status of standard model predictions and uncertainties for electroweak observables

    International Nuclear Information System (INIS)

    Kniehl, B.A.

    1993-11-01

    Recent progress in theoretical predictions of electroweak parameters beyond one loop in the standard model is reviewed. The topics include universal corrections of O(G_F^2 M_H^2 M_W^2), O(G_F^2 m_t^4), O(α_s G_F M_W^2), and those due to virtual t anti-t threshold effects, as well as specific corrections to Γ(Z → b anti-b) of O(G_F^2 m_t^4), O(α_s G_F m_t^2), and O(α_s^2 m_b^2/M_Z^2). An update of the hadronic contributions to Δα is presented. Theoretical uncertainties, other than those due to the lack of knowledge of M_H and m_t, are estimated. (orig.)

  13. Topography of inland deltas: Observations, modeling, and experiments

    Science.gov (United States)

    Seybold, H. J.; Molnar, P.; Akca, D.; Doumi, M.; Cavalcanti Tavares, M.; Shinbrot, T.; Andrade, J. S.; Kinzelbach, W.; Herrmann, H. J.

    2010-04-01

    The topography of inland deltas is influenced by the water-sediment balance in distributary channels and local evaporation and seepage rates. In this letter a reduced complexity model is applied to simulate inland delta formation, and results are compared with the Okavango Delta, Botswana and with a laboratory experiment. We show that water loss in inland deltas produces fundamentally different dynamics of water and sediment transport than coastal deltas, especially deposition associated with expansion-contraction dynamics at the channel head. These dynamics lead to a systematic decrease in the mean topographic slope of the inland delta with distance from the apex following a power law with exponent α = -0.69 ± 0.02 where the data for both simulation and experiment can be collapsed onto a single curve. In coastal deltas, on the contrary, the slope increases toward the end of the deposition zone.
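
    Written out as a formula, the reported scaling of the mean topographic slope S with distance r from the delta apex is (r_0 is an arbitrary reference distance introduced here only for normalization):

    \[
    S(r) = S(r_0)\left(\frac{r}{r_0}\right)^{\alpha},
    \qquad \alpha = -0.69 \pm 0.02 .
    \]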

  14. Regional-Scale Climate Change: Observations and Model Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, Raymond S; Diaz, Henry F

    2010-12-14

    This collaborative proposal addressed key issues in understanding the Earth's climate system, as highlighted by the U.S. Climate Science Program. The research focused on documenting past climatic changes and on assessing future climatic changes based on suites of global and regional climate models. Geographically, our emphasis was on the mountainous regions of the world, with a particular focus on the Neotropics of Central America and the Hawaiian Islands. Mountain regions are zones where large variations in ecosystems occur due to the strong climate zonation forced by the topography. These areas are particularly susceptible to changes in critical ecological thresholds, and we conducted studies of changes in phenological indicators based on various climatic thresholds.

  15. Recent changes in county-level corn yield variability in the United States from observations and crop models

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Guoyong

    2017-12-01

    The United States is responsible for 35% and 60% of global corn supply and exports, respectively. Enhanced supply stability through a reduction in the year-to-year variability of US corn yield would greatly benefit global food security. Important in this regard is to understand how corn yield variability has evolved geographically over the historical period and how it relates to climatic and non-climatic factors. Results showed that year-to-year variation of US corn yield decreased significantly during 1980-2010, mainly in the Midwest Corn Belt, Nebraska and western arid regions. Despite the country-scale decrease in variability, corn yield variability exhibited an increasing trend in South Dakota, Texas and the Southeast growing regions, indicating the importance of considering spatial scales in estimating yield variability. The observed pattern is only partly reproduced by process-based crop models, which simulate larger areas experiencing increasing variability and underestimate the magnitude of the decreasing variability; 3 out of 11 models even produced a sign of change opposite to the observations. Hence, a statistical model, which produces closer agreement with observations, is used to explore the contribution of climatic and non-climatic factors to the changes in yield variability. It is found that climate variability dominates the trends in corn yield variability in the Midwest Corn Belt, while its control on yield variability is weak in southeastern and western arid regions. Irrigation has largely reduced corn yield variability in regions (e.g. Nebraska) where separate estimates of irrigated and rain-fed corn yield exist, demonstrating the importance of non-climatic factors in governing the changes in corn yield variability. The results highlight the distinct spatial patterns of corn yield variability change as well as its influencing factors at the county scale. I also caution the use of process-based crop models, which have substantially underestimated
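
    One common way to quantify the year-to-year yield variability discussed above is a detrended coefficient of variation in a moving window; the sketch below is a minimal illustration with hypothetical county data and an assumed 11-year window, not the statistical model used in the study.

      import numpy as np
      import pandas as pd

      def yield_variability(years, yields, window=11):
          # Detrended coefficient of variation of yield in a centered moving window.
          trend = np.polyval(np.polyfit(years, yields, 1), years)   # remove the long-term (technology) trend
          series = pd.Series(yields, index=years)
          anomalies = pd.Series(yields - trend, index=years)
          return anomalies.rolling(window, center=True).std() / series.rolling(window, center=True).mean()

      # Hypothetical county yield series (bu/acre), 1980-2010
      rng = np.random.default_rng(1)
      years = np.arange(1980, 2011)
      yields = 90.0 + 1.5 * (years - 1980) + rng.normal(0.0, 8.0, years.size)
      print(yield_variability(years, yields).dropna().round(3))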

  17. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

    Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, thereby introducing a significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7). In contrast, using the area-weighted average method yielded a low correlation (r2 = 0.14) with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
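
    The footprint-weighted average described above amounts to weighting each modelled surface cell by its contribution to the EC footprint before comparing with the measured flux. The sketch below is a minimal illustration with hypothetical flux and footprint arrays; in the real analysis the weights would come from a footprint model evaluated for each averaging period.

      import numpy as np

      def footprint_weighted_flux(modelled_flux, footprint_weight):
          # Footprint-weighted average of modelled plot-scale fluxes.
          # modelled_flux    : 2-D array of modelled methane fluxes on a surface grid
          # footprint_weight : 2-D array of footprint contributions for the same grid
          w = footprint_weight / footprint_weight.sum()   # normalise contributions to 1
          return np.sum(w * modelled_flux)

      # Hypothetical 3x3 grid of wet (high-emitting) and dry (low-emitting) vegetation patches
      flux_model = np.array([[120.0,  15.0,  15.0],
                             [120.0, 120.0,  15.0],
                             [ 15.0,  15.0, 120.0]])
      phi = np.array([[0.05, 0.10, 0.05],
                      [0.10, 0.30, 0.10],
                      [0.05, 0.20, 0.05]])
      print(footprint_weighted_flux(flux_model, phi))    # compare against the measured EC flux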

  18. Application of New Chorus Wave Model from Van Allen Probe Observations in Earth's Radiation Belt Modeling

    Science.gov (United States)

    Wang, D.; Shprits, Y.; Spasojevic, M.; Zhu, H.; Aseev, N.; Drozdov, A.; Kellerman, A. C.

    2017-12-01

    In situ satellite observations, theoretical studies and model simulations suggested that chorus waves play a significant role in the dynamic evolution of relativistic electrons in the Earth's radiation belts. In this study, we developed new wave frequency and amplitude models that depend on magnetic local time (MLT), L-shell, latitude and geomagnetic conditions indexed by Kp for upper-band and lower-band chorus waves, using measurements from the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrument onboard the Van Allen Probes. Utilizing the quasi-linear full diffusion code, we calculated the corresponding diffusion coefficients in each MLT sector (1-hour resolution) for upper-band and lower-band chorus waves according to the newly developed wave models. Compared with former parameterizations of chorus waves, the new parameterizations result in differences in diffusion coefficients that depend on energy and pitch angle. Utilizing the obtained diffusion coefficients, the lifetime of energetic electrons is parameterized accordingly. In addition, to investigate the effects of the obtained diffusion coefficients in different MLT sectors and under different geomagnetic conditions, we performed simulations using the four-dimensional Versatile Electron Radiation Belt code and validated the results against observations.
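
    A wave model of this kind can be viewed as a binned statistical parameterization. The sketch below illustrates the general idea by grouping hypothetical wave-amplitude measurements into (MLT, L-shell, Kp) bins and taking bin medians; the column names, bin edges and the use of medians are illustrative assumptions, not the EMFISIS-based model itself.

      import numpy as np
      import pandas as pd

      def build_wave_model(obs, mlt_edges, l_edges, kp_edges):
          # Median chorus wave amplitude in each (MLT, L-shell, Kp) bin.
          # obs is a DataFrame with placeholder columns 'mlt', 'l_shell', 'kp', 'bw_pt'.
          binned = obs.assign(
              mlt_bin=pd.cut(obs["mlt"], mlt_edges),
              l_bin=pd.cut(obs["l_shell"], l_edges),
              kp_bin=pd.cut(obs["kp"], kp_edges),
          )
          return binned.groupby(["mlt_bin", "l_bin", "kp_bin"], observed=True)["bw_pt"].median()

      # Hypothetical bin edges: 1-hour MLT sectors, 0.5 L-shell steps, three Kp activity levels
      mlt_edges = np.arange(0, 25, 1)
      l_edges = np.arange(3.0, 7.5, 0.5)
      kp_edges = [0, 2, 4, 9]
      # model = build_wave_model(emfisis_df, mlt_edges, l_edges, kp_edges)   # emfisis_df: hypothetical measurement table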

  19. Catapult current sheet relaxation model confirmed by THEMIS observations

    Science.gov (United States)

    Machida, S.; Miyashita, Y.; Ieda, A.; Nose, M.; Angelopoulos, V.; McFadden, J. P.

    2014-12-01

    In this study, we show the result of a superposed epoch analysis of THEMIS probe data during the period from November 2007 to April 2009, setting the origin of the time axis to the substorm onset determined by Nishimura with THEMIS all-sky imager (THEMIS/ASI) data (http://www.atmos.ucla.edu/~toshi/files/paper/Toshi_THEMIS_GBO_list_distribution.xls). We confirmed the presence of earthward flows which can be associated with north-south auroral streamers during the substorm growth phase. At around X = -12 Earth radii (Re), the northward magnetic field and its elevation angle decreased markedly approximately 4 min before substorm onset. A northward magnetic-field increase associated with pre-onset earthward flows was found at around X = -17 Re. This variation indicates the occurrence of local dipolarization. Interestingly, in the region earthwards of X = -18 Re, earthward flows in the central plasma sheet (CPS) were reduced significantly about 3 min before substorm onset. However, the earthward flows were enhanced again at t = -60 s in the region around X = -14 Re, and they moved toward the Earth. At t = 0, the dipolarization of the magnetic field started at X ~ -10 Re, and simultaneously magnetic reconnection started at X ~ -20 Re. Synthesizing these results, we can confirm the validity of our catapult current sheet relaxation model.
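
    Superposed epoch analysis aligns each event on its onset time before averaging across events. A minimal sketch follows; the time arrays, onset list and 3-s grid spacing are illustrative assumptions rather than the exact procedure of the study.

      import numpy as np

      def superposed_epoch(times, values, onsets, window=240.0, dt=3.0):
          # Average a signal over many events relative to their onset times.
          # times, values : 1-D arrays (times in seconds, sorted) of the quantity of interest
          # onsets        : array of event (substorm onset) times in seconds
          # window        : half-width of the epoch window around each onset (s)
          # dt            : spacing of the common epoch-time grid (s)
          epoch_t = np.arange(-window, window + dt, dt)
          stacked = []
          for t0 in onsets:
              # interpolate each event onto the common grid of time-from-onset
              stacked.append(np.interp(epoch_t, times - t0, values, left=np.nan, right=np.nan))
          return epoch_t, np.nanmean(np.array(stacked), axis=0)

      # Hypothetical usage:
      # epoch_time, mean_profile = superposed_epoch(t_obs, bz_elevation, onset_times)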

  20. Benthic boundary layer. IOS observational and modelling programme

    International Nuclear Information System (INIS)

    Saunders, P.M.; Richards, K.J.

    1985-01-01

    Near-bottom currents, measured at three sites in the N.E. Atlantic, reveal the eddying characteristics of the flow. Eddies develop, migrate and decay in ways best revealed by numerical modelling simulations. Eddies control the thickness of the bottom mixed layer by accumulating and thickening, or spreading and thinning, the bottom waters. At the boundaries of eddies, benthic fronts form, providing a path for upward displacement of the bottom water. An experiment designed to estimate vertical diffusivity is performed. The flux of heat into the bottom of the Iberian basin through Discovery Gap is deduced from year-long current measurements. The flux is assumed to be balanced by geothermal heating through the sea floor and diapycnal diffusion in the water. A diffusivity of 1.5 to 4 cm² s⁻¹ is derived for the bottom few hundred meters of the deep ocean. Experiments to estimate horizontal diffusivity are described. If a tracer is discharged from the sea bed, the volume of sea water in which it is found increases with time and after 20 years will fill an ocean basin of side 1000 km to a depth of only 1 to 2 km. (author)

  1. New insights on geomagnetic storms from observations and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory

    2009-01-01

    Understanding the response at Earth of the Sun's varying energy output and forecasting geomagnetic activity is of central interest to space science, since intense geomagnetic storms may cause severe damage to technological systems and affect communications. Episodes of southward (Bzmodel (RAM), and investigate the mechanisms responsible for trapping particles and for causing their loss. We find that periods of increased magnetospheric convection coinciding with enhancements of plasma sheet density are needed for strong ring current buildup. During the HSS-driven storm the convection potential is highly variable and causes small sporadic injections into the ring current. The long period of enhanced convection during the CME-driven storm causes a continuous ring current injection penetrating to lower L shells and stronger ring current buildup.

  2. Reproducibility of neuroimaging analyses across operating systems.

    Science.gov (United States)

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
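
    The Dice coefficient cited above is the overlap measure 2|A∩B|/(|A|+|B|) between two segmentations. The sketch below computes it for one label of two hypothetical label volumes; the array and label names are illustrative only.

      import numpy as np

      def dice_coefficient(label_map_a, label_map_b, label):
          # Dice overlap 2|A∩B| / (|A| + |B|) for a single tissue/structure label.
          a = (label_map_a == label)
          b = (label_map_b == label)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

      # Hypothetical use: seg_linux and seg_other are integer label volumes from the same
      # subject processed on two operating systems; label 10 might denote one subcortical structure.
      # print(dice_coefficient(seg_linux, seg_other, label=10))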

  3. Reproducibility of the water drinking test.

    Science.gov (United States)

    Muñoz, C R; Macias, J H; Hartleben, C

    2015-11-01

    To investigate the reproducibility of the water drinking test in determining intraocular pressure peaks and fluctuation. It has been suggested that there is limited agreement between the water drinking test and the diurnal tension curve. This may be because it has only been compared with a 10-hour modified diurnal tension curve, missing 70% of IOP peaks that occur during the night. This was a prospective, analytical and comparative study that assessed the correlation, agreement, sensitivity and specificity of the water drinking test. The correlation between the water drinking test and the diurnal tension curve was significant and strong (r=0.93, 95% confidence interval 0.79 to 0.96, p<.01). A moderate agreement was observed between these measurements (pc=0.93, 95% confidence interval 0.87 to 0.95, p<.01). The agreement was within ±2 mmHg in 89% of the tests. Our study found a moderate agreement between the water drinking test and the diurnal tension curve, in contrast with the poor agreement found in other studies, possibly due to the absence of nocturnal IOP peaks. These findings suggest that the water drinking test could be used to determine IOP peaks, as well as for determining baseline IOP. Copyright © 2014 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  4. A model for atmospheric brightness temperatures observed by the special sensor microwave imager (SSM/I)

    Science.gov (United States)

    Petty, Grant W.; Katsaros, Kristina B.

    1989-01-01

    A closed-form mathematical model for the atmospheric contribution to the microwave absorption and emission at the SSM/I frequencies is developed in order to improve quantitative interpretation of microwave imagery from the Special Sensor Microwave Imager (SSM/I). The model is intended to accurately predict upwelling and downwelling atmospheric brightness temperatures at SSM/I frequencies as functions of eight input parameters: the zenith (nadir) angle, the integrated water vapor and vapor scale height, the integrated cloud water and cloud height, the effective surface temperature, atmospheric lapse rate, and surface pressure. It is shown that the model accurately reproduces clear-sky brightness temperatures computed by explicit integration of a large number of radiosonde soundings representing all maritime climate zones and seasons.

  5. The Economics of Reproducibility in Preclinical Research.

    Directory of Open Access Journals (Sweden)

    Leonard P Freedman

    2015-06-01

    Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B) per year spent on preclinical research that is not reproducible, in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.

  6. Learning Reproducibility with a Yearly Networking Contest

    KAUST Repository

    Canini, Marco

    2017-08-10

    Better reproducibility of networking research results is currently a major goal that the academic community is striving towards. This position paper makes the case that improving the extent and pervasiveness of reproducible research can be greatly fostered by organizing a yearly international contest. We argue that holding a contest undertaken by a plurality of students will have benefits that are two-fold. First, it will promote hands-on learning of skills that are helpful in producing artifacts at the replicable-research level. Second, it will advance the best practices regarding environments, testbeds, and tools that will aid the tasks of reproducibility evaluation committees by and large.

  7. Thou Shalt Be Reproducible! A Technology Perspective

    Directory of Open Access Journals (Sweden)

    Patrick Mair

    2016-07-01

    This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies.

  8. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    Science.gov (United States)

    Nadiga, B. T.; Urban, N. M.

    2016-12-01

    both ESM experiments and actual observations are presented. One such result points to the importance of direct sequestration of heat below 700 m, a process that is not allowed for in the simple models that have been traditionally used to deduce climate sensitivity.

  9. Evaluation of the CMAQ and GMI Model-Simulated Shape Factors with DISCOVER-AQ Observations with Implications for Retrievals

    Science.gov (United States)

    Flynn, C. M.; Pickering, K. E.; Crawford, J. H.; Weinheimer, A. J.; Diskin, G. S.; Loughner, C.; Strode, S.

    2016-12-01

    model to reproduce the observations. These results demonstrate the importance of resolution for accurate representation of pollutant profiles as a priori information within satellite retrievals, and for the ability to relate column abundances to surface concentrations.

  10. Modelled black carbon radiative forcing and atmospheric lifetime in AeroCom Phase II constrained by aircraft observations

    Science.gov (United States)

    Samset, B. H.; Myhre, G.; Herber, A.; Kondo, Y.; Li, S.-M.; Moteki, N.; Koike, M.; Oshima, N.; Schwarz, J. P.; Balkanski, Y.; Bauer, S. E.; Bellouin, N.; Berntsen, T. K.; Bian, H.; Chin, M.; Diehl, T.; Easter, R. C.; Ghan, S. J.; Iversen, T.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Penner, J. E.; Schulz, M.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Zhang, K.

    2014-11-01

    Atmospheric black carbon (BC) absorbs solar radiation, and exacerbates global warming through exerting positive radiative forcing (RF). However, the contribution of