WorldWideScience

Sample records for source model derived

  1. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Van Luik, A.E.; Williford, R.E.; Doctor, P.G. (Pacific Northwest Lab., Richland, WA; Roy F. Weston, Inc./Rogers and Assoc. Engineering Corp., Rockville, MD)

    1984-01-01

    Part of a strategy for evaluating the compliance of geologic repositories with Federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  2. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Doctor, P.G.; Williford, R.E.; Van Luik, A.E.

    1984-11-01

    Part of a strategy for evaluating the compliance of geologic repositories with federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  3. Computing Pathways in Bio-Models Derived from Bio-Science Text Sources

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Nilsson, Jørgen Fischer

    2015-01-01

    This paper outlines a system, OntoScape, serving to accomplish complex inference tasks on knowledge bases and bio-models derived from life-science text corpora. The system applies so-called natural logic, a form of logic which is readable for humans. This logic affords ontological representations... of complex terms appearing in the text sources. Along with logical propositions, the system applies a semantic graph representation facilitating calculation of bio-pathways. More generally, the system affords means of query answering appealing to general and domain-specific inference rules...

  4. Pathway computation in models derived from bio-science text sources

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Jensen, Per Anker

    2017-01-01

    This paper outlines a system, OntoScape, serving to accomplish complex inference tasks on knowledge bases and bio-models derived from life-science text corpora. The system applies so-called natural logic, a form of logic which is readable for humans. This logic affords ontological representations...

  5. Modeling of negative ion extraction from a magnetized plasma source: Derivation of scaling laws and description of the origins of aberrations in the ion beam

    Science.gov (United States)

    Fubiani, G.; Garrigues, L.; Boeuf, J. P.

    2018-02-01

    We model the extraction of negative ions from a high-brightness, high-power magnetized negative ion source. The model is a Particle-In-Cell (PIC) algorithm with Monte Carlo collisions. The negative ions are generated only on the plasma grid surface (which separates the plasma from the electrostatic accelerator downstream). The scope of this work is to derive scaling laws for the negative ion beam properties versus the extraction voltage (the potential of the first grid of the accelerator) and the plasma density, and to investigate the origins of aberrations in the ion beam. We show that a given value of the negative ion beam perveance correlates rather well with the beam profile on the extraction grid, independent of the simulated plasma density. Furthermore, the extracted beam current may be scaled to any value of the plasma density. The scaling factor must be derived numerically, but the overall gain in computational cost compared to performing a PIC simulation at the real plasma density is significant. Aberrations appear for a meniscus curvature radius of the order of the radius of the grid aperture. These aberrations cannot be cancelled out by switching to a chamfered grid aperture (as in the case of positive ions).
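
    For orientation, the beam perveance that the abstract correlates with the beam profile is conventionally defined from the Child-Langmuir law (a textbook definition, not a formula quoted from the paper):

        P = \frac{I}{V_{\mathrm{ext}}^{3/2}}

    where I is the extracted beam current and V_ext the extraction voltage. Matching perveance, rather than current or voltage separately, is what allows results obtained at one plasma density to be rescaled to another.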

  6. Source model for the 1997 Zirkuh earthquake (MW= 7.2) in Iran derived from JERS and ERS InSAR observations

    KAUST Repository

    Sudhaus, Henriette

    2011-05-01

    We present the first detailed source model of the 1997 M7.2 Zirkuh earthquake that ruptured the entire Abiz fault in East Iran, producing a 125 km long, bent and segmented fault trace. Using SAR data from the ERS and JERS-1 satellites, we first determined a multisegment fault model for this predominantly strike-slip earthquake by estimating fault-segment dip, slip, and rake values using an evolutionary optimization algorithm. We then inverted the InSAR data for variable slip and rake in more detail along the multisegment fault plane. We complement our optimization with importance sampling of the model parameter space to ensure that the derived optimum model has a high likelihood, to detect correlations or trade-offs between model parameters, and to image the model resolution. Our results are in agreement with field observations showing that this predominantly strike-slip earthquake had a clear change in style of faulting along its rupture. In the north we find thrust faulting on a westerly dipping fault accompanying the strike-slip, which changes to thrust faulting on an eastward-dipping fault plane in the south. The centre part of the fault is vertical and has almost pure dextral strike-slip. The heterogeneous fault slip distribution shows two regions of low slip near significant fault step-overs of the Abiz fault; these fault complexities therefore appear to reduce the fault slip. Furthermore, shallow fault slip is generally reduced with respect to slip at depth. This shallow slip deficit varies along the Zirkuh fault, from a small deficit in the north to a much larger deficit along the central part of the fault, a variation that is possibly related to different interseismic repose times.

  7. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  8. Gaussian Plume Model Parameters for Ground-Level and Elevated Sources Derived from the Atmospheric Diffusion Equation in the Neutral and Stable Conditions

    International Nuclear Information System (INIS)

    Essa, K.S.M.

    2009-01-01

    The analytical solution of the atmospheric diffusion equation for a point source gives the ground-level concentration profiles. It depends on the wind speed u and the vertical dispersion coefficient σz, expressed by Pasquill power laws. Both σz and u are functions of downwind distance, stability and source elevation, while for ground-level emission u is constant. In the neutral and stable conditions, the Gaussian plume model and finite-difference numerical methods, with the wind speed following a power law and the vertical dispersion coefficient following an exponential law, are evaluated. This work shows that the ground-level concentrations estimated by the Gaussian model for an elevated source and by the numerical finite-difference method fit the observed ground-level concentrations very well.
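
    As a concrete illustration of the kind of model evaluated above, the following minimal Python sketch computes the classical Gaussian plume ground-level centreline concentration for an elevated point source. The power-law coefficients and all numerical values are illustrative placeholders, not values from the paper:

        import numpy as np

        def ground_level_concentration(Q, u, H, x, a=0.112, b=0.91):
            """Ground-level centreline concentration C(x, 0, 0) for an elevated
            point source (standard Gaussian plume with total ground reflection).
            Q: emission rate (g/s), u: wind speed (m/s), H: effective source
            height (m), x: downwind distance (m). sigma_y and sigma_z follow
            illustrative Pasquill-type power laws sigma = a * x**b."""
            sigma_y = a * x**b
            sigma_z = a * x**b
            return (Q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-H**2 / (2.0 * sigma_z**2))

        # Example: 10 g/s source, 5 m/s wind, 50 m effective stack height, 2 km downwind
        print(ground_level_concentration(10.0, 5.0, 50.0, 2000.0))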

  9. Distinct transmissibility features of TSE sources derived from ruminant prion diseases by the oral route in a transgenic mouse model (TgOvPrP4) overexpressing the ovine prion protein.

    Directory of Open Access Journals (Sweden)

    Jean-Noël Arsac

    Full Text Available Transmissible spongiform encephalopathies (TSEs) are a group of fatal neurodegenerative diseases associated with a misfolded form of the host-encoded prion protein (PrP). Some of them, such as classical bovine spongiform encephalopathy (BSE) in cattle, transmissible mink encephalopathy (TME), kuru and variant Creutzfeldt-Jakob disease in humans, are acquired by oral exposure to infected tissues. We investigated the possible transmission by the oral route of a panel of strains derived from ruminant prion diseases in a transgenic mouse model (TgOvPrP4) overexpressing the ovine prion protein (A136R154Q171) under the control of the neuron-specific enolase promoter. Sources derived from Nor98, CH1641 or 87V scrapie sources, as well as sources derived from L-type BSE or cattle-passaged TME, failed to transmit by the oral route, whereas those derived from classical BSE and classical scrapie were successfully transmitted. Apart from a possible effect of the passage history of the TSE agent in the inocula, this implied the occurrence of subtle molecular changes in the protease-resistant prion protein (PrPres) following oral transmission, which raises concerns about our ability to correctly identify sheep that might be orally infected by the BSE agent in the field. Our results provide proof of principle that transgenic mouse models can be used to examine the transmissibility of TSE agents by the oral route, providing novel insights regarding the pathogenesis of prion diseases.

  10. Photovoltaic sources modeling

    CERN Document Server

    Petrone, Giovanni; Spagnuolo, Giovanni

    2016-01-01

    This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.
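
    Among the cell-level descriptions such a survey typically covers, the single-diode model is the most common; one standard formulation (shown here for orientation, not quoted from the book) is

        I = I_{ph} - I_0 \left[ \exp\!\left( \frac{V + I R_s}{n V_t} \right) - 1 \right] - \frac{V + I R_s}{R_{sh}}

    where I_ph is the photo-generated current, I_0 the diode saturation current, n the ideality factor, V_t the thermal voltage, and R_s and R_sh the series and shunt resistances. The equation is implicit in I, which is why circuit-level PV models are usually solved iteratively.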

  11. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  12. Estuarine Bathymetric Digital Elevation Models (30 meter and 3 arc second resolution) Derived From Source Hydrographic Survey Soundings Collected by NOAA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These Bathymetric Digital Elevation Models (DEM) were generated from original point soundings collected during hydrographic surveys conducted by the National Ocean...

  13. Source model for the 1997 Zirkuh earthquake (MW= 7.2) in Iran derived from JERS and ERS InSAR observations

    KAUST Repository

    Sudhaus, Henriette; Jonsson, Sigurjon

    2011-01-01

    … and to image the model resolution. Our results are in agreement with field observations showing that this predominantly strike-slip earthquake had a clear change in style of faulting along its rupture. In the north we find that thrust faulting on a westerly...

  14. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    Full Text Available This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real network testing. The model is derived from known recorded traffic sources that are analysed and statistically processed. As the results show, the proposed model produces network traffic parameters very similar to those of the known traffic source when used in a simulated network.

  15. Marine-derived fungi as a source of proteases

    Digital Repository Service at National Institute of Oceanography (India)

    Kamat, T.; Rodrigues, C.; Naik, C.G.

    … of marine-derived fungi in order to identify the potential sources. Sponges and corals were collected by SCUBA diving, from a depth of 8 to 10 m, from the coastal waters of Mandapam, Tamil Nadu (9°16' N; 79°11' E). The samples comprised a soft coral Sinularia... pieces of approximately 2x2 cm were cut out aseptically. These fourteen pieces of each organism were subjected to two different treatments. In the first case seven pieces were vortexed four times, for 20 seconds each, with sterile seawater while...

  16. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    Full Text Available This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites, which in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive.
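
    For readers unfamiliar with RRM, one common specification (a Chorus-type formulation; the paper's exact model may differ) assigns alternative i the regret

        R_i = \sum_{j \neq i} \sum_{m} \ln\!\left( 1 + e^{\beta_m (x_{jm} - x_{im})} \right)

    where x_{im} is the value of attribute m for alternative i and β_m its estimated weight; the chosen alternative minimises R_i, so regret accumulates whenever a foregone alternative j beats i on some attribute.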

  17. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution for obtaining underlying business process models from existing information systems. Because not all information can be automatically derived from source code (e.g., consider manual activities), such business process models...

  18. A 'simple' hybrid model for power derivatives

    International Nuclear Information System (INIS)

    Lyle, Matthew R.; Elliott, Robert J.

    2009-01-01

    This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)
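
    A generic example of the mean-reverting specification mentioned above (an Ornstein-Uhlenbeck process; the paper's exact supply-stack formulation may differ) is

        dX_t = \kappa (\mu - X_t)\,dt + \sigma\,dW_t

    where κ sets the speed of reversion to the long-run level μ and σ the volatility; under dynamics of this kind, European option prices retain closed, Black-Scholes-like forms, which is what makes the approach computationally efficient.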

  19. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

    The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an...

  19. Price models for oil derivatives in Slovenia

    International Nuclear Information System (INIS)

    Nemac, F.; Saver, A.

    1995-01-01

    In Slovenia, a law is currently in force under which any change in the price of oil derivatives is subject to Government approval. With the goal of moving closer to the European Union, it has become necessary to find ways to introduce liberalization, or an automated approach to price adjustment that follows oscillations of oil derivative prices on the world market and the exchange rate of the American dollar. It is for this reason that, at the Agency for Energy Restructuring, we carried out a study on this issue for the Ministry of Economic Affairs and Development. We analysed possible models for the formation of oil derivative prices in Slovenia. Based on an assessment of the experiences of primarily the west European countries, we proposed three models for price formation in Slovenia. The Government of the Republic of Slovenia is expected to select one of the proposed models, to be followed by price liberalization. The paper presents two representative models for price formation as used in Austria and Portugal. The authors then analyse the application of the three models that they find suitable for use in Slovenia. (author)

  1. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal-fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion with distance from the source that are similar to ambient-based estimates. The model was less consistent in capturing downwind ambient-based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but was often lower than ambient-based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources, which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci...

  2. Modeling of heat conduction via fractional derivatives

    Science.gov (United States)

    Fabrizio, Mauro; Giorgi, Claudio; Morro, Angelo

    2017-09-01

    The modeling of heat conduction is considered by letting the time derivative in the Cattaneo-Maxwell equation be replaced by a derivative of fractional order. The purpose of this new approach is to overcome some drawbacks of the Cattaneo-Maxwell equation, for instance possible fluctuations which violate the non-negativity of the absolute temperature. Consistency with thermodynamics is shown to hold for a suitable free energy potential that is in fact a functional of the summed history of the heat flux, subject to a suitable restriction on the set of admissible histories. Compatibility with wave propagation at a finite speed is investigated in connection with temperature-rate waves. It follows that, while this is the case for the Cattaneo-Maxwell equation as expected, the model involving the fractional derivative does not allow propagation at a finite speed. Nevertheless, this new model provides a good description of wave-like profiles in thermal propagation phenomena, whereas Fourier's law does not.
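
    For context, the Cattaneo-Maxwell law and a fractional generalisation of the kind discussed above can be written schematically as (a representative form; the paper's precise definition of the fractional derivative may differ)

        \tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\kappa \nabla \theta
        \quad\longrightarrow\quad
        \tau^{\alpha} \frac{\partial^{\alpha} \mathbf{q}}{\partial t^{\alpha}} + \mathbf{q} = -\kappa \nabla \theta, \qquad 0 < \alpha \le 1,

    where q is the heat flux, θ the temperature and τ a relaxation time; α = 1 recovers the Cattaneo-Maxwell equation with its finite-speed temperature-rate waves, which are lost for α < 1.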

  3. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
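
    A minimal sketch of the covariance-based rupture-generator idea (an illustration under assumed statistics, not the authors' code): draw a random slip distribution whose 2-point statistics follow a prescribed autocorrelation by multiplying white noise with a Cholesky factor of the covariance matrix.

        import numpy as np

        n, dx, corr_len = 64, 1.0, 10.0      # grid points, spacing (km), correlation length (km)
        x = np.arange(n) * dx

        # Exponential autocorrelation model (one common choice; the paper extracts
        # its target statistics from dynamically derived source models instead)
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for numerical stability

        mean_slip, sigma_slip = 1.0, 0.5     # illustrative 1-point statistics (m)
        slip = mean_slip + sigma_slip * (L @ np.random.standard_normal(n))
        print(slip.round(2))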

  4. Deriving profiles of incident and scattered neutrons for TOF experiments with the spallation sources

    International Nuclear Information System (INIS)

    Watanabe, Hidehiro

    1993-01-01

    A formula that closely matches the incident profile of epi-thermal and thermal neutrons for time of flight experiments carried out with a spallation neutron source and moderator scheme is derived based on the slowing-down and diffusing-out processes in a moderator. This analytical description also enables us to predict burst-function profiles; these profiles are verified by a comparison with a diffraction pattern. The limits of the analytical model are discussed through the predictable peak position shift brought about by the slowing-down process. (orig.)

  5. Xiphoid Process-Derived Chondrocytes: A Novel Cell Source for Elastic Cartilage Regeneration

    Science.gov (United States)

    Nam, Seungwoo; Cho, Wheemoon; Cho, Hyunji; Lee, Jungsun

    2014-01-01

    Reconstruction of elastic cartilage requires a source of chondrocytes that display a reliable differentiation tendency. Predetermined tissue progenitor cells are ideal candidates for meeting this need; however, it is difficult to obtain donor elastic cartilage tissue because most elastic cartilage serves important functions or forms external structures, making these tissues indispensable. We found vestigial cartilage tissue in xiphoid processes and characterized it as hyaline cartilage in the proximal region and elastic cartilage in the distal region. Xiphoid process-derived chondrocytes (XCs) showed superb in vitro expansion ability based on colony-forming unit fibroblast assays, cell yield, and cumulative cell growth. On induction of differentiation into mesenchymal lineages, XCs showed a strong tendency toward chondrogenic differentiation. An examination of the tissue-specific regeneration capacity of XCs in a subcutaneous-transplantation model and autologous chondrocyte implantation model confirmed reliable regeneration of elastic cartilage regardless of the implantation environment. On the basis of these observations, we conclude that xiphoid process cartilage, the only elastic cartilage tissue source that can be obtained without destroying external shape or function, is a source of elastic chondrocytes that show superb in vitro expansion and reliable differentiation capacity. These findings indicate that XCs could be a valuable cell source for reconstruction of elastic cartilage. PMID:25205841

  6. Learning models for multi-source integration

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, S.; Knoblock, C.A.; Minton, S. [Univ. of Southern California/ISI, Marina del Rey, CA (United States)

    1996-12-31

    Because of the growing number of information sources available through the internet, there are many cases in which the information needed to solve a problem or answer a question is spread across several information sources. For example, when given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model, providing the user a single interface to multiple sources.

  7. Biota Modeling in EPA's Preliminary Remediation Goal and Dose Compliance Concentration Calculators for Use in EPA Superfund Risk Assessment: Explanation of Intake Rate Derivation, Transfer Factor Compilation, and Mass Loading Factor Sources

    Energy Technology Data Exchange (ETDEWEB)

    Manning, Karessa L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dolislager, Fredrick G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bellamy, Michael B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening-level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify the isotopes contributing the highest risk and dose as well as to establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil-to-plant interactions require that updates be implemented in the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil-to-tissue transfer factors (TFs) for animals and soil-to-plant transfer factors (BVs). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.

  8. Biota Modeling in EPA's Preliminary Remediation Goal and Dose Compliance Concentration Calculators for Use in EPA Superfund Risk Assessment: Explanation of Intake Rate Derivation, Transfer Factor Compilation, and Mass Loading Factor Sources

    International Nuclear Information System (INIS)

    Manning, Karessa L.; Dolislager, Fredrick G.; Bellamy, Michael B.

    2016-01-01

    The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening-level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify the isotopes contributing the highest risk and dose as well as to establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil-to-plant interactions require that updates be implemented in the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil-to-tissue transfer factors (TFs) for animals and soil-to-plant transfer factors (BVs). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.

  9. Photovoltaic sources modeling and emulation

    CERN Document Server

    Piazza, Maria Carmela Di

    2012-01-01

    This book offers an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters, which will aid in understanding and improving the design and setup of new PV plants.

  10. Partial podocyte replenishment in experimental FSGS derives from nonpodocyte sources.

    Science.gov (United States)

    Kaverina, Natalya V; Eng, Diana G; Schneider, Remington R S; Pippin, Jeffrey W; Shankland, Stuart J

    2016-06-01

    The current studies used genetic fate mapping to prove that adult podocytes can be partially replenished following depletion. Inducible NPHS2-rtTA/tetO-Cre/RS-ZsGreen-R reporter mice were generated to permanently label podocytes with the ZsGreen reporter. Experimental focal segmental glomerulosclerosis (FSGS) was induced with a cytotoxic podocyte antibody. On FSGS day 7, immunostaining for the podocyte markers p57, synaptopodin, and podocin were markedly decreased by 44%, and this was accompanied by a decrease in ZsGreen fluorescence. The nuclear stain DAPI was absent in segments of reduced ZsGreen and podocyte marker staining, which is consistent with podocyte depletion. Staining for p57, synaptopodin, podocin, and DAPI increased at FSGS day 28 and was augmented by the ACE inhibitor enalapril, which is consistent with a partial replenishment of podocytes. In contrast, ZsGreen fluorescence did not return and remained significantly low at day 28, indicating replenishment was from a nonpodocyte origin. Despite administration of bromodeoxyuridine (BrdU) thrice weekly throughout the course of disease, BrdU staining was not detected in podocytes, which is consistent with an absence of proliferation. Although ZsGreen reporting was reduced in the tuft at FSGS day 28, labeled podocytes were detected along the Bowman's capsule in a subset of glomeruli, which is consistent with migration from the tuft. Moreover, more than half of the migrated podocytes coexpressed the parietal epithelial cell (PEC) proteins claudin-1, SSeCKS, and PAX8. These results show that although podocytes can be partially replenished following abrupt depletion, a process augmented by ACE inhibition, the source or sources are nonpodocyte in origin and are independent of proliferation. Furthermore, a subset of podocytes migrate to the Bowman's capsule and begin to coexpress PEC markers. Copyright © 2016 the American Physiological Society.

  11. A tracer diffusion model derived from microstructure

    International Nuclear Information System (INIS)

    Lehikoinen, Jarmo; Muurinen, Arto; Olin, Markus

    2012-01-01

    … of reference, is shown to be given by the ratio of the effective diffusivity to the apparent diffusivity for an assumed non-interacting solute, such as tritiated water. Finally, the utility of the model and the derivation of the model parameters are demonstrated with tracer diffusion data from the open literature for compacted bentonite. (authors)

  12. An analytic uranium sources model

    International Nuclear Information System (INIS)

    Singer, C.E.

    2001-01-01

    This document presents a method for estimating uranium resources as a continuous function of extraction costs and describing the uncertainty in the resulting fit. The estimated functions provide convenient extrapolations of currently available data on uranium extraction cost and can be used to predict the effect of resource depletion on future uranium supply costs. As such, they are a useful input for economic models of the nuclear energy sector. The method described here pays careful attention to minimizing built-in biases in the fitting procedure and defines ways to describe the uncertainty in the resulting fits in order to render the procedure and its results useful to the widest possible variety of potential users. (author)
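
    A minimal sketch of the kind of fit the abstract describes, with a made-up functional form and made-up data (the paper's actual model and bias-minimising procedure are more careful):

        import numpy as np
        from scipy.optimize import curve_fit

        cost = np.array([40.0, 80.0, 130.0, 260.0])      # extraction cost, $/kgU (made up)
        resource = np.array([3.0, 5.7, 10.6, 17.1])      # cumulative resource, Mt U (made up)

        def model(c, r0, k):
            # cumulative resource recoverable below cost c, as a simple power law
            return r0 * (c / 40.0) ** k

        params, cov = curve_fit(model, cost, resource)
        print(params)                  # point estimates (r0, k)
        print(np.sqrt(np.diag(cov)))   # 1-sigma uncertainties describing the fit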

  13. Coda-derived source spectra, moment magnitudes and energy-moment scaling in the western Alps

    Science.gov (United States)

    Morasca, P.; Mayeda, K.; Malagnini, L.; Walter, William R.

    2005-01-01

    A stable estimate of the earthquake source spectra in the western Alps is obtained using an empirical method based on coda envelope amplitude measurements described by Mayeda et al. for events ranging between MW ~1.0 and ~5.0. Path corrections for consecutive narrow frequency bands ranging between 0.3 and 25.0 Hz were included using a simple 1-D model for five three-component stations of the Regional Seismic network of Northwestern Italy (RSNI). The 1-D assumption performs well, even though the region is characterized by a complex structural setting involving strong lateral variations in the Moho depth. For frequencies less than 1.0 Hz, we tied our dimensionless, distance-corrected coda amplitudes to an absolute scale in units of dyne cm by using independent moment magnitudes from long-period waveform modelling for three moderate magnitude events in the region. For the higher frequencies, we used small events as empirical Green's functions, with corner frequencies above 25.0 Hz. For each station, the procedure yields frequency-dependent corrections that account for site effects, including those related to fmax, as well as for S-to-coda transfer function effects. After the calibration was completed, the corrections were applied to the entire data set composed of 957 events. Our findings using the coda-derived source spectra are summarized as follows: (i) we derived stable estimates of seismic moment, M0 (and hence MW), as well as radiated S-wave energy (ES), from waveforms recorded by as few as one station, for events that were too small to be waveform modelled (i.e. events less than MW ~3.5); (ii) the source spectra were used to derive an equivalent local magnitude, ML(coda), that is in excellent agreement with the network-averaged values using direct S waves; (iii) scaled energy, ER/M0, where ER is the radiated seismic energy, is comparable to results from other tectonically active regions (e.g. western USA, Japan) and supports the idea that there is a fundamental...
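
    The conversion from seismic moment to moment magnitude implied by the dyne cm units above is the standard Hanks-Kanamori relation (general seismological background, not a formula specific to this paper):

        M_W = \tfrac{2}{3} \log_{10} M_0 - 10.7, \qquad M_0 \text{ in dyne cm},

    and the scaled energy compared across regions is the ratio E_R/M_0 of radiated seismic energy to seismic moment.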

  14. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and the distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
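
    As one concrete example of a simplified heat-source description of the kind reviewed above, a surface Gaussian flux distribution for arc welding can be written as (a generic textbook form, not a formulation attributed to this chapter)

        q(r) = \frac{\eta Q}{2\pi\sigma^2}\, e^{-r^2 / (2\sigma^2)}

    where Q is the nominal power, η the process efficiency, σ the distribution width and r the distance from the arc centre; integrating q over the surface recovers the absorbed power ηQ.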

  15. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes the design, development and applications of the open source energy system model Balmorel, which is the result... of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications... of Balmorel, as well as to inspire new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse...

  16. Faster universal modeling for two source classes

    NARCIS (Netherlands)

    Nowbakht, A.; Willems, F.M.J.; Macq, B.; Quisquater, J.-J.

    2002-01-01

    The Universal Modeling algorithms proposed in [2] for two general classes of finite-context sources are reviewed. The above methods were constructed by viewing a model structure as a partition of the context space and realizing that a partition can be reached through successive splits. Here we start...

  17. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    … called ForSyDe. ForSyDe is available under the open source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system-level modeling of a simple industrial use case, and we...

  18. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates

  19. A model for superluminal radio sources

    International Nuclear Information System (INIS)

    Milgrom, M.; Bahcall, J.N.

    1977-01-01

    A geometrical model for superluminal radio sources is described. Six predictions that can be tested by observations are summarized. The results are in agreement with all the available observations. In this model, the Hubble constant is the only numerical parameter that is important in interpreting the observed rates of change of angular separations for small redshifts. The available observations imply that H0 is less than 55 km/s/Mpc if the model is correct. (author)
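
    For background (textbook kinematics, not this paper's specific geometrical construction), apparent superluminal separation speeds arise for a source moving at speed βc at an angle θ to the line of sight:

        \beta_{\mathrm{app}} = \frac{\beta \sin\theta}{1 - \beta \cos\theta}

    which exceeds 1 for β close to 1 and small θ; observed angular separation rates convert to β_app through a distance, which is why the Hubble constant enters the interpretation at small redshifts.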

  20. Air quality dispersion models from energy sources

    International Nuclear Information System (INIS)

    Lazarevska, Ana

    1996-01-01

    Along with the continuing development of new air quality models that cover more complex problems, the Clean Air Act, legislated by the US Congress, encouraged consistency and standardization of air quality model applications. As a result, the Guidelines on Air Quality Models were published; these are regularly reviewed by the Office of Air Quality Planning and Standards, EPA. These guidelines provide a basis for estimating the air quality concentrations used in assessing control strategies as well as in defining emission limits. This paper presents a review and analysis of the recent versions of the models: Simple Terrain Stationary Source Model; Complex Terrain Dispersion Model; Ozone, Carbon Monoxide and Nitrogen Dioxide Models; Long Range Transport Model; Other Phenomena Models: Fugitive Dust/Fugitive Emissions, Particulate Matter, Lead, Air Pathway Analyses - Air Toxics as well as Hazardous Waste. 8 refs., 4 tabs., 2 ills

  1. Vulnerable Derivatives and Good Deal Bounds: A Structural Model

    DEFF Research Database (Denmark)

    Murgoci, Agatha

    2013-01-01

    We price vulnerable derivatives -- i.e. derivatives where the counterparty may default. These are basically the derivatives traded on the over-the-counter (OTC) markets. Default is modeled in a structural framework. The technique employed for pricing is good deal bounds (GDBs). The method imposes...

  2. Source Rupture Process of the 2016 Kumamoto Prefecture, Japan, Earthquake Derived from Near-Source Strong-Motion Records

    Science.gov (United States)

    Zheng, A.; Zhang, W.

    2016-12-01

    On 15 April 2016, a great earthquake with magnitude Mw 7.1 occurred in Kumamoto prefecture, Japan. The focal mechanism solution released by F-net located the hypocenter at 130.7630°E, 32.7545°N, at a depth of 12.45 km, and the strike, dip, and rake angle of the fault were N226°E, 84° and -142°, respectively. The epicenter distribution and focal mechanisms of aftershocks implied that the mechanism of the mainshock might have changed during the source rupture process, so a single focal mechanism was not enough to explain the observed data adequately. In this study, based on the inversion result of GNSS and InSAR surface deformation with active structures for reference, we construct a finite fault model with focal mechanism changes, and derive the source rupture process by a multi-time-window linear waveform inversion method using the strong-motion data (0.05-1.0 Hz) obtained by K-NET and KiK-net of Japan. Our result shows that the Kumamoto earthquake was a right-lateral strike-slip rupture event along the Futagawa-Hinagu fault zone, and that the seismogenic fault is divided into a northern segment and a southern one. The strike and dip of the northern segment are N235°E and 60°, respectively; for the southern one, they are N205°E and 72°. The depth range of the fault model is consistent with the depth distribution of aftershocks, and the slip on the fault plane mainly concentrates on the northern segment, where the maximum slip is about 7.9 m. The rupture process of the whole fault lasts approximately 18 s, and the total seismic moment released is 5.47×10^19 N·m (Mw 7.1). In addition, the essential features of the distribution of PGV and PGA synthesized from the inversion result are similar to those of the observed PGA and seismic intensity.

  3. BRIEF COMMENTS REGARDING THE INDIRECT (OR DERIVED) SOURCES OF LABOR LAW

    OpenAIRE

    Brîndușa Vartolomei

    2015-01-01

    In the field of the law governing legal work relations, one of the features that contributes to defining the autonomy of labor law is the existence of specific sources of law, consisting of regulations on the functioning of the employer, internal regulations, collective labor agreements, and instructions regarding safety and health at work. In addition, in the practical field of labor relations, some indirect (or derived) sources of law were also pointed out ...

  4. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues which are closely related here, because the temperature fields being processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to best reconstruct the heat source fields. The influence of both the dimension and the level of a localised heat source is discussed. The obtained results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium-alloy plate. Heat sources are generated with an electric heating patch glued to the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The obtained results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
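
    A minimal sketch of the convolution step described above, using second derivatives of a Gaussian to estimate the Laplacian of a noisy field (the field, filter width and units are assumptions, not the paper's settings):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        T = 0.02 * np.random.standard_normal((128, 128))  # stand-in noisy temperature field (K)
        sigma = 4.0   # filter width in pixels; must be tuned against the noise level

        # order=2 along an axis convolves with the second derivative of the Gaussian
        d2T_dx2 = gaussian_filter(T, sigma=sigma, order=(0, 2))
        d2T_dy2 = gaussian_filter(T, sigma=sigma, order=(2, 0))
        laplacian = d2T_dx2 + d2T_dy2     # divide by (pixel size)**2 for physical units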

  5. Heuristic derivation of the Rossi-alpha formula for a pulsed neutron source

    International Nuclear Information System (INIS)

    Baeten, P.

    2004-01-01

    Expressions for the Rossi-alpha distribution for a pulsed neutron source were derived using a heuristic derivation based on the method of joint detection probability. This heuristic technique was chosen over the more rigorous master-equation method due to its simplicity and the complementarity of the two techniques. The derived equations also take into account the presence of delayed neutrons and intrinsic neutron sources, which often cannot be neglected in source-driven subcritical cores. The obtained expressions showed that the ratio of the correlated to the uncorrelated signal in the Rossi-Alpha distribution for a Pulsed Source (RAPS) is strongly increased compared to the standard Rossi-alpha distribution for a continuous source. It was also demonstrated that with the RAPS technique four independent measurement quantities can be determined, instead of three with the standard Rossi-alpha technique. Hence, it is no longer necessary to combine the Rossi-alpha technique with another method to measure the reactivity expressed in dollars. Both properties, the increased signal-to-noise ratio of the correlated signal and the measurement of a fourth quantity, make the RAPS technique an excellent candidate for the measurement of kinetic parameters in source-driven subcritical assemblies.
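
    For reference, the classical Rossi-alpha distribution for a continuous source, which the paper generalises to a pulsed source, has the form (standard point-kinetics background, not the paper's final RAPS expression)

        P(\tau) = C + A\, e^{-\alpha \tau}, \qquad \alpha = \frac{\beta_{\mathrm{eff}} - \rho}{\Lambda},

    where the constant C is the uncorrelated (accidental) coincidence rate, the exponential term is the correlated fission-chain contribution, and α is the prompt-neutron decay constant; the RAPS ratio of correlated to uncorrelated signal discussed above improves on the continuous-source ratio A/C.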

  6. Derivation of the source term, dose results and associated radiological consequences for the Greek Research Reactor – 1

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, Charalampos, E-mail: chpappas@ipta.demokritos.gr; Ikonomopoulos, Andreas; Sfetsos, Athanasios; Andronopoulos, Spyros; Varvayanni, Melpomeni; Catsaros, Nicolas

    2014-07-01

    Highlights: • Source term derivation of postulated accident sequences in a research reactor. • Various containment ventilation scenarios considered for source term calculations. • Source term parametric analysis performed in case of lack of ventilation. • JRODOS employed for dose calculations under eighteen modeled scenarios. • Estimation of radiological consequences during typical and adverse weather scenarios. - Abstract: The estimated source term, dose results and radiological consequences of selected accident sequences in the Greek Research Reactor – 1 are presented and discussed. A systematic approach has been adopted to perform the necessary calculations in accordance with the latest computational developments and IAEA recommendations. Loss-of-coolant, reactivity insertion and fuel channel blockage accident sequences have been selected to derive the associated source terms under three distinct containment ventilation scenarios. Core damage has been conservatively assessed for each accident sequence while the ventilation has been assumed to function within the efficiency limits defined at the Safety Analysis Report. In case of lack of ventilation a parametric analysis is also performed to examine the dependency of the source term on the containment leakage rate. A typical as well as an adverse meteorological scenario have been defined in the JRODOS computational platform in order to predict the effective, lung and thyroid doses within a region defined by a 15 km radius downwind from the reactor building. The radiological consequences of the eighteen scenarios associated with the accident sequences are presented and discussed.

  7. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    Science.gov (United States)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) derives from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between the path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event region-to-station pairs, each of which shares similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects, and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural log record spectrum as the sum of its natural log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event*site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the...
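
    A minimal sketch of the Andrews-style spectral separation at a single frequency (synthetic data; the constraint used here, zeroing one site term, stands in for the paper's reference-site or Brune-spectrum constraint):

        import numpy as np

        n_ev, n_st = 5, 3
        rng = np.random.default_rng(0)
        true_ev = rng.normal(0.0, 1.0, n_ev)     # ln event spectra (synthetic)
        true_st = rng.normal(0.0, 0.3, n_st)     # ln site spectra (synthetic)

        rows, data = [], []
        for i in range(n_ev):
            for j in range(n_st):
                r = np.zeros(n_ev + n_st)
                r[i], r[n_ev + j] = 1.0, 1.0     # ln O_ij = ln E_i + ln S_j
                rows.append(r)
                data.append(true_ev[i] + true_st[j] + rng.normal(0.0, 0.05))

        # Fix one site term to zero to remove the event/site trade-off
        rows.append(np.eye(1, n_ev + n_st, n_ev)[0])
        data.append(0.0)

        m, *_ = np.linalg.lstsq(np.array(rows), np.array(data), rcond=None)
        print(m[:n_ev])   # recovered ln event terms (shifted by the constraint)
        print(m[n_ev:])   # recovered ln site terms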

  8. A new source of mesenchymal stem cells for articular cartilage repair: MSCs derived from mobilized peripheral blood share similar biological characteristics in vitro and chondrogenesis in vivo as MSCs from bone marrow in a rabbit model.

    Science.gov (United States)

    Fu, Wei-Li; Zhou, Chun-Yan; Yu, Jia-Kuo

    2014-03-01

    Bone marrow (BM) has been considered as a major source of mesenchymal stem cells (MSCs), but it has many disadvantages in clinical application. However, MSCs from peripheral blood (PB) could be obtained by a less invasive method and be more beneficial for autologous transplantation than BM MSCs, which makes PB a promising source for articular cartilage repair in clinical use. To assess whether MSCs from mobilized PB of New Zealand White rabbits have similar biological characteristics in vitro and chondrogenesis in vivo as BM MSCs. Controlled laboratory study. A combined method of drug administration containing granulocyte colony stimulating factor (G-CSF) plus CXCR4 antagonist AMD3100 was adopted to mobilize the PB stem cells of adult New Zealand White rabbits in vitro. The isolated cells were identified as MSCs by morphological characteristics, surface markers, and differentiation potentials. A comparison between PB MSCs and BM MSCs was made in terms of biological characteristics in vitro and chondrogenesis in vivo. This issue was investigated from the aspects of morphology, immune phenotype, multiple differentiation capacity, expansion potential, antiapoptotic capacity, and ability to repair cartilage defects in vivo of PB MSCs compared with BM MSCs. Peripheral blood MSCs were successfully mobilized by the method of combined drug administration, then isolated, expanded, and identified in vitro. No significant difference was found concerning the morphology, immune phenotype, and antiapoptotic capacity between PB MSCs and BM MSCs. Significantly, MSCs from both sources compounded with decalcified bone matrix showed the same ability to repair cartilage defects in vivo. For multipluripotency, BM MSCs exhibited a more osteogenic potential and higher proliferation capacity than PB MSCs, whereas PB MSCs possessed a stronger adipogenic and chondrogenic differentiation potential than BM MSCs in vitro. Although there are some differences in the proliferation and

  9. Remarks on the microscopic derivation of the collective model

    International Nuclear Information System (INIS)

    Toyoda, T.; Wildermuth, K.

    1984-01-01

    The rotational part of the phenomenological collective model of Bohr and Mottelson and others is derived microscopically, starting with the Schrödinger equation written in projection form and introducing a new set of 'relative Euler angles'. In order to derive the local Schrödinger equation of the collective model, it is assumed that the intrinsic wave functions give strong peaking properties to the overlapping kernels

  10. Bone marrow-derived versus parenchymal sources of inducible nitric oxide synthase in experimental autoimmune encephalomyelitis

    DEFF Research Database (Denmark)

    Zehntner, Simone P; Bourbonniere, Lyne; Hassan-Zahraee, Mina

    2004-01-01

    These discrepancies may reflect a balance between immunoregulatory and neurocytopathologic roles for NO. We investigated selective effects of bone marrow-derived versus CNS parenchymal sources of iNOS in EAE in chimeric mice. Chimeras that selectively expressed or ablated iNOS in leukocytes both showed significant

  11. Local discrete symmetries from superstring derived models

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1996-10-01

    Discrete and global symmetries play an essential role in many extensions of the Standard Model, for example, to preserve the proton lifetime, to prevent flavor-changing neutral currents, etc. An important question is how such symmetries can survive in a theory of quantum gravity, like superstring theory. In a specific string model the author illustrates how local discrete symmetries may arise in string models and play an important role in preventing fast proton decay and flavor-changing neutral currents. The local discrete symmetry arises due to the breaking of the non-Abelian gauge symmetries by Wilson lines in the superstring models and forbids, to all orders of nonrenormalizable terms, operators such as the dimension-five operators which mediate rapid proton decay. In the context of models of unification of the gauge and gravitational interactions, it is precisely this type of local discrete symmetry that must be found in order to ensure that a given model is not in conflict with experimental observations

  12. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines, were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
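
    Inverse transform sampling, the technique the VSM uses to draw positions and energies from the PSF-derived PDFs, can be sketched as follows; the histogrammed "phase space" energies here are synthetic stand-ins, and the code illustrates the technique rather than the published implementation.

```python
import numpy as np

# Illustration of inverse transform sampling from a histogrammed PDF. The
# "PSF" energies below are synthetic, not from a real phase space file.
rng = np.random.default_rng(1)
psf_energies = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # fake PSF data
hist, edges = np.histogram(psf_energies, bins=200, density=True)

# Build the cumulative distribution on the bin edges, then invert it by
# interpolation: a uniform variate u maps to the energy E with CDF(E) = u.
cdf = np.concatenate([[0.0], np.cumsum(hist * np.diff(edges))])
cdf /= cdf[-1]
u = rng.uniform(size=10)
sampled = np.interp(u, cdf, edges)
print("sampled energies:", np.round(sampled, 3))
```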

  13. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Directory of Open Access Journals (Sweden)

    Obioma Nwankwo

    Full Text Available To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines, were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  14. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Full Text Available Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  15. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate-volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuel, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel / jet fuel derived from natural gas, gasoline and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuel (this work: gasoline, Fischer–Tropsch fuels, jet fuel, diesel) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility

  16. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Full Text Available Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

    Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle, and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources, and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful
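
    The multi-term least-squares fitting with a priori constraints described above can be illustrated with a toy linear inversion; the forward operator, data and constraint weight below are synthetic stand-ins for the GOCART-based system.

```python
import numpy as np

# Toy sketch of multi-term least squares with an a priori smoothness
# constraint. The forward operator K, data y and constraint weight gamma are
# invented stand-ins, not the paper's operators.
rng = np.random.default_rng(2)
n = 50                               # unknown source strengths
K = rng.uniform(size=(80, n))        # assumed linearized transport operator
x_true = np.exp(-0.5 * ((np.arange(n) - 25) / 4.0) ** 2)
y = K @ x_true + 0.01 * rng.normal(size=80)

# First-difference operator limits the solution's spatial derivatives,
# i.e. constrains the space variability of the retrieved emissions.
D = np.diff(np.eye(n), axis=0)
gamma = 1.0                          # strength of the a priori term (assumed)
A = np.vstack([K, gamma * D])
b = np.concatenate([y, np.zeros(D.shape[0])])

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("max retrieval error:", np.abs(x_hat - x_true).max())
```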

  17. Rational Models for Inflation-Linked Derivatives

    DEFF Research Database (Denmark)

    Dam, Henrik; Macrina, Andrea; Skovmand, David

    2018-01-01

    in a multiplicative manner that allows for closed-form pricing of vanilla inflation products suchlike zero-coupon swaps, caps and floors, year-on-year swaps, caps and floors, and the exotic limited price index swap. The model retains the attractive features of a nominal multi-curve interest rate model such as closed...

  18. Weather Derivatives and Stochastic Modelling of Temperature

    Directory of Open Access Journals (Sweden)

    Fred Espen Benth

    2011-01-01

    Full Text Available We propose a continuous-time autoregressive model for the temperature dynamics with volatility being the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately while remaining analytically tractable. Futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices like cooling- and heating-degree days and cumulative average temperatures are computed, as well as option prices on them.
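
    A minimal discretized sketch of such a mean-reverting temperature model with a seasonal mean, together with a heating-degree-day index of the kind underlying CME contracts, is given below; all parameter values are assumptions, and the Barndorff-Nielsen and Shephard stochastic volatility is replaced by a constant for brevity.

```python
import numpy as np

# Discretized sketch of a mean-reverting temperature process around a
# seasonal mean, plus a heating-degree-day (HDD) index. All parameters are
# assumptions; the paper's stochastic volatility is replaced by a constant.
rng = np.random.default_rng(3)
days = np.arange(365 * 3)
seasonal_mean = 10.0 + 8.0 * np.sin(2 * np.pi * (days - 100) / 365.25)
kappa, sigma = 0.2, 2.0      # mean-reversion speed and volatility (assumed)

T = np.empty_like(seasonal_mean)
T[0] = seasonal_mean[0]
for t in range(1, len(days)):
    drift = kappa * (seasonal_mean[t] - T[t - 1])  # pull toward seasonal mean
    T[t] = T[t - 1] + drift + sigma * rng.normal()

hdd = np.maximum(18.0 - T[:31], 0.0).sum()         # HDD over the first month
print(f"HDD index over the first 31 days: {hdd:.1f}")
```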

  19. Open source integrated modeling environment Delta Shell

    Science.gov (United States)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse separately from each other, as well as part of an integrated environment that can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed in the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff, a river flow and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.

  20. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  1. The infinitesimal model: Definition, derivation, and implications.

    Science.gov (United States)

    Barton, N H; Etheridge, A M; Véber, A

    2017-12-01

    Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed within families as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. Thus, the variance that segregates within families is not perturbed by selection, and can be predicted from the variance components. This does not necessarily imply that the trait distribution across the whole population should be Gaussian, and indeed selection or population structure may have a substantial effect on the overall trait distribution. One of our main aims is to identify some general conditions on the allelic effects for the infinitesimal model to be accurate. We first review the long history of the infinitesimal model in quantitative genetics. Then we formulate the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, …. We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. We prove in particular that, within each family, the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order 1/M. Simulations suggest that in some cases the convergence
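
    The model's central claim, that the within-family segregation variance does not depend on the parental trait values, can be checked with a toy Mendelian simulation; the locus count and effect scaling below are illustrative assumptions.

```python
import numpy as np

# Toy check of the infinitesimal model's key property: for a purely additive
# trait, the within-family segregation variance does not depend on the
# parental trait values (error of order 1/M). Parameters are illustrative.
rng = np.random.default_rng(4)
M = 1000                           # number of additive loci
effect = 1.0 / np.sqrt(M)          # per-locus effect (1/sqrt(M) scaling)

def gamete(parent):
    """Pick one allele per locus from a parent's (M, 2) genotype array."""
    picks = rng.integers(0, 2, size=M)
    return parent[np.arange(M), picks]

mother = rng.integers(0, 2, size=(M, 2))
father = rng.integers(0, 2, size=(M, 2))
kids = np.array([effect * (gamete(mother) + gamete(father)).sum()
                 for _ in range(5000)])

midparent = effect * (mother.sum() + father.sum()) / 2.0
print("offspring mean vs midparent:", kids.mean(), midparent)
# Only heterozygous loci segregate; with ~half the loci heterozygous in each
# parent, the variance is ~0.25 here regardless of the parental trait values.
print("within-family variance:", kids.var())
```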

  2. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages in description and the physical explanation of the new model are explored by numerical simulation. Further discussions on the dissimilitudes, such as the computational efficiency, diffusion behavior and heavy-tail phenomena of the new model and the variable-order fractional derivative model, are also offered.
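
    For background, one common definition of the constant-order fractal derivative that such models generalize is shown below; this is a hedged restatement of general knowledge, not necessarily the paper's exact notation.

```latex
% Constant-order fractal derivative (background definition); a variable-order
% model replaces the constant alpha by a function alpha(x, t).
\[
  \frac{\partial u}{\partial t^{\alpha}}
    = \lim_{t_1 \to t} \frac{u(t_1) - u(t)}{t_1^{\alpha} - t^{\alpha}},
  \qquad \alpha > 0 .
\]
```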

  3. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
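
    A toy steady-state balance illustrates the flavor of such an MMFA source apportionment; the source categories echo the abstract, but every load and the retention fraction below are invented placeholders.

```python
# Toy steady-state nutrient balance in the spirit of a mathematical Material
# Flow Analysis (MMFA). All numbers below are invented placeholders, not the
# study's data.
loads_t_per_yr = {
    "aquaculture": 12_000,     # point source
    "rice_farming": 9_500,     # non-point source
    "pig_farms": 2_300,
    "households": 1_800,
    "industry": 900,
}
retention = 0.30               # assumed in-river retention (sedimentation,
                               # emission to air, wetland uptake)

total_in = sum(loads_t_per_yr.values())
exported = (1.0 - retention) * total_in
print(f"total input {total_in} t/yr, river export {exported:.0f} t/yr")
for source, load in sorted(loads_t_per_yr.items(), key=lambda kv: -kv[1]):
    print(f"{source:>12}: {100.0 * load / total_in:4.1f}% of input")
```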

  4. Source term derivation and radiological safety analysis for the TRICO II research reactor in Kinshasa

    International Nuclear Information System (INIS)

    Muswema, J.L.; Ekoko, G.B.; Lukanda, V.M.; Lobo, J.K.-K.; Darko, E.O.; Boafo, E.K.

    2015-01-01

    Highlights: • Atmospheric dispersion modeling for two credible accidents of the TRIGA Mark II research reactor in Kinshasa (TRICO II) was performed. • Radiological safety analysis after the postulated initiating events (PIE) was also carried out. • The Karlsruhe KORIGEN and the HotSpot Health Physics codes were used to achieve the objectives of this study. • All the values of effective dose obtained following the accident scenarios were below the regulatory limits for reactor staff members and the public, respectively. - Abstract: The source term from the 1 MW TRIGA Mark II research reactor core of the Democratic Republic of the Congo was derived in this study. Atmospheric dispersion modeling followed by radiation dose calculations was performed based on two postulated accident scenarios. The derivation was made from an inventory of peak radioisotope activities released in the core, computed with the Karlsruhe version of the isotope generation code KORIGEN. The atmospheric dispersion modeling was performed with the HotSpot code, and its application yielded a radiation dose profile around the site using meteorological parameters specific to the area under study. The two accident scenarios were picked from possible accident analyses for TRIGA and TRIGA-fueled reactors, involving the destruction of the fuel element with the highest activity release and a plane crash on the reactor building as the worst-case scenario. Deterministic effects of these scenarios are used to update the Safety Analysis Report (SAR) of the reactor; in its current version, these scenarios are not yet incorporated. Site-specific meteorological conditions were collected from two meteorological stations: one installed within the Atomic Energy Commission and another at the National Meteorological Agency (METTELSAT), which is not far from the site. Results show that in both accident scenarios, radiation doses remain within the limits, far below the recommended maximum effective
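
    As a hedged illustration of the generic family of calculations such dispersion codes perform (this is not HotSpot's internal implementation), the sketch below evaluates the ground-level centerline Gaussian plume concentration χ = Q/(π σy σz u) for a ground release; the release rate, wind speed and power-law dispersion coefficients are invented.

```python
import numpy as np

# Hedged illustration of a ground-level Gaussian plume estimate for a ground
# release; NOT HotSpot's internals. Release rate, wind speed and the
# single-stability-class power-law dispersion coefficients are assumptions.
Q = 1.0e10    # release rate, Bq/s (assumed)
u = 3.0       # wind speed at release height, m/s (assumed)

def sigmas(x_m):
    """Crude neutral-stability style power laws for sigma_y, sigma_z (m)."""
    return 0.08 * x_m**0.92, 0.06 * x_m**0.91

def chi(x_m):
    """Ground-level centerline concentration (Bq/m^3), ground release."""
    sy, sz = sigmas(x_m)
    return Q / (np.pi * sy * sz * u)

for x in (100.0, 1_000.0, 10_000.0):
    print(f"x = {x:7.0f} m -> chi = {chi(x):.3e} Bq/m^3")
```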

  5. Source term derivation and radiological safety analysis for the TRICO II research reactor in Kinshasa

    Energy Technology Data Exchange (ETDEWEB)

    Muswema, J.L., E-mail: jeremie.muswem@unikin.ac.cd [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Ekoko, G.B. [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Lukanda, V.M. [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Democratic Republic of the Congo's General Atomic Energy Commission, P.O. Box AE1 (Congo, The Democratic Republic of the); Lobo, J.K.-K. [Faculty of Science, University of Kinshasa, P.O. Box 190, KIN XI (Congo, The Democratic Republic of the); Darko, E.O. [Radiation Protection Institute, Ghana Atomic Energy Commission, P.O. Box LG 80, Legon, Accra (Ghana); Boafo, E.K. [University of Ontario Institute of Technology, 2000 Simcoe St. North, Oshawa, ON L1H 7K4 (Canada)

    2015-01-15

    Highlights: • Atmospheric dispersion modeling for two credible accidents of the TRIGA Mark II research reactor in Kinshasa (TRICO II) was performed. • Radiological safety analysis after the postulated initiating events (PIE) was also carried out. • The Karlsruhe KORIGEN and the HotSpot Health Physics codes were used to achieve the objectives of this study. • All the values of effective dose obtained following the accident scenarios were below the regulatory limits for reactor staff members and the public, respectively. - Abstract: The source term from the 1 MW TRIGA Mark II research reactor core of the Democratic Republic of the Congo was derived in this study. Atmospheric dispersion modeling followed by radiation dose calculations was performed based on two postulated accident scenarios. The derivation was made from an inventory of peak radioisotope activities released in the core, computed with the Karlsruhe version of the isotope generation code KORIGEN. The atmospheric dispersion modeling was performed with the HotSpot code, and its application yielded a radiation dose profile around the site using meteorological parameters specific to the area under study. The two accident scenarios were picked from possible accident analyses for TRIGA and TRIGA-fueled reactors, involving the destruction of the fuel element with the highest activity release and a plane crash on the reactor building as the worst-case scenario. Deterministic effects of these scenarios are used to update the Safety Analysis Report (SAR) of the reactor; in its current version, these scenarios are not yet incorporated. Site-specific meteorological conditions were collected from two meteorological stations: one installed within the Atomic Energy Commission and another at the National Meteorological Agency (METTELSAT), which is not far from the site. Results show that in both accident scenarios, radiation doses remain within the limits, far below the recommended maximum effective

  6. Source modelling in seismic risk analysis for nuclear power plants

    International Nuclear Information System (INIS)

    Yucemen, M.S.

    1978-12-01

    The proposed probabilistic procedure provides a consistent method for the modelling, analysis and updating of uncertainties that are involved in the seismic risk analysis for nuclear power plants. The potential earthquake activity zones are idealized as point, line or area sources. For these seismic source types, expressions to evaluate their contribution to seismic risk are derived, considering all the possible site-source configurations. The seismic risk at a site is found to depend not only on the inherent randomness of the earthquake occurrences with respect to magnitude, time and space, but also on the uncertainties associated with the predicted values of the seismic and geometric parameters, as well as the uncertainty in the attenuation model. The uncertainty due to the attenuation equation is incorporated into the analysis through the use of random correction factors. The influence of the uncertainty resulting from insufficient information on the seismic parameters and source geometry is introduced into the analysis by computing a mean risk curve averaged over the various alternative assumptions on the parameters and source geometry. Seismic risk analysis is carried out for the city of Denizli, which is located in the most seismically active zone of Turkey. The second analysis is for Akkuyu
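
    The contribution of an idealized point source to seismic risk can be sketched with a standard hazard integral combining a truncated Gutenberg-Richter recurrence law with an attenuation relation and its aleatory scatter; every number below is an illustrative assumption, not a value from the study.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of a point-source hazard integral: the annual rate of
# exceeding a PGA threshold at a site. Rates, coefficients and distance are
# illustrative assumptions only.
nu = 0.5                       # annual rate of events with M >= Mmin
Mmin, Mmax, b = 4.0, 7.5, 1.0  # magnitude bounds and G-R b-value
beta = b * np.log(10.0)
R = 30.0                       # site-to-source distance, km
sigma_lnA = 0.6                # aleatory scatter of ln(PGA)

def median_pga(M, R):
    # toy attenuation: ln(PGA [g]) = -3.5 + 0.9*M - 1.1*ln(R + 10)
    return np.exp(-3.5 + 0.9 * M - 1.1 * np.log(R + 10.0))

mags = np.linspace(Mmin, Mmax, 500)
# truncated exponential magnitude density
pdf = beta * np.exp(-beta * (mags - Mmin)) / (1.0 - np.exp(-beta * (Mmax - Mmin)))
target = 0.2                   # PGA threshold, g
p_exceed = norm.sf((np.log(target) - np.log(median_pga(mags, R))) / sigma_lnA)
annual_rate = nu * np.trapz(pdf * p_exceed, mags)
print(f"annual rate of exceeding {target} g: {annual_rate:.2e}")
```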

  7. Silicon Carbide Derived Carbons: Experiments and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kertesz, Miklos [Georgetown University, Washington DC 20057

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond, starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but for larger sizes they tend to show graphitized regions. In fact, clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and their physical characteristics that can be used to spectroscopically identify them. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  8. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end is a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian–Schell model results shows better agreement with the measurements. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert–Zernike equation, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
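
    Under the quasi-homogeneous assumption, the incoherent superposition of shifted elementary beams makes the output intensity a convolution of the weighting function with the elementary-beam intensity; the sketch below illustrates this with an assumed Gaussian elementary beam and a top-hat weighting, not the paper's measured distributions.

```python
import numpy as np

# Sketch of the elementary-source picture: mutually uncorrelated, laterally
# shifted elementary beams add in intensity, so the output profile is the
# weighting function convolved with the elementary-beam intensity. The
# Gaussian width, top-hat weighting and core size are display assumptions.
x = np.linspace(-100e-6, 100e-6, 2001)             # transverse coordinate, m
dx = x[1] - x[0]
w_elem = 8e-6                                      # elementary beam radius (assumed)
elem_intensity = np.exp(-2.0 * (x / w_elem) ** 2)  # Gaussian elementary beam

core_radius = 52.5e-6                              # e.g. a 105-um-core MMF
weight = (np.abs(x) <= core_radius).astype(float)  # top-hat weighting (assumed)

# Incoherent superposition -> convolution of weight and elementary intensity
profile = np.convolve(weight, elem_intensity, mode="same") * dx
profile /= profile.max()
fwhm_um = 1e6 * dx * np.count_nonzero(profile >= 0.5)
print(f"modeled output FWHM: {fwhm_um:.1f} um")
```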

  9. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

    Science.gov (United States)

    Pavlov, V. M.

    2017-07-01

    The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. The differentiation of Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of the Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the second-order roots.

  10. Wind gust models derived from field data

    Science.gov (United States)

    Gawronski, W.

    1995-01-01

    Wind data measured during a field experiment were used to verify the analytical model of wind gusts. Good agreement was observed; the only discrepancy occurred for the azimuth error in the front and back winds, where the simulated errors were smaller than the measured ones. This happened because of the assumption of spatial coherence in the wind gust model, which generated a symmetric antenna load and, in consequence, a low azimuth servo error. This result indicates a need to upgrade the wind gust model to a spatially incoherent one that reflects real gusts more accurately. In order to design a controller with wind-disturbance rejection properties, the wind disturbance should be known at the input to the antenna rate loop model. The second task, therefore, consists of developing a digital filter that simulates the wind gusts at the antenna rate input. This filter matches the spectrum of the measured servo errors. In this scenario, the wind gusts are generated by introducing white noise to the filter input.
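
    The "white noise through a shaping filter" idea can be sketched in a few lines; the filter below is a simple first-order low-pass with an invented corner frequency, whereas the paper fits its filter to the measured servo-error spectrum.

```python
import numpy as np
from scipy import signal

# Illustrative shaping filter: white noise through a low-pass filter yields
# colored noise whose spectrum mimics a gust spectrum. A first-order
# Butterworth with an assumed corner frequency stands in for the paper's
# spectrum-matched filter.
fs = 10.0                         # sample rate, Hz (assumed)
f_corner = 0.1                    # corner frequency, Hz (assumed)
b, a = signal.butter(1, f_corner, btype="low", fs=fs)

rng = np.random.default_rng(5)
white = rng.normal(size=10_000)
gust = signal.lfilter(b, a, white)   # simulated gust signal at the rate input

f, Pxx = signal.welch(gust, fs=fs, nperseg=1024)
print("power concentrates below the corner:", f[np.argmax(Pxx)], "Hz")
```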

  11. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified
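
    Propagating failure-mode probabilities to the system level through fault trees, as described above, reduces to combining independent event probabilities through AND/OR gates; the tree and the numbers below are a made-up illustration, not FASRE's actual architecture.

```python
# Tiny sketch of fault-tree propagation of failure-mode probabilities. The
# two-function tree and all probabilities are a made-up example.
def or_gate(*probs):
    """P(at least one of several independent failure events)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """P(all of several independent failure events)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

p_func_a = or_gate(1e-4, 5e-5)                   # function A: two failure modes
p_func_b = or_gate(2e-4, and_gate(1e-2, 1e-2))   # B: a mode or a mode pair
p_system = or_gate(p_func_a, p_func_b)           # system fails if A or B fails
print(f"system failure probability: {p_system:.3e}")
```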

  12. A spatial structural derivative model for ultraslow diffusion

    Directory of Open Access Journals (Sweden)

    Xu Wei

    2017-01-01

    Full Text Available This study investigates ultraslow diffusion by a spatial structural derivative, in which the exponential function e^x is selected as the structural function to construct the local structural derivative diffusion equation model. The analytical solution of the diffusion equation is a form of the bi-exponential distribution. Its corresponding mean squared displacement is numerically calculated, and increases more slowly than the logarithmic function of time. The local structural derivative diffusion equation with the structural function e^x in space is an alternative physical and mathematical model for characterizing a kind of ultraslow diffusion.

  13. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  14. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Full Text Available Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energy are attracting more and more attention. The present paper presents different mathematical models for different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems operating in the geographical and meteorological conditions specific to the central part of the Transylvania region are also presented. The conclusions based on the validation of such models are also shown.
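
    As a flavor of the wind-side models such papers validate, the sketch below implements a generic cubic power curve between cut-in and rated wind speed; the turbine parameters are generic assumptions, not values from the paper.

```python
# Hedged sketch of a generic wind-turbine power model: cubic between cut-in
# and rated wind speed, constant up to cut-out. Parameters are assumptions.
def wind_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0e6):
    """Electrical power (W) for hub-height wind speed v (m/s)."""
    if v < v_in or v >= v_out:
        return 0.0               # below cut-in or above cut-out: no output
    if v >= v_rated:
        return p_rated           # rated region: output capped
    # partial-load region: interpolate with the cube of the wind speed
    return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

for v in (2, 5, 8, 12, 20, 26):
    print(f"{v:2d} m/s -> {wind_power(v) / 1e6:.2f} MW")
```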

  15. Analysis of Drude model using fractional derivatives without singular kernels

    Directory of Open Access Journals (Sweden)

    Jiménez Leonardo Martínez

    2017-11-01

    Full Text Available We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels, namely the Caputo–Fabrizio (CF) derivative, and fractional derivatives with a stretched Mittag-Leffler function. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Due to the non-singular fractional kernels, it is possible to consider complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, revealing a considerable difference when γ < 0.8.
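
    For reference, the Caputo–Fabrizio derivative referred to above is commonly written as follows for 0 < γ < 1, with M(γ) a normalization function satisfying M(0) = M(1) = 1; this is a hedged restatement of the standard definition, not the paper's notation.

```latex
% Caputo-Fabrizio derivative of order gamma (0 < gamma < 1); M(gamma) is a
% normalization function with M(0) = M(1) = 1.
\[
  \mathcal{D}_t^{\gamma} f(t)
    = \frac{M(\gamma)}{1-\gamma}
      \int_0^t f'(\tau)\,
      \exp\!\left[ -\frac{\gamma\,(t-\tau)}{1-\gamma} \right] \mathrm{d}\tau .
\]
```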

  16. Source-based neurofeedback methods using EEG recordings: training altered brain activity in a functional brain source derived from blind source separation

    Science.gov (United States)

    White, David J.; Congedo, Marco; Ciorciari, Joseph

    2014-01-01

    A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants who completed 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback.

  17. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (author)

  18. Modeling a neutron rich nuclei source

    International Nuclear Information System (INIS)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J.; Mirea, M.

    2000-01-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (authors)

  19. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family-wise error rate.

  20. Inflationary models with non-minimally derivative coupling

    International Nuclear Information System (INIS)

    Yang, Nan; Fei, Qin; Gong, Yungui; Gao, Qing

    2016-01-01

    We derive the general formulae for the scalar and tensor spectral tilts to second order for inflationary models with non-minimal derivative coupling, without taking the high-friction limit. The non-minimal kinetic coupling to the Einstein tensor brings the energy scale in the inflationary models down to a sub-Planckian value. In the high-friction limit, the Lyth bound is modified with an extra suppression factor, so that the field excursion of the inflaton is sub-Planckian. The inflationary models with non-minimal derivative coupling are more consistent with observations in the high-friction limit. In particular, with the help of the non-minimal derivative coupling, the quartic power-law potential is consistent with the observational constraint at 95% CL. (paper)

  1. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of fractional derivative on the large deflection of the cantilever viscoelastic beam, is investigated after 10, 100, and 1000 hours. The main contribution of this paper is finite element implementation for nonlinear analysis of viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.

  2. Derivative interactions and perturbative UV contributions in N Higgs doublet models

    Energy Technology Data Exchange (ETDEWEB)

    Kikuta, Yohei [KEK Theory Center, KEK, Tsukuba (Japan); The Graduate University for Advanced Studies, Department of Particle and Nuclear Physics, Tsukuba (Japan); Yamamoto, Yasuhiro [Universidad de Granada, Deportamento de Fisica Teorica y del Cosmos, Facultad de Ciencias and CAFPE, Granada (Spain)

    2016-05-15

    We study the Higgs derivative interactions in models including an arbitrary number of Higgs doublets. These interactions are generated in two ways. One is higher-order corrections in composite Higgs models, and the other is the integration of heavy scalars and vectors. In the latter case, three-point couplings between the Higgs doublets and these heavy states are the sources of the derivative interactions. Their representations are constrained by the requirement that they couple to the doublets. We explicitly calculate all derivative interactions generated by integrating out these states. Their degrees of freedom and the conditions to impose the custodial symmetry are discussed. We also study vector boson scattering processes in a couple of two-Higgs-doublet models to see experimental signals of the derivative interactions. They are affected differently by each heavy field. (orig.)

  3. Data analysis and source modelling for LISA

    International Nuclear Information System (INIS)

    Shang, Yu

    2014-01-01

    Gravitational waves (GWs) are one of the most important predictions of general relativity. Besides the direct proof of the existence of GWs, there are already several ground-based detectors (such as LIGO, GEO, etc.) and the planned future space mission LISA, which aim to detect GWs directly. GWs contain a large amount of information about their sources; extracting this information can help us uncover the physical properties of the source and even open a new window for understanding the Universe. Hence, GW data analysis will be a challenging task in the search for GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge. We found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable to, if not better than, already existing algorithms. In the second work, we introduce a new phenomenological waveform model for the extreme-mass-ratio inspiral (EMRI) system. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase, which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in the simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform is very effective in the data analysis of EMRI signals.
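
    The phenomenological EMRI template described here, harmonics with constant amplitudes and Taylor-expanded phases, can be sketched directly; the amplitudes and phase coefficients below are invented for illustration.

```python
import numpy as np

# Rough sketch of a phenomenological EMRI-style template: a sum of harmonics
# with constant amplitudes and slowly evolving phases, each phase expanded as
# a Taylor series (truncated here at the first frequency derivative). All
# amplitudes and coefficients are invented.
t = np.linspace(0.0, 1.0e5, 200_000)          # time samples, s
harmonics = [
    # (amplitude, phi0 [rad], f0 [Hz], fdot [Hz/s])
    (1.0, 0.3, 2.0e-3, 1.0e-9),
    (0.5, 1.1, 4.0e-3, 2.0e-9),
    (0.2, 2.0, 6.0e-3, 3.0e-9),
]

h = np.zeros_like(t)
for amp, phi0, f0, fdot in harmonics:
    phase = phi0 + 2.0 * np.pi * (f0 * t + 0.5 * fdot * t**2)
    h += amp * np.cos(phase)
print("first waveform samples:", np.round(h[:5], 4))
```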

  4. Hamiltonian derivation of a gyrofluid model for collisionless magnetic reconnection

    International Nuclear Information System (INIS)

    Tassi, E

    2014-01-01

    We consider a simple electromagnetic gyrokinetic model for collisionless plasmas and show that it possesses a Hamiltonian structure. Subsequently, from this model we derive a two-moment gyrofluid model by means of a procedure which guarantees that the resulting gyrofluid model is also Hamiltonian. The first step in the derivation consists of imposing a generic fluid closure in the Poisson bracket of the gyrokinetic model, after expressing such bracket in terms of the gyrofluid moments. The constraint of the Jacobi identity, which every Poisson bracket has to satisfy, then selects what closures can lead to a Hamiltonian gyrofluid system. For the case at hand, it turns out that the only closures (not involving integro-differential operators or an explicit dependence on the spatial coordinates) that lead to a valid Poisson bracket are those for which the second-order parallel moment, independently for each species, is proportional to the zero-order moment. In particular, if one chooses an isothermal closure based on the equilibrium temperatures and derives accordingly the Hamiltonian of the system from the Hamiltonian of the parent gyrokinetic model, one recovers a known Hamiltonian gyrofluid model for collisionless reconnection. The proposed procedure, in addition to yielding a gyrofluid model which automatically conserves the total energy, also provides, through the resulting Poisson bracket, a way to derive further conservation laws of the gyrofluid model, associated with the so-called Casimir invariants. We show that a relation exists between Casimir invariants of the gyrofluid model and those of the gyrokinetic parent model. The application of such a Hamiltonian derivation procedure to this two-moment gyrofluid model is a first step toward its application to more realistic, higher-order fluid or gyrofluid models for tokamaks. It also extends, to the electromagnetic gyrokinetic case, recent applications of the same procedure to Vlasov and drift-kinetic systems

  5. Tissue Source and Cell Expansion Condition Influence Phenotypic Changes of Adipose-Derived Stem Cells

    Directory of Open Access Journals (Sweden)

    Lauren H. Mangum

    2017-01-01

    Full Text Available Stem cells derived from the subcutaneous adipose tissue of debrided burned skin represent an appealing source of adipose-derived stem cells (ASCs) for regenerative medicine. Traditional tissue culture uses fetal bovine serum (FBS), which complicates utilization of ASCs in human medicine. Human platelet lysate (hPL) is one potential xeno-free, alternative supplement for use in ASC culture. In this study, adipogenic and osteogenic differentiation in media supplemented with 10% FBS or 10% hPL was compared in human ASCs derived from abdominoplasty (HAP) or from adipose associated with debrided burned skin (BH). Most (95–99%) cells cultured in FBS stained positive for CD73, CD90, CD105, and CD142. FBS supplementation was associated with increased triglyceride content and expression of adipogenic genes. Culture in hPL significantly decreased surface staining of CD105 by 31% and 48% and CD142 by 27% and 35% in HAP and BH, respectively (p<0.05). Culture of BH-ASCs in hPL also increased expression of markers of osteogenesis and increased ALP activity. These data indicate that application of ASCs for wound healing may be influenced by ASC source as well as the culture conditions used to expand them. As such, these factors must be taken into consideration before ASCs are used for regenerative purposes.

  6. Tissue Source and Cell Expansion Condition Influence Phenotypic Changes of Adipose-Derived Stem Cells

    Science.gov (United States)

    Mangum, Lauren H.; Stone, Randolph; Wrice, Nicole L.; Larson, David A.; Florell, Kyle F.; Christy, Barbara A.; Herzig, Maryanne C.; Cap, Andrew P.

    2017-01-01

    Stem cells derived from the subcutaneous adipose tissue of debrided burned skin represent an appealing source of adipose-derived stem cells (ASCs) for regenerative medicine. Traditional tissue culture uses fetal bovine serum (FBS), which complicates utilization of ASCs in human medicine. Human platelet lysate (hPL) is one potential xeno-free, alternative supplement for use in ASC culture. In this study, adipogenic and osteogenic differentiation in media supplemented with 10% FBS or 10% hPL was compared in human ASCs derived from abdominoplasty (HAP) or from adipose associated with debrided burned skin (BH). Most (95–99%) cells cultured in FBS were stained positive for CD73, CD90, CD105, and CD142. FBS supplementation was associated with increased triglyceride content and expression of adipogenic genes. Culture in hPL significantly decreased surface staining of CD105 by 31% and 48% and CD142 by 27% and 35% in HAP and BH, respectively (p < 0.05). Culture of BH-ASCs in hPL also increased expression of markers of osteogenesis and increased ALP activity. These data indicate that application of ASCs for wound healing may be influenced by ASC source as well as culture conditions used to expand them. As such, these factors must be taken into consideration before ASCs are used for regenerative purposes. PMID:29138638

  7. Modeling neurodegenerative diseases with patient-derived induced pluripotent cells

    DEFF Research Database (Denmark)

    Poon, Anna; Zhang, Yu; Chandrasekaran, Abinaya

    2017-01-01

    patient-specific induced pluripotent stem cells (iPSCs) and isogenic controls generated using CRISPR-Cas9 mediated genome editing. The iPSCs are self-renewable and capable of being differentiated into the cell types affected by the diseases. These in vitro models based on patient-derived iPSCs provide...... the possibilities of generating three-dimensional (3D) models using the iPSCs-derived cells and compare their advantages and disadvantages to conventional two-dimensional (2D) models....

  8. Deriving the Dividend Discount Model in the Intermediate Microeconomics Class

    Science.gov (United States)

    Norman, Stephen; Schlaudraff, Jonathan; White, Karianne; Wills, Douglas

    2013-01-01

    In this article, the authors show that the dividend discount model can be derived using the basic intertemporal consumption model that is introduced in a typical intermediate microeconomics course. This result will be of use to instructors who teach microeconomics to finance students in that it demonstrates the value of utility maximization in…
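
    The target of that classroom derivation is the standard Gordon growth form of the dividend discount model; as a worked statement (a standard finance identity, not quoted from the article):

        P_0 = \sum_{t=1}^{\infty} \frac{D_t}{(1+r)^t}, \qquad D_t = D_1 (1+g)^{t-1} \;\Longrightarrow\; P_0 = \frac{D_1}{r - g}, \quad r > g,

    where r is the discount rate, here obtained from the investor's intertemporal marginal rate of substitution, and g is the dividend growth rate.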

  9. On a derivation of the Salam-Weinberg model

    International Nuclear Information System (INIS)

    Squires, E.J.

    1979-01-01

    It is shown how the graded Lie-algebra structure of a recent derivation of the Salam-Weinberg model might arise from the form of allowed transformations on the lepton lagrangian in a 6-dimensional space. The possibility that the model might allow two identically coupled leptonic sectors, and others in which the chiralities are reversed, is discussed. (Auth.)

  10. Some remarks on the small-distance derivative model

    International Nuclear Information System (INIS)

    Jannussis, A.

    1985-01-01

    In the present work the new expressions of the derivatives for small distances are investigated according to the Gonzales-Diaz model. This model is noncanonical, is a particular case of the Lie-admissible formulation, and has applications at distance and time scales comparable with the Planck dimensions.

  11. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  12. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  13. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing effects of various policy options and to enable optimization of countermeasures applied to different parts of the source-risk chain. There are several advantages to developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of processes and sources to be included in the source-risk chain, the models presently available in the Netherlands are investigated. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing for each part of the source-risk chain the most convenient model, a source-risk chain model for radon may be realized. However, the calculation of dose from the radon concentrations and the status of the validation of most models should be improved. Calculations with the proposed source-risk model will, at present, give estimates with a large uncertainty. For further development of the source-risk model an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners. The model owners execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs
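
    To make the idea of chaining sub-models concrete, here is a minimal sketch of a source-to-risk calculation for indoor radon. Every number and conversion factor below (entry rate, air-exchange rate, dose and risk coefficients) is an illustrative placeholder, not a value from the Dutch models discussed in the record.

        # Minimal source -> concentration -> dose -> risk chain (illustrative only).
        entry_rate = 0.02          # radon entry into room air, Bq/(m3*s), assumed
        air_exchange = 0.5 / 3600  # ventilation rate, 1/s (0.5 air changes per hour)
        decay_rn222 = 2.1e-6       # Rn-222 decay constant, 1/s

        # Steady-state indoor concentration from a well-mixed box model.
        concentration = entry_rate / (air_exchange + decay_rn222)  # Bq/m3

        occupancy = 7000           # hours spent indoors per year, assumed
        dose_coeff = 9e-6          # mSv per (Bq/m3 * h), assumed dose factor
        annual_dose = concentration * occupancy * dose_coeff       # mSv/year

        risk_coeff = 5e-5          # fatal risk per mSv, assumed
        annual_risk = annual_dose * risk_coeff

        print(f"{concentration:.0f} Bq/m3, {annual_dose:.1f} mSv/y, risk {annual_risk:.1e}/y")

    The point is architectural: each line stands in for a separately owned sub-model whose output feeds the next link of the chain.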

  14. State-Space Modelling of Loudspeakers using Fractional Derivatives

    DEFF Research Database (Denmark)

    King, Alexander Weider; Agerkvist, Finn T.

    2015-01-01

    This work investigates the use of fractional order derivatives in modeling moving-coil loudspeakers. A fractional order state-space solution is developed, leading the way towards incorporating nonlinearities into a fractional order system. The method is used to calculate the response of a fractional harmonic oscillator, representing the mechanical part of a loudspeaker, showing the effect of the fractional derivative and its relationship to viscoelasticity. Finally, a loudspeaker model with a fractional order viscoelastic suspension and fractional order voice coil is fit to measurement data...
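
    A common way to realize such a model numerically is the Grünwald-Letnikov discretization of the fractional derivative. The sketch below is our construction with arbitrary parameter values, not the authors' implementation; it steps a fractional harmonic oscillator x'' + c*D^alpha(x) + k*x = 0, the kind of system the record uses to illustrate viscoelastic damping.

        import numpy as np

        def gl_weights(alpha, n):
            # Grünwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), via recurrence.
            w = np.empty(n + 1)
            w[0] = 1.0
            for j in range(1, n + 1):
                w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
            return w

        def fractional_oscillator(alpha=0.5, c=0.4, k=1.0, h=0.01, T=20.0):
            # x'' + c * D^alpha x + k * x = 0, with x(0) = 1, x'(0) = 0.
            n = int(T / h)
            w = gl_weights(alpha, n)
            x, v = np.zeros(n + 1), np.zeros(n + 1)
            x[0] = 1.0
            for i in range(n):
                # History sum approximating the fractional damping term at step i.
                frac = (w[:i + 1][::-1] @ x[:i + 1]) / h**alpha
                a = -(c * frac + k * x[i])
                v[i + 1] = v[i] + h * a          # semi-implicit Euler step
                x[i + 1] = x[i] + h * v[i + 1]
            return x

        print(fractional_oscillator()[::400])    # coarse sample of the decaying response

    Note the defining cost of fractional damping: the derivative at each step depends on the entire displacement history, not just the current state.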

  15. Deriving consumer-facing disease concepts for family health histories using multi-source sampling.

    Science.gov (United States)

    Hulse, Nathan C; Wood, Grant M; Haug, Peter J; Williams, Marc S

    2010-10-01

    The family health history has long been recognized as an effective way of understanding individuals' susceptibility to familial disease; yet electronic tools to support the capture and use of these data have been characterized as inadequate. As part of an ongoing effort to build patient-facing tools for entering detailed family health histories, we have compiled a set of concepts specific to familial disease using multi-source sampling. These concepts were abstracted by analyzing family health history data patterns in our enterprise data warehouse, collection patterns of consumer personal health records, analyses from the local state health department, a healthcare data dictionary, and concepts derived from genetic-oriented consumer education materials. Collectively, these sources yielded a set of more than 500 unique disease concepts, represented by more than 2500 synonyms for supporting patients in entering coded family health histories. We expect that these concepts will be useful in providing meaningful data and education resources for patients and providers alike.

  16. Trimethylsilyl derivatives of organic compounds in source samples and in atmospheric fine particulate matter.

    Science.gov (United States)

    Nolte, Christopher G; Schauer, James J; Cass, Glen R; Simoneit, Bernd R T

    2002-10-15

    Source sample extracts of vegetative detritus, motor vehicle exhaust, tire dust, paved road dust, and cigarette smoke have been silylated and analyzed by GC-MS to identify polar organic compounds that may serve as tracers for those specific emission sources of atmospheric fine particulate matter. Candidate molecular tracers were also identified in atmospheric fine particle samples collected in the San Joaquin Valley of California. A series of normal primary alkanols, dominated by even carbon-numbered homologues from C26 to C32, the secondary alcohol 10-nonacosanol, and some phytosterols are prominent polar compounds in the vegetative detritus source sample. No new polar organic compounds are found in the motor vehicle exhaust samples. Several hydrogenated resin acids are present in the tire dust sample, which might serve as useful tracers for those sources in areas that are heavily impacted by motor vehicle traffic. Finally, the alcohol and sterol emission profiles developed for all the source samples examined in this project are scaled according to the ambient fine particle mass concentrations attributed to those sources by a chemical mass balance receptor model that was previously applied to the San Joaquin Valley to compute the predicted atmospheric concentrations of individual alcohols and sterols. The resulting underprediction of alkanol concentrations at the urban sites suggests that alkanols may be more sensitive tracers for natural background from vegetative emissions (i.e., waxes) than the high molecular weight alkanes, which have been the best previously available tracers for that source.
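
    The chemical mass balance step mentioned above is a small linear inversion: measured ambient tracer concentrations are modeled as source profiles times source contributions. A hedged sketch follows; all numbers are invented for illustration, and the real study used its own measured profiles and species.

        import numpy as np
        from scipy.optimize import nnls

        # Rows: tracer species (e.g., an alkanol, a sterol, a resin acid).
        # Columns: fractional abundance of each tracer in fine-particle emissions
        # from each source (hypothetical values, for illustration only).
        profiles = np.array([
            [0.030, 0.001, 0.002],   # C28 alkanol
            [0.004, 0.020, 0.001],   # a phytosterol
            [0.001, 0.002, 0.015],   # hydrogenated resin acid
        ])

        ambient = np.array([0.55, 0.18, 0.12])   # measured ambient tracer conc. (ug/m3)

        # Chemical mass balance: ambient_i = sum_j profiles[i, j] * contribution_j,
        # solved with nonnegative least squares so contributions stay physical.
        contributions, residual = nnls(profiles, ambient)
        print("source contributions (ug/m3):", contributions)
        print("fit residual:", residual)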

  17. Infrapatellar Fat Pad: An Alternative Source of Adipose-Derived Mesenchymal Stem Cells

    Directory of Open Access Journals (Sweden)

    P. Tangchitphisut

    2016-01-01

    Full Text Available Introduction. The infrapatellar fat pad (IPFP) represents an emerging alternative source of adipose-derived mesenchymal stem cells (ASCs). We compared the characteristics and differentiation capacity of ASCs isolated from IPFP and subcutaneous fat (SC). Materials and Methods. ASCs were harvested from either IPFP or SC. IPFPs were collected from patients undergoing total knee arthroplasty (TKA), whereas subcutaneous tissues were collected from patients undergoing lipoaspiration. Immunophenotypes of surface antigens were evaluated. Their ability to form colony-forming units (CFUs) and their differentiation potential were determined. The ASC karyotype was evaluated. Results. There was no difference in the number of CFUs and size of CFUs between IPFP and SC sources. ASCs isolated from both sources had a normal karyotype. The mesenchymal stem cell (MSC) markers on flow cytometry were equivalent. IPFP-ASCs demonstrated significantly higher expression of SOX-9 and RUNX-2 over ASCs isolated from SC (6.19 ± 5.56- versus 0.47 ± 0.62-fold; p value = 0.047, and 17.33 ± 10.80- versus 1.56 ± 1.31-fold; p value = 0.030, resp.). Discussion and Conclusion. CFU assay of IPFP-ASCs and SC-ASCs harvested by the lipoaspiration technique was equivalent. The expression of key chondrogenic and osteogenic genes was increased in cells isolated from IPFP. IPFP should be considered a high quality alternative source of ASCs.

  18. Identifying the source, transport path and sinks of sewage derived organic matter

    International Nuclear Information System (INIS)

    Mudge, Stephen M.; Duce, Caroline E.

    2005-01-01

    Since sewage discharges can significantly contribute to the contaminant loadings in coastal areas, it is important to identify sources, pathways and environmental sinks. Sterol and fatty alcohol biomarkers were quantified in source materials, suspended sediments and settling matter from the Ria Formosa Lagoon. Simple ratios between key biomarkers including 5β-coprostanol, cholesterol and epi-coprostanol were able to identify the sewage sources and affected deposition sites. Multivariate methods (PCA) were used to identify co-varying sites. PLS analysis using the sewage discharge as the signature indicated that ∼25% of the variance in the sites could be predicted by the sewage signature. A new source of sewage-derived organic matter was found with a high sewage-predictable signature. The suspended sediments had relatively low sewage signatures as the material was diluted with other organic matter from in situ production. From a management viewpoint, PLS provides a useful tool in identifying the pathways and accumulation sites for such contaminants. - Multivariate statistical analysis was used to identify pathways and accumulation sites for contaminants in coastal waters

  19. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.
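
    The dose engine described here is, at its core, a 2-D convolution of an in-air fluence map with a pencil-beam kernel. A minimal sketch of that step (synthetic fluence map and an assumed Gaussian kernel, not the measured kernel from the paper):

        import numpy as np
        from scipy.signal import fftconvolve

        # Toy in-air fluence map: an open rectangular aperture (stand-in for a
        # Jaw/MLC-shaped field; real fields include transmission and leaf effects).
        ny, nx = 128, 128
        fluence = np.zeros((ny, nx))
        fluence[32:96, 40:88] = 1.0

        # Assumed Gaussian pencil-beam kernel, normalized to unit integral.
        y, x = np.mgrid[-16:17, -16:17]
        kernel = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
        kernel /= kernel.sum()

        # Planar dose = fluence convolved with the kernel.
        dose = fftconvolve(fluence, kernel, mode="same")
        print(dose.max(), dose[64, 64])

    FFT-based convolution keeps this step fast even on fine calculation grids, which is what makes an independent planar check practical.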

  20. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2008-04-21

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.

  1. An open source business model for malaria.

    Science.gov (United States)

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  2. An open source business model for malaria.

    Directory of Open Access Journals (Sweden)

    Christine Årdal

    Full Text Available Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken, making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related

  3. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

    One may note the absence of progress in earthquake prediction research. Short-term prediction (on a diurnal period, with localisation also predicted) has practical meaning. Failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in the faults. Geological and geophysical monitoring gives the basis for the notion of the geological medium as an open block dissipative system with limit energy saturation. The variations of the volume stressed state close to critical states are associated with the interaction of the inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more expressed in the faults. In the background state, small blocks of the fault medium produce the sliding of great blocks in the faults. But for considerable variations of ascending gas streams the formation of bound chains of small blocks is possible, so that a bound state of great blocks may result (an earthquake source). Recently, using these notions, we proposed a dynamical earthquake source model based on a generalized chain of non-linear bound oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns its inhomogeneity and different external actions, imitating physical processes in the real source. Earlier, a weak inhomogeneous approximation without dissipation was considered. The latter permitted study of the FPU return (return to the initial state). Probabilistic properties in quasi-periodic movement were found. The chain decay problem due to non-linearity and external perturbations was posed. The thresholds and the dependence of the lifetime of the chain are studied. Great fluctuations of lifetimes are discovered. In the present paper a rigorous consideration of the inhomogeneous chain including dissipation is given. For the strong dissipation case, when the oscillation movements are suppressed, specific effects are discovered. For noise action and constantly arising

  4. Turbulence modeling with fractional derivatives: Derivation from first principles and initial results

    Science.gov (United States)

    Epps, Brenden; Cushman-Roisin, Benoit

    2017-11-01

    Fluid turbulence is an outstanding unsolved problem in classical physics, despite 120+ years of sustained effort. Given this history, we assert that a new mathematical framework is needed to make a transformative breakthrough. This talk offers one such framework, based upon kinetic theory tied to the statistics of turbulent transport. Starting from the Boltzmann equation and "Lévy α-stable distributions", we derive a turbulence model that expresses the turbulent stresses in the form of a fractional derivative, where the fractional order is tied to the transport behavior of the flow. Initial results are presented herein, for the cases of Couette-Poiseuille flow and 2D boundary layers. Among other results, our model is able to reproduce the logarithmic Law of the Wall in shear turbulence.
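
    Schematically (our notation; the abstract does not give the exact closure), the idea is to replace the gradient in the usual eddy-viscosity stress with a fractional derivative whose order α encodes the Lévy-flight statistics of turbulent transport:

        \tau_{\text{turb}}(y) \;\propto\; \frac{\partial^{\alpha} \bar u}{\partial y^{\alpha}}, \qquad 0 < \alpha \le 1,

    with the classical Boussinesq form recovered in the limit α → 1.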

  5. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study the simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed by the different output service schemes.

  6. A statistical model for deriving probability distributions of contamination for accidental releases

    International Nuclear Information System (INIS)

    ApSimon, H.M.; Davison, A.C.

    1986-01-01

    Results generated from a detailed long-range transport model, MESOS, simulating dispersal of a large number of hypothetical releases of radionuclides in a variety of meteorological situations over Western Europe have been used to derive a simpler statistical model, MESOSTAT. This model may be used to generate probability distributions of different levels of contamination at a receptor point 100-1000 km or so from the source (for example, across a frontier in another country) without considering individual release and dispersal scenarios. The model is embodied in a series of equations involving parameters which are determined from such factors as distance between source and receptor, nuclide decay and deposition characteristics, release duration, and geostrophic windrose at the source. Suitable geostrophic windrose data have been derived for source locations covering Western Europe. Special attention has been paid to the relatively improbable extreme values of contamination at the top end of the distribution. The MESOSTAT model and its development are described, with illustrations of its use and comparison with the original more detailed modelling techniques. (author)

  7. Heat source model for welding process

    International Nuclear Information System (INIS)

    Doan, D.D.

    2006-10-01

    One of the major industrial stakes of welding simulation relates to the control of the mechanical effects of the process (residual stress, distortions, fatigue strength...). These effects are directly dependent on the temperature evolutions imposed during the welding process. To model this thermal loading, an original method is proposed instead of the usual methods like the equivalent heat source approach or the multi-physical approach. This method is based on the estimation of the weld pool shape together with the heat flux crossing the liquid/solid interface, from experimental data measured in the solid part. Its originality consists in solving an inverse Stefan problem specific to the welding process, and it is shown how to estimate the parameters of the weld pool shape. To solve the heat transfer problem, the liquid/solid interface is modeled by a Bezier curve (2-D) or a Bezier surface (3-D). This approach is well adapted to a wide diversity of weld pool shapes met in the majority of current welding processes (TIG, MIG-MAG, Laser, FE, Hybrid). The number of parameters to be estimated is small, ranging from 2 to 5 in 2D and from 7 to 16 in 3D for the cases considered. A sensitivity study leads to specifying the location of the sensors, their number and the set of measurements required for a good estimate. The application of the method to TIG welding tests on thin stainless steel sheets, in fully penetrating and non-penetrating configurations, shows that only one measurement point is enough to estimate the various weld pool shapes in 2D, and two points in 3D, whether the penetration is full or not. In the last part of the work, a methodology is developed for the transient analysis. It is based on the Duvaut transformation, which overcomes the discontinuity at the liquid metal interface and therefore gives a continuous variable over the whole spatial domain. Moreover, it allows working on a fixed mesh grid, and the new inverse problem is equivalent to identifying a source

  8. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to apply the mean-reverting Ornstein-Uhlenbeck process to describe the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city, and the slow convergence of the HDD call price can be observed over 100,000 simulations. The methods of the research will provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
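
    A minimal sketch of that pricing pipeline, with made-up parameter values rather than the fitted Zhengzhou estimates: simulate the mean-reverting temperature day by day around a seasonal mean, accumulate heating degree days over the contract period, and discount the average payoff.

        import numpy as np

        rng = np.random.default_rng(0)

        # Ornstein-Uhlenbeck daily temperature: dT = kappa*(theta(t) - T)dt + sigma*dW.
        # kappa, sigma and the seasonal mean below are illustrative assumptions.
        kappa, sigma = 0.25, 2.0

        def seasonal_mean(day):
            return 14.0 + 12.0 * np.sin(2 * np.pi * (day - 100) / 365.0)

        def simulate_winter_hdd(n_paths=100_000, start=334, days=90, t_ref=18.0):
            hdd = np.zeros(n_paths)
            temp = seasonal_mean(start) * np.ones(n_paths)
            for d in range(start, start + days):
                temp += kappa * (seasonal_mean(d) - temp) \
                        + sigma * rng.standard_normal(n_paths)
                hdd += np.maximum(t_ref - temp, 0.0)   # heating degree days accumulate
            return hdd

        hdd = simulate_winter_hdd()
        strike, tick, r, tau = 1200.0, 20.0, 0.03, 0.25   # contract terms, assumed
        price = np.exp(-r * tau) * tick * np.maximum(hdd - strike, 0.0).mean()
        print("HDD call price:", round(price, 2))

    Averaging over many paths is exactly where the slow Monte Carlo convergence noted in the abstract shows up: the standard error shrinks only as one over the square root of the path count.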

  9. Aspects of the derivative coupling model in four dimensions

    International Nuclear Information System (INIS)

    Aste, Andreas

    2014-01-01

    A concise discussion of a 3 + 1-dimensional derivative coupling model, in which a massive Dirac field couples to the four-gradient of a massless scalar field, is given in order to elucidate the role of different concepts in quantum field theory like the regularization of quantum fields as operator-valued distributions, correlation distributions, locality, causality, and field operator gauge transformations. (orig.)

  10. Aspects of the derivative coupling model in four dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Aste, Andreas [University of Basel, Department of Physics, Basel (Switzerland); Paul Scherrer Institute, Villigen (Switzerland)

    2014-01-15

    A concise discussion of a 3 + 1-dimensional derivative coupling model, in which a massive Dirac field couples to the four-gradient of a massless scalar field, is given in order to elucidate the role of different concepts in quantum field theory like the regularization of quantum fields as operator-valued distributions, correlation distributions, locality, causality, and field operator gauge transformations. (orig.)

  11. Microscopic Derivation of the Ginzburg-Landau Model

    DEFF Research Database (Denmark)

    Frank, Rupert; Hainzl, Christian; Seiringer, Robert

    2014-01-01

    We present a summary of our recent rigorous derivation of the celebrated Ginzburg-Landau (GL) theory, starting from the microscopic Bardeen-Cooper-Schrieffer (BCS) model. Close to the critical temperature, GL arises as an effective theory on the macroscopic scale. The relevant scaling limit...

  12. Modelling ocean-colour-derived chlorophyll a

    Directory of Open Access Journals (Sweden)

    S. Dutkiewicz

    2018-01-01

    Full Text Available This article provides a proof of concept for using a biogeochemical/ecosystem/optical model with a radiative transfer component as a laboratory to explore aspects of ocean colour. We focus here on the satellite ocean colour chlorophyll a (Chl a) product provided by the often-used blue/green reflectance ratio algorithm. The model produces output that can be compared directly to the real-world ocean colour remotely sensed reflectance. This model output can then be used to produce an ocean colour satellite-like Chl a product using an algorithm linking the blue versus green reflectance similar to that used for the real world. Given that the model includes complete knowledge of the (model) water constituents, optics and reflectance, we can explore uncertainties and their causes in this proxy for Chl a (called derived Chl a in this paper). We compare the derived Chl a to the actual model Chl a field. In the model we find that the mean absolute bias due to the algorithm is 22 % between derived and actual Chl a. The real-world algorithm is found using concurrent in situ measurement of Chl a and radiometry. We ask whether increased in situ measurements to train the algorithm would improve the algorithm, and find a mixed result. There is a global overall improvement, but at the expense of some regions, especially in lower latitudes where the biases increase. Not surprisingly, we find that region-specific algorithms provide a significant improvement, at least in the annual mean. However, in the model, we find that no matter how the algorithm coefficients are found there can be a temporal mismatch between the derived Chl a and the actual Chl a. These mismatches stem from temporal decoupling between Chl a and other optically important water constituents (such as coloured dissolved organic matter and detrital matter). The degree of decoupling differs regionally and over time. For example, in many highly seasonal regions, the timing of initiation
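
    The blue/green ratio algorithm referred to here has the standard OCx form: log10 of Chl a is a polynomial in log10 of the maximum blue-to-green reflectance ratio. The sketch below uses coefficients of the kind used operationally, but they should be treated as placeholders rather than the set used in the article.

        import numpy as np

        # Placeholder OCx-style coefficients (assumed, not taken from the paper).
        coeffs = [0.3272, -2.9940, 2.7218, -1.2259, -0.5683]

        def chl_from_rrs(rrs_blue_443, rrs_blue_490, rrs_green_555):
            # Maximum band ratio: brightest blue Rrs over the green Rrs.
            ratio = max(rrs_blue_443, rrs_blue_490) / rrs_green_555
            x = np.log10(ratio)
            log_chl = sum(c * x**i for i, c in enumerate(coeffs))
            return 10.0 ** log_chl

        # Oligotrophic-like reflectances (sr^-1), illustrative values only.
        print(chl_from_rrs(0.008, 0.007, 0.003))

    In the article's setup, the same functional form is refit to model output, which is what makes algorithm bias and regional coefficient choices directly testable.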

  13. Staphylococcus aureus utilizes host-derived lipoprotein particles as sources of exogenous fatty acids.

    Science.gov (United States)

    Delekta, Phillip C; Shook, John C; Lydic, Todd A; Mulks, Martha H; Hammer, Neal D

    2018-03-26

    Methicillin-resistant Staphylococcus aureus (MRSA) is a threat to global health. Consequently, much effort has focused on the development of new antimicrobials that target novel aspects of S. aureus physiology. Fatty acids are required to maintain cell viability, and bacteria synthesize fatty acids using the type II fatty acid synthesis pathway (FASII). FASII is significantly different from human fatty acid synthesis, underscoring the therapeutic potential of inhibiting this pathway. However, many Gram-positive pathogens incorporate exogenous fatty acids, bypassing FASII inhibition and leaving the clinical potential of FASII inhibitors uncertain. Importantly, the source(s) of fatty acids available to pathogens within the host environment remains unclear. Fatty acids are transported throughout the body by lipoprotein particles in the form of triglycerides and esterified cholesterol. Thus, lipoproteins, such as low-density lipoprotein (LDL), represent a potentially rich source of exogenous fatty acids for S. aureus during infection. We sought to test the ability of LDLs to serve as a fatty acid source for S. aureus and show that cells cultured in the presence of human LDLs demonstrate increased tolerance to the FASII inhibitor, triclosan. Using mass spectrometry, we observed that host-derived fatty acids present in the LDLs are incorporated into the staphylococcal membrane and that tolerance to triclosan is facilitated by the fatty acid kinase A, FakA, and Geh, a triacylglycerol lipase. Finally, we demonstrate that human LDLs support the growth of S. aureus fatty acid auxotrophs. Together, these results suggest that human lipoprotein particles are a viable source of exogenous fatty acids for S. aureus during infection. IMPORTANCE Inhibition of bacterial fatty acid synthesis is a promising approach to combating infections caused by S. aureus and other human pathogens. However, S. aureus incorporates exogenous fatty acids into its phospholipid bilayer. Therefore, the

  14. Operational derivation of Boltzmann distribution with Maxwell's demon model.

    Science.gov (United States)

    Hosoya, Akio; Maruyama, Koji; Shikano, Yutaka

    2015-11-24

    The resolution of the Maxwell's demon paradox linked thermodynamics with information theory through information erasure principle. By considering a demon endowed with a Turing-machine consisting of a memory tape and a processor, we attempt to explore the link towards the foundations of statistical mechanics and to derive results therein in an operational manner. Here, we present a derivation of the Boltzmann distribution in equilibrium as an example, without hypothesizing the principle of maximum entropy. Further, since the model can be applied to non-equilibrium processes, in principle, we demonstrate the dissipation-fluctuation relation to show the possibility in this direction.
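
    The distribution being derived is the canonical one; for reference, its standard form (quoted as the target result, not as the paper's operational route to it) is

        p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}, \qquad \beta = \frac{1}{k_B T},

    where E_i are the energy levels and Z the partition function; the paper's contribution is reaching this form from the demon's information-erasure bookkeeping rather than from the maximum entropy postulate.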

  15. Computerized dosimetry of I-125 sources model 6711

    International Nuclear Information System (INIS)

    Isturiz, J.

    2001-01-01

    It covers: the physical presentation of the sources; radiation protection; the mathematical model of the I-125 source model 6711; the data considered for the calculation program; experimental verification of the dose distribution; exposure rate and apparent activity; techniques for the use of the I-125 sources; and the calculation planning systems [es

  16. Can pancreatic duct-derived progenitors be a source of islet regeneration?

    International Nuclear Information System (INIS)

    Xia, Bing; Zhan, Xiao-Rong; Yi, Ran; Yang, Baofeng

    2009-01-01

    The regenerative process of the pancreas is of interest because the main pathogenesis of diabetes mellitus is an inadequate number of insulin-producing β-cells. The functional mass of β-cells is decreased in type 1 diabetes, so replacing missing β-cells or triggering their regeneration may allow for improved type 1 diabetes treatment. Therefore, expansion of the β-cell mass from endogenous sources, either in vivo or in vitro, represents an area of increasing interest. The mechanism of islet regeneration remains poorly understood, but the identification of islet progenitor sources is critical for understanding β-cell regeneration. One potential source is the islet proper, via the dedifferentiation, proliferation, and redifferentiation of facultative progenitors residing within the islet. Neogenesis, whereby new pancreatic islets derive from progenitor cells present within the ducts, has been reported, but the existence and identity of the progenitor cells have been debated. In this review, we focus on pancreatic ductal cells, which are islet progenitors capable of differentiating into islet β-cells. Islet neogenesis, seen as budding of hormone-positive cells from the ductal epithelium, is considered to be one mechanism for normal islet growth after birth and in regeneration, and has suggested the presence of pancreatic stem cells. Numerous results support the neogenesis hypothesis; the evidence for the hypothesis in the adult comes primarily from morphological studies that have in common the production of damage to all or part of the pancreas, with consequent inflammation and repair. Although numerous studies support a ductal origin for new islets after birth, lineage-tracing experiments are considered the 'gold standard' of proof. Lineage-tracing experiments show that pancreatic duct cells act as progenitors, giving rise to new islets after birth and after injury. The identification of differentiated pancreatic ductal cells as an in vivo progenitor for

  17. Can pancreatic duct-derived progenitors be a source of islet regeneration?

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Bing [Department of Endocrinology, First Hospital of Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China); Zhan, Xiao-Rong, E-mail: xiaorongzhan@sina.com [Department of Endocrinology, First Hospital of Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China); Yi, Ran [Department of Endocrinology, First Hospital of Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China); Yang, Baofeng [Department of Pharmacology, State Key Laboratory of Biomedicine and Pharmacology, Harbin Medical University, Harbin, Hei Long Jiang Province 150001 (China)

    2009-06-12

    The regenerative process of the pancreas is of interest because the main pathogenesis of diabetes mellitus is an inadequate number of insulin-producing β-cells. The functional mass of β-cells is decreased in type 1 diabetes, so replacing missing β-cells or triggering their regeneration may allow for improved type 1 diabetes treatment. Therefore, expansion of the β-cell mass from endogenous sources, either in vivo or in vitro, represents an area of increasing interest. The mechanism of islet regeneration remains poorly understood, but the identification of islet progenitor sources is critical for understanding β-cell regeneration. One potential source is the islet proper, via the dedifferentiation, proliferation, and redifferentiation of facultative progenitors residing within the islet. Neogenesis, whereby new pancreatic islets derive from progenitor cells present within the ducts, has been reported, but the existence and identity of the progenitor cells have been debated. In this review, we focus on pancreatic ductal cells, which are islet progenitors capable of differentiating into islet β-cells. Islet neogenesis, seen as budding of hormone-positive cells from the ductal epithelium, is considered to be one mechanism for normal islet growth after birth and in regeneration, and has suggested the presence of pancreatic stem cells. Numerous results support the neogenesis hypothesis; the evidence for the hypothesis in the adult comes primarily from morphological studies that have in common the production of damage to all or part of the pancreas, with consequent inflammation and repair. Although numerous studies support a ductal origin for new islets after birth, lineage-tracing experiments are considered the 'gold standard' of proof. Lineage-tracing experiments show that pancreatic duct cells act as progenitors, giving rise to new islets after birth and after injury. The identification of differentiated pancreatic ductal

  18. Derivation and characterization of human fetal MSCs: an alternative cell source for large-scale production of cardioprotective microparticles.

    Science.gov (United States)

    Lai, Ruenn Chai; Arslan, Fatih; Tan, Soon Sim; Tan, Betty; Choo, Andre; Lee, May May; Chen, Tian Sheng; Teh, Bao Ju; Eng, John Kun Long; Sidik, Harwin; Tanavde, Vivek; Hwang, Wei Sek; Lee, Chuen Neng; El Oakley, Reida Menshawe; Pasterkamp, Gerard; de Kleijn, Dominique P V; Tan, Kok Hian; Lim, Sai Kiang

    2010-06-01

    The therapeutic effects of mesenchymal stem cell (MSC) transplantation are increasingly thought to be mediated by MSC secretion. We have previously demonstrated that human ESC-derived MSCs (hESC-MSCs) produce cardioprotective microparticles in a pig model of myocardial ischemia/reperfusion (MI/R) injury. As the safety and availability of clinical grade human ESCs remain a concern, MSCs from fetal tissue sources were evaluated as alternatives. Here we derived five MSC cultures from limb, kidney and liver tissues of three first trimester aborted fetuses and, like our previously described hESC-derived MSCs, they were highly expandable and had similar telomerase activities. Each line has the potential to generate at least 10^16–10^19 cells or 10^7–10^10 doses of cardioprotective secretion for a pig model of MI/R injury. Unlike previously described fetal MSCs, they did not express pluripotency-associated markers such as Oct4, Nanog or Tra1-60. They displayed a typical MSC surface antigen profile and differentiated into adipocytes, osteocytes and chondrocytes in vitro. Global gene expression analysis by microarray and qRT-PCR revealed a typical MSC gene expression profile that was highly correlated among the five fetal MSC cultures and with that of hESC-MSCs (r^2 > 0.90). Like hESC-MSCs, they produced secretion that was cardioprotective in a mouse model of MI/R injury. HPLC analysis of the secretion revealed the presence of a population of microparticles with a hydrodynamic radius of 50-65 nm. This purified population of microparticles was cardioprotective at approximately 1/10 dosage of the crude secretion. (c) 2009 Elsevier Ltd. All rights reserved.

  19. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  20. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    Science.gov (United States)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power-law is crucial. It was suggested to determine it from the field-decay due to a single source-block; alternatively it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source-model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth-weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity, and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
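
    In the usual formulation (our notation; the exponent names follow the discussion above), the depth-weighted regularized inverse problem reads

        \min_m \;\; \|W_d\,(G m - d)\|_2^2 \;+\; \lambda\,\|W_z\, m\|_2^2, \qquad W_z = \operatorname{diag}\!\big((z_j + z_0)^{\beta/2}\big),

    where G is the forward operator, d the observed field, β the depth-weighting exponent whose invariance is at issue, and λ the regularization parameter whose effect on the estimated source depth is discussed at the end of the abstract.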

  1. Gauge coupling unification in superstring derived standard-like models

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1992-11-01

    I discuss gauge coupling unification in a class of superstring standard-like models, which are derived in the free fermionic formulation. Recent calculations indicate that the superstring unification scale is at O(10^18 GeV) while the minimal supersymmetric standard model is consistent with LEP data if the unification scale is at O(10^16) GeV. A generic feature of the superstring standard-like models is the appearance of extra color triplets (D,D), and electroweak doublets (l,l), in vector-like representations, beyond the supersymmetric standard model. I show that the gauge coupling unification at O(10^18 GeV) in the superstring standard-like models can be consistent with LEP data. I present an explicit standard-like model that can realize superstring gauge coupling unification. (author)
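
    The consistency check rests on the standard one-loop renormalization-group running of the gauge couplings (a standard result, not quoted from the paper):

        \frac{1}{\alpha_i(\mu)} = \frac{1}{\alpha_i(M_U)} + \frac{b_i}{2\pi}\,\ln\frac{M_U}{\mu}, \qquad i = 1, 2, 3,

    where the extra vector-like triplets and doublets shift the beta-function coefficients b_i between their mass thresholds and M_U, which is how a unification scale near 10^18 GeV can be reconciled with the couplings measured at LEP.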

  2. Ultrasound-assisted liposuction provides a source for functional adipose-derived stromal cells.

    Science.gov (United States)

    Duscher, Dominik; Maan, Zeshaan N; Luan, Anna; Aitzetmüller, Matthias M; Brett, Elizabeth A; Atashroo, David; Whittam, Alexander J; Hu, Michael S; Walmsley, Graham G; Houschyar, Khosrow S; Schilling, Arndt F; Machens, Hans-Guenther; Gurtner, Geoffrey C; Longaker, Michael T; Wan, Derrick C

    2017-12-01

    Regenerative medicine employs human mesenchymal stromal cells (MSCs) for their multi-lineage plasticity and their pro-regenerative cytokine secretome. Adipose-derived mesenchymal stromal cells (ASCs) are concentrated in fat tissue, and the ease of harvest via liposuction makes them a particularly interesting cell source. However, there are various liposuction methods, and few have been assessed regarding their impact on ASC functionality. Here we study the impact on ASCs of the two most popular ultrasound-assisted liposuction (UAL) devices currently in clinical use, VASER (Solta Medical) and Lysonix 3000 (Mentor). After lipoaspirate harvest and processing, we sorted for ASCs using fluorescent-assisted cell sorting based on an established surface marker profile (CD34+CD31−CD45−). ASC yield, viability, osteogenic and adipogenic differentiation capacity and in vivo regenerative performance were assessed. Both UAL samples demonstrated equivalent ASC yield and viability. VASER UAL ASCs showed higher osteogenic and adipogenic marker expression, but a comparable differentiation capacity was observed. Soft tissue healing and neovascularization were significantly enhanced via both UAL-derived ASCs in vivo, and there was no significant difference between the cell therapy groups. Taken together, our data suggest that UAL allows safe and efficient harvesting of the mesenchymal stromal cellular fraction of adipose tissue and that cells harvested via this approach are suitable for cell therapy and tissue engineering applications. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  3. Hamiltonian derivation of the nonhydrostatic pressure-coordinate model

    Science.gov (United States)

    Salmon, Rick; Smith, Leslie M.

    1994-07-01

    In 1989, the Miller-Pearce (MP) model for nonhydrostatic fluid motion governed by equations written in pressure coordinates was extended by removing the prescribed reference temperature, T_s(p), while retaining the conservation laws and other desirable properties. It was speculated that this extension of the MP model had a Hamiltonian structure and that a slick derivation of the Ertel property could be constructed if the relevant Hamiltonian were known. In this note, the extended equations are derived using Hamilton's principle. The potential vorticity law arises from the usual particle-relabeling symmetry of the Lagrangian, and even the absence of sound waves is anticipated from the fact that the pressure inside the free energy G(p, θ) in the derived equation is hydrostatic and thus G is insensitive to local pressure fluctuations. The model extension is analogous to the semigeostrophic equations for nearly geostrophic flow, which do not incorporate a prescribed reference state, while the earlier MP model is analogous to the quasigeostrophic equations, which become highly inaccurate when the flow wanders from a prescribed state with nearly flat isothermal surfaces.

  4. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
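
    For reference, the empirically derived rate-and-state friction law used in such models (the standard Dieterich-Ruina form with the aging law; notation is standard, not copied from the abstract) is

        \mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},

    where V is the slip rate, θ a state variable and D_c a characteristic slip distance; patches with a − b < 0 are velocity-weakening (VW) and can nucleate events, while a − b > 0 gives the velocity-strengthening (VS) response of the surrounding fault.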

  5. 26 CFR 1.863-8 - Source of income derived from space and ocean activity under section 863(d).

    Science.gov (United States)

    2010-04-01

    ..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Regulations Applicable to Taxable... from sources without the United States to the extent the income, based on all the facts and... income derived by a CFC is income from sources without the United States to the extent the income, based...

  6. Novel family of quasi-Z-source DC/DC converters derived from current-fed push-pull converters

    DEFF Research Database (Denmark)

    Chub, Andrii; Husev, Oleksandr; Vinnikov, Dmitri

    2014-01-01

    This paper is devoted to the step-up quasi-Z-source dc/dc push-pull converter family. The topologies in the family are derived from the isolated boost converter family by replacing input inductors with the quasi-Z-source network. Two new topologies are proposed, analyzed and compared. Theoretical...

  7. Sources of present Chernobyl-derived caesium concentrations in surface air and deposition samples

    Energy Technology Data Exchange (ETDEWEB)

    Hoetzl, H.; Rosner, G.; Winkler, R. (Gesellschaft fuer Strahlen-und Umweltforschung Munich, Neuherberg (Germany). Forschungszentrum fuer Umwelt und Gesundheit Gesellschaft fuer Strahlen- und Umweltforschung mbH Muenchen, Neuherberg (Germany). Inst. fuer Strahlenschutz)

    1992-06-01

    The sources of Chernobyl-derived caesium concentrations in air and deposition samples collected from mid-1986 to end-1990 at Munich-Neuherberg, Germany, were investigated. Local resuspension has been found to be the main source. By comparison with deposition data from other locations it is estimated that within a range from 20 Bq m⁻² to 60 kBq m⁻² of initially deposited ¹³⁷Cs activity ∼2% is re-deposited by the process of local resuspension in Austria, Germany, Japan and United Kingdom, while significantly higher total resuspension is to be expected for Denmark and Finland. Stratospheric contribution to present concentrations is shown to be negligible. This is confirmed by cross correlation analysis between the time series of ¹³⁷Cs in air and precipitation before and after the Chernobyl accident and the respective time series of cosmogenic ⁷Be, which is an indicator of stratospheric input. Seasonal variations of caesium concentrations with maxima in winter were observed. (author). 32 refs.; 5 figs.; 1 tab.

  8. Sources of present Chernobyl-derived caesium concentrations in surface air and deposition samples

    International Nuclear Information System (INIS)

    Hoetzl, H.; Rosner, G.; Winkler, R.; Gesellschaft fuer Strahlen- und Umweltforschung mbH Muenchen, Neuherberg

    1992-01-01

    The sources of Chernobyl-derived caesium concentrations in air and deposition samples collected from mid-1986 to end-1990 at Munich-Neuherberg, Germany, were investigated. Local resuspension has been found to be the main source. By comparison with deposition data from other locations it is estimated that within a range from 20 Bq m⁻² to 60 kBq m⁻² of initially deposited ¹³⁷Cs activity ∼2% is re-deposited by the process of local resuspension in Austria, Germany, Japan and the United Kingdom, while significantly higher total resuspension is to be expected for Denmark and Finland. The stratospheric contribution to present concentrations is shown to be negligible. This is confirmed by cross-correlation analysis between the time series of ¹³⁷Cs in air and precipitation before and after the Chernobyl accident and the respective time series of cosmogenic ⁷Be, which is an indicator of stratospheric input. Seasonal variations of caesium concentrations with maxima in winter were observed. (author). 32 refs.; 5 figs.; 1 tab

  9. Computational model of Amersham I-125 source model 6711 and Prospera Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy treats cancer at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter known as the dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in treatment planning for prostate LDR brachytherapy. This study was based on the recommendations of the American Association of Physicists in Medicine (AAPM) produced by its Task Group 43. The results obtained were 0.941 and 0.65 for the dose rate constants of the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)
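
    For orientation, the TG-43 dose rate constant is simply the dose rate to water at the reference point (r₀ = 1 cm, θ₀ = 90°) per unit air-kerma strength; a minimal sketch with invented numbers, not values from the study:

```python
# TG-43 dose rate constant (definition sketch; the numbers are invented):
# Lambda = dose rate at (1 cm, 90 deg) in water / air-kerma strength S_K.
dose_rate_ref = 0.902      # cGy/h at the reference point, e.g. an MC tally (assumed)
air_kerma_strength = 1.0   # S_K in U = cGy*cm^2/h (assumed normalization)

Lambda = dose_rate_ref / air_kerma_strength   # cGy/(h*U)
print(f"dose rate constant = {Lambda:.3f} cGy/(h*U)")
```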

  10. Novel Thiazole Derivatives of Medicinal Potential: Synthesis and Modeling

    Directory of Open Access Journals (Sweden)

    Nour E. A. Abdel-Sattar

    2017-01-01

    This paper reports on the synthesis of new thiazole derivatives that could be profitably exploited in the medical treatment of tumors. Molecular electronic structures have been modeled within the density functional theory (DFT) framework. Reactivity indices obtained from the frontier orbital energies as well as electrostatic potential energy maps are discussed and correlated with the molecular structure. X-ray crystallographic data for one of the new compounds were measured and used to support and verify the theoretical results.

  11. Derivative Geometric Modeling of Basic Rotational Solids on CATIA

    Institute of Scientific and Technical Information of China (English)

    MENG Xiang-bao; PAN Zi-jian; ZHU Yu-xiang; LI Jun

    2011-01-01

    Hybrid models derived from rotational solids such as cylinders, cones and spheres were implemented in CATIA software. First, the isosceles triangular prism, cuboid, cylinder, cone and sphere, together with the prism with tangent conic and curved-triangle ends, the cuboid with tangent cylindrical and curved-rectangle ends, and the cylinder with tangent spherical and curved-circle ends, were used as basic Boolean difference units applied to the primary cylinders, cones and spheres under symmetric and certain critical geometric conditions, forming a series of variant solid models. Second, the same difference units were used as basic union units applied to the main cylinders, cones and spheres, forming another set of solid models. Third, the tangent ends of the union units were turned into oblique conic or cylindrical ends, or into ends with revolved triangular-pyramid, quarter-cylinder and annulus features built on sketch-based features, applied repeatedly to the main cylinders, cones and spheres, forming yet another set of solid models. These derivative models are expected to be beneficial in the structure design, hybrid modeling and finite element analysis of engineering components, as well as in comprehensive training in the spatial configuration of engineering graphics.

  12. Relativistic nuclear matter with alternative derivative coupling models

    International Nuclear Information System (INIS)

    Delfino, A.; Coelho, C.T.; Malheiro, M.

    1994-01-01

    Effective Lagrangians involving nucleons coupled to scalar and vector fields are investigated within the framework of relativistic mean-field theory. The study presents the traditional Walecka model and different kinds of scalar derivative coupling suggested by Zimanyi and Moszkowski. The incompressibility (presented in an analytical form), scalar potential, and vector potential at the saturation point of nuclear matter are compared for these models. The real optical potentials for the models are calculated, and one of the models fits the experimental curve well from -50 to 400 MeV while also giving a soft equation of state. By varying the coupling constants and keeping the saturation point of nuclear matter approximately fixed, only the Walecka model presents a first-order phase transition at finite temperature and zero density. (author)

  13. On the derivation of approximations to cellular automata models and the assumption of independence.

    Science.gov (United States)

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
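
    The independence assumption can be seen in miniature by comparing a toy proliferation automaton with its mean-field (logistic) approximation; the sketch below is our own construction, not the paper's code, and the discrepancy it prints reflects the spatial correlations that the independence assumption ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D proliferation CA: each occupied site attempts, with probability p,
# to place a daughter in a random neighbour site (no effect if occupied).
N, p, steps = 200, 0.05, 200
lattice = np.zeros(N, dtype=bool)
lattice[rng.choice(N, size=10, replace=False)] = True

density_ca = []
for _ in range(steps):
    occupied = np.flatnonzero(lattice)
    for i in occupied:
        if rng.random() < p:
            j = (i + rng.choice((-1, 1))) % N  # random neighbour (periodic)
            lattice[j] = True                  # no-op if j was already occupied
    density_ca.append(lattice.mean())

# Mean-field prediction under independence of site states: dC/dt = p*C*(1 - C).
C, density_mf = 10 / N, []
for _ in range(steps):
    C += p * C * (1 - C)
    density_mf.append(C)

print(density_ca[-1], density_mf[-1])  # gap is due to neighbour correlations
```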

  14. Evaluation of the influence of source and spatial resolution of DEMs on derivative products used in landslide mapping

    Directory of Open Access Journals (Sweden)

    Rubini Mahalingam

    2016-11-01

    Landslides are a major geohazard, which result in significant human, infrastructure, and economic losses. Landslide susceptibility mapping can help communities plan and prepare for these damaging events. Digital elevation models (DEMs) are one of the most important data-sets used in landslide hazard assessment. Despite their frequent use, limited research has been completed to date on how the DEM source and spatial resolution can influence the accuracy of the produced landslide susceptibility maps. The aim of this paper is to analyse the influence of the spatial resolution and source of DEMs on landslide susceptibility mapping. For this purpose, Advanced Spaceborne Thermal Emission and Reflection (ASTER), National Elevation Dataset (NED), and Light Detection and Ranging (LiDAR) DEMs were obtained for two study sections of approximately 140 km² in north-west Oregon. Each DEM was resampled to 10, 30, and 50 m, and slope and aspect grids were derived for each resolution, as sketched below. A set of nine spatial databases was constructed using geoinformation science (GIS) for each spatial resolution and source. Additional factors such as distance-to-river and fault maps were included. An analytical hierarchical process (AHP), a fuzzy logic model, and a likelihood ratio-AHP, representing qualitative, quantitative, and hybrid landslide mapping techniques, were used for generating landslide susceptibility maps. The results from each of the techniques were verified with Cohen's kappa index, a confusion matrix, and a validation index based on agreement with detailed landslide inventory maps. The spatial resolution of 10 m, derived from the LiDAR data-set, showed higher predictive accuracy in all three techniques used for producing landslide susceptibility maps. At a resolution of 10 m, the output maps based on NED and ASTER had higher misclassification compared to the LiDAR-based outputs. Further, the 30-m LiDAR output showed improved results over the 10-m NED and 10-m
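
    As an illustration of the slope/aspect derivation step (our sketch, not the paper's GIS workflow), finite differences on a gridded DEM yield both products, and block-averaging to a coarser cell size shows one way resolution changes the inputs to susceptibility mapping; the toy DEM here is synthetic.

```python
import numpy as np

def slope_aspect(dem: np.ndarray, cell: float):
    """Slope (deg) and aspect (deg, one common convention) from a gridded DEM."""
    dz_dy, dz_dx = np.gradient(dem, cell)          # elevation derivatives
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

rng = np.random.default_rng(2)
dem10 = rng.random((120, 120)).cumsum(axis=0)      # toy 10 m DEM (synthetic)
slope10, aspect10 = slope_aspect(dem10, cell=10.0)

# Resampling to 30 m by block-averaging smooths the surface and lowers slopes,
# one reason DEM resolution affects derivative products.
dem30 = dem10.reshape(40, 3, 40, 3).mean(axis=(1, 3))
slope30, _ = slope_aspect(dem30, cell=30.0)
print(slope10.mean(), slope30.mean())
```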

  15. Ab initio derivation of model energy density functionals

    International Nuclear Information System (INIS)

    Dobaczewski, Jacek

    2016-01-01

    I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)

  16. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  17. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    2013-01-01

    to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  18. Prebiotic Synthesis of Autocatalytic Products From Formaldehyde-Derived Sugars as the Carbon and Energy Source

    Science.gov (United States)

    Weber, Arthur L.

    2003-01-01

    Our research objective is to understand and model the chemical processes on the primitive Earth that generated the first autocatalytic molecules and microstructures involved in the origin of life. Our approach involves: (a) investigation of a model origin-of-life process named the Sugar Model, which is based on the reaction of formaldehyde-derived sugars (trioses and tetroses) with ammonia, and (b) elucidation of the constraints imposed on the chemistry of the origin of life by the fixed energies and rates of C,H,O-organic reactions under mild aqueous conditions. Recently, we demonstrated that under mild aqueous conditions the Sugar Model process yields autocatalytic products and generates organic microspherules (2-20 micron dia.) that exhibit budding, size uniformity, and chain formation. We also discovered that the sugar substrates of the Sugar Model are capable of reducing nitrite to ammonia under mild aqueous conditions. In addition, studies done in collaboration with Sandra Pizzarello (Arizona State University) revealed that chiral amino acids (including meteoritic isovaline) catalyze both the synthesis and specific handedness of chiral sugars. Our systematic survey of the energies and rates of reactions of C,H,O-organic substrates under mild aqueous conditions revealed several general principles (rules) that govern the direction and rate of organic reactions. These reactivity principles constrain the structure of chemical pathways used in the origin of life, and in modern and primitive metabolism.

  19. Studies and modeling of cold neutron sources

    International Nuclear Information System (INIS)

    Campioni, G.

    2004-11-01

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was organized along the following three axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing-down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform parametric Monte Carlo studies with CPU-time savings of up to a factor of 50. A coupling module simulating neutron guides has also been developed and implemented in the Monte Carlo code McStas. Thirdly, a complete study for the validation of the installed calculation chain. These studies focus on 3 cold sources currently in operation: SP1 of the Orphee reactor and two other sources (SFH and SFV) of the HFR at the Laue-Langevin Institute. These studies give examples of problems and methods for the design of future cold sources

  20. Quantification of source-term profiles from near-field geochemical models

    International Nuclear Information System (INIS)

    McKinley, I.G.

    1985-01-01

    A geochemical model of the near-field is described which quantitatively treats the processes of engineered barrier degradation, buffering of aqueous chemistry by solid phases, nuclide solubilization and transport through the near-field and release to the far-field. The radionuclide source-terms derived from this model are compared with those from a simpler model used for repository safety analysis. 10 refs., 2 figs., 2 tabs

  1. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component data of the MCG. The results show that a distributed source model has better accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be found.

  2. Systems biology derived source-sink mechanism of BMP gradient formation.

    Science.gov (United States)

    Zinski, Joseph; Bu, Ye; Wang, Xu; Dou, Wei; Umulis, David; Mullins, Mary C

    2017-08-09

    A morphogen gradient of Bone Morphogenetic Protein (BMP) signaling patterns the dorsoventral embryonic axis of vertebrates and invertebrates. The prevailing view in vertebrates for BMP gradient formation is through a counter-gradient of BMP antagonists, often along with ligand shuttling to generate peak signaling levels. To delineate the mechanism in zebrafish, we precisely quantified the BMP activity gradient in wild-type and mutant embryos and combined these data with a mathematical model-based computational screen to test hypotheses for gradient formation. Our analysis ruled out a BMP shuttling mechanism and a bmp transcriptionally-informed gradient mechanism. Surprisingly, rather than supporting a counter-gradient mechanism, our analyses support a fourth model, a source-sink mechanism, which relies on a restricted BMP antagonist distribution acting as a sink that drives BMP flux dorsally and gradient formation. We measured Bmp2 diffusion and found that it supports the source-sink model, suggesting a new mechanism to shape BMP gradients during development.
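
    A source-sink gradient of this kind can be illustrated with a one-dimensional reaction-diffusion sketch (our construction with invented parameters, not the authors' computational screen): ligand is produced in one region, diffuses, and is degraded by a spatially restricted sink, which drives flux toward the sink and yields a stable graded profile.

```python
import numpy as np

# 1-D source-sink morphogen gradient, explicit finite differences.
# All parameters are invented for illustration.
nx, L, D = 200, 1.0, 1e-3          # grid points, domain length, diffusivity
dx = L / nx
dt = 0.2 * dx**2 / D               # stable explicit time step
prod = np.zeros(nx); prod[:20] = 1.0     # "ventral" production region
sink = np.zeros(nx); sink[-40:] = 5.0    # "dorsal" sink (degradation rate)

B = np.zeros(nx)
for _ in range(20000):
    lap = (np.roll(B, 1) - 2 * B + np.roll(B, -1)) / dx**2
    lap[0] = (B[1] - B[0]) / dx**2        # no-flux boundaries
    lap[-1] = (B[-2] - B[-1]) / dx**2
    B += dt * (D * lap + prod - sink * B)

# Steady gradient: high near the source, decaying toward the restricted sink.
print(B[::40].round(3))
```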

  3. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...

  4. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
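
    Where the number of tracers plus the mass-balance constraint matches the number of sources, the mixing model reduces to a small linear solve; a minimal sketch for a fully determined three-source, two-tracer case with invented isotope values:

```python
import numpy as np

# Two tracers (d13C, d15N) and three sources: two mixing balances plus the
# sum-to-one constraint give three equations in three unknown fractions.
# All delta values below are invented for illustration.
sources = np.array([[-28.0,  4.0],    # d13C, d15N of source 1
                    [-20.0,  8.0],    # source 2
                    [-14.0, 14.0]])   # source 3
mixture = np.array([-20.6, 8.6])      # observed mixture signature

A = np.vstack([sources.T, np.ones(3)])     # rows: d13C balance, d15N balance, sum(f)=1
b = np.append(mixture, 1.0)
f = np.linalg.solve(A, b)                  # source contribution fractions
print(f, f.sum())                          # -> [0.3 0.4 0.3], 1.0
```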

  5. Applying inversion techniques to derive source currents and geoelectric fields for geomagnetically induced current calculations

    Directory of Open Access Journals (Sweden)

    J. S. de Villiers

    2014-10-01

    This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, the ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC) in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east-west along a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components, and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg-Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary modelling and inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of the simulated values. This technique has applications for modelling the currents of electrojets at the equatorial and auroral regions, as well as currents in the magnetosphere.
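
    The inversion idea can be sketched with scipy's Levenberg-Marquardt driver fitting the strength, height and surface position of a single overhead line current to ground magnetic data; the plain Biot-Savart field of a line current stands in for the paper's elementary-field Fourier integrals, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi

def forward(params, x):
    """Ground B-field of an infinite east-west line current (stand-in model)."""
    I, h, x0 = params
    r2 = (x - x0) ** 2 + h ** 2
    Bx = MU0 * I / (2 * np.pi) * h / r2         # horizontal (north) component
    Bz = MU0 * I / (2 * np.pi) * (x - x0) / r2  # vertical (down) component
    return np.concatenate([Bx, Bz])

x_obs = np.linspace(-200e3, 200e3, 41)          # observation points (m)
true = (1e6, 110e3, 30e3)                       # 1 MA electrojet at 110 km (assumed)
data = forward(true, x_obs)                     # simulated "measurements"

fit = least_squares(lambda p: forward(p, x_obs) - data,
                    x0=(5e5, 90e3, 0.0), method="lm")
print(fit.x)   # recovers current strength, height and surface position
```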

  6. UV Stellar Distribution Model for the Derivation of Payload

    Directory of Open Access Journals (Sweden)

    Young-Jun Choi

    1999-12-01

    We present the results of a model calculation of the stellar distribution in a UV band centered at 2175 Å, corresponding to the well-known bump in the interstellar extinction curve. The stellar distribution model used here is based on the Bahcall-Soneira galaxy model (1980). The source code for the model calculation was designed by Brosch (1991) and modified to investigate various design factors for a UV satellite payload. The model predicts UV stellar densities in different sky directions, and its results are compared with the TD-1 star counts for a number of sky regions. From this study, we can determine the field of view, size of optics, angular resolution, and number of stars in one orbit. These will provide the basic constraints in designing a satellite payload for UV observations.

  7. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  8. Variability in physical contamination assessment of source segregated biodegradable municipal waste derived composts.

    Science.gov (United States)

    Echavarri-Bravo, Virginia; Thygesen, Helene H; Aspray, Thomas J

    2017-01-01

    Physical contaminants (glass, metal, plastic and 'other') and stones were isolated and categorised from three finished commercial composts derived from source-segregated biodegradable municipal waste (BMW). A subset of the identified physical contaminant fragments were subsequently reintroduced into the cleaned compost samples and sent to three commercial laboratories for testing in an inter-laboratory trial using the current PAS100:2011 method (AfOR MT PC&S). The trial showed that the 'other' category caused difficulty for all three laboratories, with under-reporting, particularly of the most common 'other' contaminants (paper and cardboard), and over-reporting of non-man-made fragments. One laboratory under-reported metal contaminant fragments (spiked as silver foil) in three samples. Glass, plastic and stones were variably under-reported due to misclassification, or over-reported due to contamination with compost (organic) fragments. The results are discussed in the context of global physical contaminant test methods and compost quality assurance schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Interspecific transfer of pyrrolizidine alkaloids: An unconsidered source of contaminations of phytopharmaceuticals and plant derived commodities.

    Science.gov (United States)

    Nowak, Melanie; Wittke, Carina; Lederer, Ines; Klier, Bernhard; Kleinwächter, Maik; Selmar, Dirk

    2016-12-15

    Many plant-derived commodities contain traces of toxic pyrrolizidine alkaloids (PAs). The main source of these contaminations seems to be the accidental co-harvest of PA-containing weeds. Yet, based on the insights of the newly described phenomenon of the horizontal transfer of natural products, it is very likely that the PA contaminations may also be due to an uptake of the alkaloids from the soil, having previously been leached out from rotting PA plants. The transfer of PAs was investigated using various herbs which had been mulched with dried plant material from Senecio jacobaea. All of the acceptor plants exhibited marked concentrations of PAs. The extent and composition of the imported PAs were dependent on the acceptor plant species. These results demonstrate that PAs indeed are leached out from dried Senecio material into the soil, and confirm their uptake by the roots of the acceptor plants and their translocation into the leaves. Copyright © 2016. Published by Elsevier Ltd.

  10. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  11. Yukawa couplings in Superstring derived Standard-like models

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1991-01-01

    I discuss Yukawa couplings in Standard-like models which are derived from superstrings in the free fermionic formulation. I introduce new notation for the construction of these models. I show how the choice of boundary conditions selects a trilinear Yukawa coupling either for the +2/3 charged quark or for the -1/3 charged quark. I prove this selection rule. I make the conjecture that in this class of standard-like models a possible connection may exist between the requirements of F and D flatness at the string level and the heaviness of the top quark relative to the lighter quarks and leptons. I discuss how the choice of boundary conditions determines the non-vanishing mass terms at quartic order. I discuss the implications for the mass of the top quark. (author)

  12. Multi-factor energy price models and exotic derivatives pricing

    Science.gov (United States)

    Hikspoors, Samuel

    The high pace at which many of the world's energy markets have gradually been opened to competition has generated a significant amount of new financial activity. Academicians and practitioners alike have recently started to develop the tools of energy derivatives pricing/hedging as a quantitative topic of its own. The energy contract structures, as well as their underlying asset properties, set the energy risk management industry apart from its more standard equity and fixed income counterparts. This thesis contributes to these broad market developments by participating in the advances of the mathematical tools aiming at a better theory of energy contingent claim pricing/hedging. We propose many realistic two-factor and three-factor models for spot and forward price processes that generalize some well-known and standard modeling assumptions. We develop the associated pricing methodologies and propose stable calibration algorithms that motivate the application of the relevant modeling schemes.
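
    As a generic illustration of the two-factor family such work builds on (not the thesis's own model), a Schwartz-Smith-type log-spot with a mean-reverting short-term factor and a drifting long-term factor can be simulated in a few lines; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-factor commodity spot model: S_t = exp(x_t + xi_t), where x is a
# mean-reverting (OU) short-term deviation and xi an arithmetic long-term
# factor. Parameter values are illustrative assumptions.
kappa, sigma_x = 1.5, 0.5      # short-term mean reversion speed and volatility
mu, sigma_xi = 0.03, 0.15      # long-term drift and volatility
dt, n = 1 / 252, 252           # daily steps over one year

x, xi = 0.0, np.log(50.0)      # start at spot = 50
path = []
for _ in range(n):
    x += -kappa * x * dt + sigma_x * np.sqrt(dt) * rng.standard_normal()
    xi += mu * dt + sigma_xi * np.sqrt(dt) * rng.standard_normal()
    path.append(np.exp(x + xi))

print(f"final spot: {path[-1]:.2f}")
```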

  13. Quasistatic modelling of the coaxial slow source

    International Nuclear Information System (INIS)

    Hahn, K.D.; Pietrzyk, Z.A.; Vlases, G.C.

    1986-01-01

    A new 1-D Lagrangian MHD numerical code in flux coordinates has been developed for the Coaxial Slow Source (CSS) geometry. It utilizes the quasistatic approximation so that the plasma evolves as a succession of equilibria. The P = P(ψ) equilibrium constraint, along with the assumption of infinitely fast axial temperature relaxation on closed field lines, is incorporated. An axially elongated, rectangular plasma is assumed. The axial length is adjusted by the global average condition, or assumed to be fixed. In this paper predictions obtained with the code, and a limited amount of comparison with experimental data, are presented

  14. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    Science.gov (United States)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy, utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability both to predict the 40 GHz attenuation from the disdrometer and the 20 GHz time series and to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
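
    The DSD-to-attenuation step has the standard form k = 4.343 x 10^3 * integral of sigma_ext(D) N(D) dD in dB/km; the sketch below uses a Marshall-Palmer DSD and a placeholder power-law extinction cross section where Mie or T-matrix values would go, so the printed number is illustrative only.

```python
import numpy as np

# Specific attenuation from a drop size distribution (illustrative sketch).
D = np.linspace(0.1e-3, 6e-3, 60)       # drop diameters (m)
N0, Lam = 8e6, 2.0e3                    # Marshall-Palmer DSD parameters (1/m^4, 1/m)
N = N0 * np.exp(-Lam * D)               # N(D): drops per unit volume per diameter

def sigma_ext(D_m):
    # Placeholder extinction cross section (m^2); in practice this comes from
    # a Mie or T-matrix computation at the frequency of interest.
    return 1.5e-1 * D_m ** 3.2

# k [dB/km] = 4.343e3 * integral sigma_ext(D) N(D) dD  (SI units inside)
dD = D[1] - D[0]
k = 4.343e3 * np.sum(sigma_ext(D) * N) * dD
print(f"specific attenuation ~ {k:.2f} dB/km (illustrative)")
```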

  15. CHROMOPHORIC DISSOLVED ORGANIC MATTER (CDOM) DERIVED FROM DECOMPOSITION OF VARIOUS VASCULAR PLANT AND ALGAL SOURCES

    Science.gov (United States)

    Chromophoric dissolved organic (CDOM) in aquatic environments is derived from the microbial decomposition of terrestrial and microbial organic matter. Here we present results of studies of the spectral properties and photoreactivity of the CDOM derived from several organic matter...

  16. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  17. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  18. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a better chance of rectifying the local source-location bias existing in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
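
    The re-weighting idea that CMOSS builds on can be sketched with a few lines of FOCUSS-style iteration on a toy underdetermined problem; this is our illustration of the generic strategy (without the neighbor term that distinguishes CMOSS), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy underdetermined source problem: 20 sensors, 100 candidate source points.
m, n = 20, 100
L = rng.standard_normal((m, n))     # lead-field / gain matrix (synthetic)
x_true = np.zeros(n); x_true[[30, 70]] = (1.0, -0.8)
b = L @ x_true                      # "measurements"

# FOCUSS-style iterative re-weighting: each iteration's weight is built from
# the previous solution, concentrating energy onto few source points.
x = np.ones(n)
for _ in range(30):
    W = np.diag(np.abs(x) + 1e-12)          # re-weight by previous solution
    Lw = L @ W
    x = W @ np.linalg.pinv(Lw) @ b          # minimum-norm step in weighted space

print(np.flatnonzero(np.abs(x) > 1e-3))     # should recover indices 30 and 70
```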

  19. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    Science.gov (United States)

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources, along with collimated laser illumination, demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than a linear approximation to the filament source was used in the modeling.

  20. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  1. Comparing pharmacophore models derived from crystallography and NMR ensembles

    Science.gov (United States)

    Ghanakota, Phani; Carlson, Heather A.

    2017-11-01

    NMR and X-ray crystallography are the two most widely used methods for determining protein structures. Our previous study examining NMR versus X-ray sources of protein conformations showed improved performance with NMR structures when used in our Multiple Protein Structures (MPS) method for receptor-based pharmacophores (Damm, Carlson, J Am Chem Soc 129:8225-8235, 2007). However, that work was based on a single test case, HIV-1 protease, because of the rich data available for that system. New data for more systems are available now, which calls for further examination of the effect of different sources of protein conformations. The MPS technique was applied to Growth factor receptor bound protein 2 (Grb2), Src SH2 homology domain (Src-SH2), FK506-binding protein 1A (FKBP12), and Peroxisome proliferator-activated receptor-γ (PPAR-γ). Pharmacophore models from both crystal and NMR ensembles were able to discriminate between high-affinity, low-affinity, and decoy molecules. As we found in our original study, NMR models showed optimal performance when all elements were used. The crystal models had more pharmacophore elements compared to their NMR counterparts. The crystal-based models exhibited optimum performance only when pharmacophore elements were dropped. This supports our assertion that the higher flexibility in NMR ensembles helps focus the models on the most essential interactions with the protein. Our studies suggest that the "extra" pharmacophore elements seen at the periphery in X-ray models arise as a result of decreased protein flexibility and make very little contribution to model performance.

  2. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  3. Estimation of element deposition derived from road traffic sources by using mosses

    International Nuclear Information System (INIS)

    Zechmeister, H.G.; Hohenwallner, D.; Riss, A.; Hanus-Illnar, A.

    2005-01-01

    Sixty moss samples were taken along transects of nine roads in Austria. The concentrations of 17 elements in four moss species were determined. There was a high correlation between several elements, such as Cu/Sb (0.906), Ni/Co (0.897) or Cr/V (0.898), indicating a common traffic-related source. Enrichment factors were calculated, showing the highest enrichment levels for Cr, Mo, Sb, Zn, As, Fe, V, Cu, Ni, and Co. For these elements, road traffic has to be assumed as a source, which is confirmed by a significant negative correlation of the concentrations in mosses with the distance from the road for most of these metals. The rate of decrease followed a log-shaped curve at most of the investigated transects, although the decline cannot be explained by a single model. Multiple regression analysis highlighted traffic density, distance from the road, and elevation of the road as the factors with the greatest influence on the deposition of the investigated elements. Heavy duty vehicles (HDVs) and light duty vehicles (LDVs) showed different patterns. A comparison of sites likely to be influenced by traffic emissions with average values for the respective regions showed no significant differences for road distances of more than 250 m. Nevertheless, at heavily frequented roads, elevated deposition of some elements was found even at a distance of 1000 m. - Cr, Mo, Sb, Zn, As, Fe, V, Cu, Ni, and Co were identified as road traffic emissions and were mainly deposited within a distance of 250 m from major roads

  4. Modeling Group Interactions via Open Data Sources

    Science.gov (United States)

    2011-08-30

    data. The state-of-the-art search engines are designed to support general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ

  5. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  6. Neural assembly models derived through nano-scale measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Hongyou; Branda, Catherine; Schiek, Richard Louis; Warrender, Christina E.; Forsythe, James Chris

    2009-09-01

    This report summarizes the accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that nanoprobes could be engineered that were biocompatible, could be biofunctionalized, and responded within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed, and models were incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion on which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.

  7. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors now in orbit. Increasing computer power allows the processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations, and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling are shrinking towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged for simple planar finite

  8. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories, and, together with other techniques, to interpret transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  9. Multimorbidity in Australia: Comparing estimates derived using administrative data sources and survey data.

    Directory of Open Access Journals (Sweden)

    Sanja Lujic

    Full Text Available Estimating multimorbidity (presence of two or more chronic conditions using administrative data is becoming increasingly common. We investigated (1 the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2 characteristics of people with multimorbidity ascertained using different data sources; and (3 whether the same individuals are classified as multimorbid using different data sources.Baseline survey data for 90,352 participants of the 45 and Up Study-a cohort study of residents of New South Wales, Australia, aged 45 years and over-were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference with claims and hospital data were examined using sensitivity (Sn, positive predictive value (PPV, and kappa (κ.The characteristics of people classified as multimorbid were compared using logistic regression modelling.Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%. The prevalence of multimorbidity was highest using self-report data (37.4%, followed by claims data (36.1% and hospital data (19.3%. Combining all three datasets identified a total of 46 683 (52% people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data.Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement between self-report and administrative

  10. Multimorbidity in Australia: Comparing estimates derived using administrative data sources and survey data.

    Science.gov (United States)

    Lujic, Sanja; Simpson, Judy M; Zwar, Nicholas; Hosseinzadeh, Hassan; Jorm, Louisa

    2017-01-01

    Estimating multimorbidity (presence of two or more chronic conditions) using administrative data is becoming increasingly common. We investigated (1) the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2) characteristics of people with multimorbidity ascertained using different data sources; and (3) whether the same individuals are classified as multimorbid using different data sources. Baseline survey data for 90,352 participants of the 45 and Up Study-a cohort study of residents of New South Wales, Australia, aged 45 years and over-were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference) with claims and hospital data were examined using sensitivity (Sn), positive predictive value (PPV), and kappa (κ).The characteristics of people classified as multimorbid were compared using logistic regression modelling. Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%). The prevalence of multimorbidity was highest using self-report data (37.4%), followed by claims data (36.1%) and hospital data (19.3%). Combining all three datasets identified a total of 46 683 (52%) people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data. Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement between self-report and administrative data
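
    The agreement statistics used here come straight from a 2x2 table of administrative against self-report (reference) ascertainment; a minimal sketch with invented counts:

```python
# Sensitivity, PPV and Cohen's kappa from a 2x2 agreement table.
# Counts are invented for illustration, not taken from the study.
tp, fp, fn, tn = 720, 80, 180, 9020   # admin+/ref+, admin+/ref-, admin-/ref+, admin-/ref-

n = tp + fp + fn + tn
sensitivity = tp / (tp + fn)                      # Sn
ppv = tp / (tp + fp)                              # positive predictive value
po = (tp + tn) / n                                # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"Sn = {sensitivity:.2f}, PPV = {ppv:.2f}, kappa = {kappa:.2f}")
```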

  11. Effect of calcium source on structure and properties of sol-gel derived bioactive glasses.

    Science.gov (United States)

    Yu, Bobo; Turdean-Ionescu, Claudia A; Martin, Richard A; Newport, Robert J; Hanna, John V; Smith, Mark E; Jones, Julian R

    2012-12-18

    The aim was to determine the most effective calcium precursor for synthesis of sol-gel hybrids and for improving homogeneity of sol-gel bioactive glasses. Sol-gel derived bioactive calcium silicate glasses are one of the most promising materials for bone regeneration. Inorganic/organic hybrid materials, which are synthesized by incorporating a polymer into the sol-gel process, have also recently been produced to improve toughness. Calcium nitrate is conventionally used as the calcium source, but it has several disadvantages. Calcium nitrate causes inhomogeneity by forming calcium-rich regions, and it requires high temperature treatment (>400 °C) for calcium to be incorporated into the silicate network. Nitrates are also toxic and need to be burnt off. Calcium nitrate therefore cannot be used in the synthesis of hybrids, as the highest temperature used in the process is typically 40-60 °C. Therefore, a different precursor is needed that can incorporate calcium into the silica network and enhance the homogeneity of the glasses at low (room) temperature. In this work, calcium methoxyethoxide (CME) was used to synthesize sol-gel bioactive glasses with a range of final processing temperatures from 60 to 800 °C. Comparison is made between the use of CME, calcium chloride, and calcium nitrate. Using advanced probe techniques, the temperature at which Ca is incorporated into the network was identified for 70S30C (70 mol% SiO₂, 30 mol% CaO) for each of the calcium precursors. When CaCl₂ was used, the Ca did not seem to enter the network at any of the temperatures used. In contrast, Ca from CME entered the silica network at room temperature, as confirmed by X-ray diffraction, ²⁹Si magic angle spinning nuclear magnetic resonance spectroscopy, and dissolution studies. CME should be used in preference to calcium salts for hybrid synthesis and may improve the homogeneity of sol-gel glasses.

  12. Structure activity relationships of quinoxalin-2-one derivatives as platelet-derived growth factor-beta receptor (PDGFbeta R) inhibitors, derived from molecular modeling.

    Science.gov (United States)

    Mori, Yoshikazu; Hirokawa, Takatsugu; Aoki, Katsuyuki; Satomi, Hisanori; Takeda, Shuichi; Aburada, Masaki; Miyamoto, Ken-ichi

    2008-05-01

    We previously reported a quinoxalin-2-one compound (Compound 1) that had inhibitory activity equivalent to existing platelet-derived growth factor-β receptor (PDGFβR) inhibitors. Lead optimization of Compound 1 to increase its activity and selectivity, using structural information regarding PDGFβR-ligand interactions, is urgently needed. Here we present models of the PDGFβR kinase domain complexed with quinoxalin-2-one derivatives. The models were constructed using comparative modeling, molecular dynamics (MD) and ligand docking. In particular, conformations derived from MD, and ligand binding site information represented by alpha-spheres in the pre-docking processing, allowed us to identify optimal protein structures for docking of the target ligands. By carrying out molecular modeling and MD of PDGFβR in its inactive state, we obtained two structural models having good Compound 1 binding potentials. In order to distinguish the optimal candidate, we evaluated the structure-activity relationships (SAR) between the ligand-binding free energies and inhibitory activity values (IC50 values) for the available quinoxalin-2-one derivatives. Consequently, a final model with a high SAR correlation was identified. This model included a molecular interaction between the hydrophobic pocket behind the ATP binding site and the substitution region of the quinoxalin-2-one derivatives. These findings should prove useful in the lead optimization of quinoxalin-2-one derivatives as PDGFβR inhibitors.

  13. Deriving a model for influenza epidemics from historical data.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lefantzi, Sophia

    2011-09-01

    In this report we describe how we create a model for influenza epidemics from historical data collected from both civilian and military societies. We derive the model when the population of the society is unknown but the size of the epidemic is known. Our interest lies in estimating a time-dependent infection rate to within a multiplicative constant. The model form fitted is chosen for its similarity to published models for HIV and plague, enabling application of Bayesian techniques to discriminate among infectious agents during an emerging epidemic. We have developed models for the progression of influenza in human populations. The model is framed as an integral, and predicts the number of people who exhibit symptoms and seek care over a given time period. The start and end of the time period form the limits of integration. The disease progression model, in turn, contains parameterized models for the incubation period and a time-dependent infection rate. The incubation period model is obtained from the literature, and the parameters of the infection rate are fitted from historical data including both military and civilian populations. The calibrated infection rate models display a marked difference between the 1918 Spanish Influenza pandemic and both the influenza seasons in the US between 2001 and 2008 and the progression of H1N1 in Catalunya, Spain. The data for the 1918 pandemic were obtained from military populations, while the rest are country-wide or province-wide data from the twenty-first century. We see that the initial growth of infection was about the same in all cases; however, military populations were able to control the epidemic much faster, i.e., the decay of the infection-rate curve is much steeper. It is not clear whether this was because of the much higher level of organization present in a military society or the seriousness with which the 1918 pandemic was addressed. Each outbreak to which the influenza model was fitted yields a separate set of

  14. 26 CFR 1.863-9 - Source of income derived from communications activity under section 863(a), (d), and (e).

    Science.gov (United States)

    2010-04-01

    ... SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Regulations Applicable... business within the United States is income from sources within the United States to the extent the income... taxpayer is paid to transmit the communication. Income derived by a United States or foreign person from...

  15. Derivation of Event-B Models from OWL Ontologies

    Directory of Open Access Journals (Sweden)

    Alkhammash Eman H.

    2016-01-01

    Full Text Available The derivation of formal specifications from large and complex requirements is a key challenge in systems engineering. In this paper we present an approach that aims to address this challenge by building formal models from OWL ontologies. An ontology is used in the field of knowledge representation to capture a clear view of the domain and to produce a concise and unambiguous set of domain requirements. We harness the power of ontologies to handle inconsistency of domain requirements and produce a clear, concise and unambiguous set of domain requirements for Event-B modelling. The proposed approach works by generating Attempto Controlled English (ACE) from the OWL ontology and then maps the ACE requirements to develop Event-B models. ACE is a subset of English that can be unambiguously translated into first-order logic. There is an injective mapping between OWL ontology and a subset of ACE. ACE is a suitable interlingua for producing the mapping between OWL and Event-B models for many reasons. Firstly, ACE is easy to learn and understand; it hides the math of OWL and is natural for everybody to use. Secondly, ACE has a parser that converts ACE texts into Discourse Representation Structures (DRS). Finally, ACE can be extended to target a richer syntactic subset of Event-B, which ultimately would facilitate the translation of ACE requirements to Event-B.

  16. Comparison of human adipose-derived stem cells and bone marrow-derived stem cells in a myocardial infarction model

    DEFF Research Database (Denmark)

    Rasmussen, Jeppe; Frøbert, Ole; Holst-Hansen, Claus

    2014-01-01

    Background: Treatment of myocardial infarction with bone marrow-derived mesenchymal stem cells and recently also adipose-derived stem cells has shown promising results. In contrast to clinical trials and their use of autologous bone marrow-derived cells from the ischemic patient, the animal...... myocardial infarction models are often using young donors and young, often immune-compromised, recipient animals. Our objective was to compare bone marrow-derived mesenchymal stem cells with adipose-derived stem cells from an elderly ischemic patient in the treatment of myocardial infarction, using a fully...... grown non-immunecompromised rat model. Methods: Mesenchymal stem cells were isolated from adipose tissue and bone marrow and compared with respect to surface markers and proliferative capability. To compare the regenerative potential of the two stem cell populations, male Sprague-Dawley rats were...

  17. Bone marrow-derived stromal cells are more beneficial cell sources for tooth regeneration compared with adipose-derived stromal cells.

    Science.gov (United States)

    Ye, Lanfeng; Chen, Lin; Feng, Fan; Cui, Junhui; Li, Kaide; Li, Zhiyong; Liu, Lei

    2015-10-01

    Tooth loss is presently a global epidemic and tooth regeneration is thought to be a feasible and ideal treatment approach. Choice of cell source is a primary concern in tooth regeneration. In this study, the odontogenic differentiation potential of two non-dental-derived stem cells, adipose-derived stromal cells (ADSCs) and bone marrow-derived stromal cells (BMSCs), were evaluated both in vitro and in vivo. ADSCs and BMSCs were induced in vitro in the presence of tooth germ cell-conditioned medium (TGC-CM) prior to implantation into the omentum majus of rats, in combination with inactivated dentin matrix (IDM). Real-time quantitative polymerase chain reaction (RT-qPCR) was used to detect the mRNA expression levels of odontogenic-related genes. Immunofluorescence and immunohistochemical assays were used to detect the protein levels of odontogenic-specific genes, such as DSP and DMP-1 both in vitro and in vivo. The results suggest that both ADSCs and BMSCs have odontogenic differentiation potential. However, the odontogenic potential of BMSCs was greater compared with ADSCs, showing that BMSCs are a more appropriate cell source for tooth regeneration. © 2015 International Federation for Cell Biology.

  18. How organic carbon derived from multiple sources contributes to carbon sequestration processes in a shallow coastal system?

    Science.gov (United States)

    Watanabe, Kenta; Kuwae, Tomohiro

    2015-04-16

    Carbon captured by marine organisms helps sequester atmospheric CO2, especially in shallow coastal ecosystems, where rates of primary production and burial of organic carbon (OC) from multiple sources are high. However, linkages between the dynamics of OC derived from multiple sources and carbon sequestration are poorly understood. We investigated the origin (terrestrial, phytobenthos derived, and phytoplankton derived) of particulate OC (POC) and dissolved OC (DOC) in the water column and sedimentary OC using elemental, isotopic, and optical signatures in Furen Lagoon, Japan. Based on these data, we explored how OC from multiple sources contributes to sequestration via storage in sediments, water column sequestration, and air-sea CO2 exchanges, and analyzed how the contributions vary with salinity in a shallow seagrass meadow as well. The relative contribution of terrestrial POC in the water column decreased with increasing salinity, whereas autochthonous POC increased in the salinity range 10-30. Phytoplankton-derived POC dominated the water column POC (65-95%) within this salinity range; however, it was minor in the sediments (3-29%). In contrast, terrestrial and phytobenthos-derived POC were relatively minor contributors in the water column but were major contributors in the sediments (49-78% and 19-36%, respectively), indicating that terrestrial and phytobenthos-derived POC were selectively stored in the sediments. Autochthonous DOC, part of which can contribute to long-term carbon sequestration in the water column, accounted for >25% of the total water column DOC pool in the salinity range 15-30. Autochthonous OC production decreased the concentration of dissolved inorganic carbon in the water column and thereby contributed to atmospheric CO2 uptake, except in the low-salinity zone. Our results indicate that shallow coastal ecosystems function not only as transition zones between land and ocean but also as carbon sequestration filters. They

  19. From salmon to shad: Shifting sources of marine-derived nutrients in the Columbia River Basin

    Science.gov (United States)

    Haskell, Craig A.

    2018-01-01

    Like Pacific salmon (Oncorhynchus spp.), nonnative American shad (Alosa sapidissima) have the potential to convey large quantities of nutrients between the Pacific Ocean and freshwater spawning areas in the Columbia River Basin (CRB). American shad are now the most numerous anadromous fish in the CRB, yet the magnitude of the resulting nutrient flux owing to the shift from salmon to shad is unknown. Nutrient flux models revealed that American shad conveyed over 15,000 kg of nitrogen (N) and 3,000 kg of phosphorus (P) annually to John Day Reservoir, the largest mainstem reservoir in the lower Columbia River. Shad were net importers of N, with juveniles and postspawners exporting just 31% of the N imported by adults. Shad were usually net importers of P, with juveniles and postspawners exporting 46% of the P imported by adults on average. American shad contributed fewer marine-derived nutrients than salmon owing to their smaller size. Given the relatively high background P levels and low retention times in lower Columbia River reservoirs, it is unlikely that shad marine-derived nutrients affect nutrient balances or food web productivity through autotrophic pathways. However, a better understanding of shad spawning aggregations in the CRB is needed.

  20. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)
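
    A minimal numerical sketch of the kind of two-component prior described above (not the authors' exact densities; the Laplace/Gaussian choice, weights, and scales here are illustrative assumptions):

        import numpy as np

        def log_mixture_prior(q, w_focal=0.5, b_focal=1.0, sigma_dist=5.0):
            # Two-component mixture over source amplitudes q: a narrow Laplace
            # density favouring sparse/focal activity, and a broad Gaussian
            # favouring spatially distributed activity (clutter).
            laplace = np.exp(-np.abs(q) / b_focal) / (2.0 * b_focal)
            gauss = (np.exp(-0.5 * (q / sigma_dist) ** 2)
                     / (sigma_dist * np.sqrt(2.0 * np.pi)))
            return np.sum(np.log(w_focal * laplace + (1.0 - w_focal) * gauss))

        # A single large amplitude among small ones (a focal pattern) is not
        # heavily penalized, because the Laplace component accommodates it.
        print(log_mixture_prior(np.array([0.1, 8.0, 0.05])))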

  1. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

    A phenomenologically based seismic source model is important in quantifying the important physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transport seismic coupling experience at one test site to another. These same characterizations in a non-proliferation environment find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which, when convolved with the propagation path effects, produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions and replaces the source with a set of force moments, the first degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region are critical to a unique physical understanding of the equivalent elastic source function.

  2. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of a groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of a pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration
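
    To make the objective-minimization step concrete, the sketch below fits hypothetical source parameters (location, mass, release time) by least squares. It stands in for the paper's model with a textbook 1-D advection-dispersion solution and omits the ANN lag-time component; all names and values are illustrative assumptions:

        import numpy as np
        from scipy.optimize import differential_evolution

        def simulate(x_obs, t_obs, x_src, mass, t_release, D=1.0, v=0.5):
            # 1-D advection-dispersion solution for an instantaneous point source
            tau = np.where(t_obs - t_release > 0, t_obs - t_release, np.nan)
            c = (mass / np.sqrt(4.0 * np.pi * D * tau)
                 * np.exp(-(x_obs - x_src - v * tau) ** 2 / (4.0 * D * tau)))
            return np.nan_to_num(c)    # zero concentration before release

        # Synthetic "observed" data generated from a known source
        x_obs = np.array([10.0, 15.0, 20.0])
        t_obs = np.array([30.0, 30.0, 30.0])
        c_obs = simulate(x_obs, t_obs, x_src=2.0, mass=100.0, t_release=5.0)

        def objective(p):
            # Sum of squared differences between observed and simulated concentrations
            x_src, mass, t_release = p
            return np.sum((c_obs - simulate(x_obs, t_obs, x_src, mass, t_release)) ** 2)

        res = differential_evolution(objective, bounds=[(0, 5), (10, 500), (0, 20)], seed=1)
        print(res.x)    # recovered location, mass, release time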

  3. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  4. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  5. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    Science.gov (United States)

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.
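
    The decoy-ranking idea can be sketched as follows: given empirical per-residue sensitivity scores and the residue depths computed from each candidate model, rank decoys by the strength of the correlation. The data here are random stand-ins, not CcdB values:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        rank_score = rng.random(101)            # empirical sensitivity, one value per residue
        decoy_depths = rng.random((50, 101))    # residue depth in each of 50 decoy models

        # Decoys whose depth profile correlates best with the phenotypic data rank highest
        corrs = [spearmanr(rank_score, depths).correlation for depths in decoy_depths]
        best = int(np.argmax(corrs))
        print(best, round(corrs[best], 3))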

  6. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method significantly improves algorithm testing over a large test set.

  7. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used for characterizing earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small earthquakes, at once and compare them with each other. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, with the two regimes separated by a corner frequency. The corner frequency has often been converted to the stress drop under the assumption of circular crack models. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016] thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which will affect the seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where the shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at a short distance from the source, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of the deviation from the standard omega-square model, the update of the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
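
    For reference, the standard omega-square displacement spectrum and a generalized form with a variable high-frequency falloff rate can be written in a few lines; the exact parameterization of the deviations discussed above varies between studies, so this is only a schematic form:

        import numpy as np

        def source_spectrum(f, omega0=1.0, fc=2.0, n=2.0):
            # Flat at f << fc, falls off as f**-n at f >> fc;
            # n = 2 recovers the standard omega-square model.
            return omega0 / (1.0 + (f / fc) ** n)

        f = np.logspace(-1, 2, 7)
        print(source_spectrum(f))          # omega-square model
        print(source_spectrum(f, n=3.0))   # steeper high-frequency falloff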

  8. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial diameter and thin profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  9. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in FDTD formulation has a significant impact on the method performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposite demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, when the simulation is intensively repeated in the process of device parameter optimization. The optimized source models proposed here are realized and tested within an in-house developed FDTD simulation environment.

  10. Stem Cells Derived from Amniotic Fluid: A Potential Pluripotent-Like Cell Source for Cellular Therapy?

    Science.gov (United States)

    Ramasamy, Thamil Selvee; Velaithan, Vithya; Yeow, Yelena; Sarkar, Fazlul H

    2018-01-01

    Regenerative medicine aims to provide therapeutic treatment for disease or injury, and cell-based therapy is a newer therapeutic approach different from conventional medicine. Ethical issues raised by the utilisation of human embryonic stem cells (hESCs) and the limited capacity of adult stem cells, however, hinder the application of these stem cells in regenerative medicine. Recently, the isolation and characterisation of c-kit positive cells from human amniotic fluid, which possess intermediate characteristics between hESCs and adult stem cells, provided a new approach towards realising their promise for fetal and adult regenerative medicine. Despite the number of studies that have been initiated to characterize their molecular signature, research on approaches to maintain and enhance their regenerative potential is urgently needed. Thus, this review is focused on understanding their potential uses and the factors influencing their pluripotent status in vitro. In short, this cell source could be an ideal cellular resource for pluripotent cells for potential applications in allogeneic cellular replacement therapies, fetal tissue engineering, pharmaceutical screening, and in disease modelling. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  11. On the equivalence between the thirring model and a derivative coupling model

    International Nuclear Information System (INIS)

    Gomes, M.; Silva, A.J. da.

    1986-07-01

    The equivalence between the Thirring model and the fermionic sector of the theory of a Dirac field interacting via derivative coupling with two boson fields is analysed. For a certain choice of the parameters the two models have the same fermionic Green functions. (Author)

  12. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  13. DFT application for chlorin derivatives photosensitizer drugs modeling

    Science.gov (United States)

    Machado, Neila; Carvalho, B. G.; Téllez Soto, C. A.; Martin, A. A.; Favero, P. P.

    2018-04-01

    Photodynamic therapy is an alternative form of cancer treatment that meets the desire for a less aggressive approach to the body. It is based on the interaction between a photosensitizer, activating light, and molecular oxygen. This interaction results in a cascade of reactions that leads to localized cell death. Many studies have been conducted to discover an ideal photosensitizer, which aggregates all the desirable characteristics of a potent cell killer and generates minimal side effects. Using Density Functional Theory (DFT) implemented in the Vienna Ab-initio Simulation Package, new chlorin derivatives with different functional groups were simulated to evaluate the different absorption wavelengths to permit resonant absorption with the incident laser. The Gaussian 09 program was used to determine vibrational wave numbers and Natural Bond Orbitals. The drug chosen as having the best characteristics for a photosensitizer was a modified model of the original chlorin, which was called Thiol chlorin. According to our calculations it is stable and is 19.6% more efficient at optical absorption at 708 nm in comparison to the conventional chlorin e6. Vibrational modes, optical and electronic properties were predicted. In conclusion, this study is an attempt to improve the development of new photosensitizer drugs through computational methods that save time and reduce the number of animals needed for model testing.

  14. A global catalogue of large SO2 sources and emissions derived from the Ozone Monitoring Instrument

    Directory of Open Access Journals (Sweden)

    V. E. Fioletov

    2016-09-01

    Full Text Available Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr−1 to more than 4000 kt yr−1 of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005–2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30 % of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80 % over the 2005–2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr−1 and not detected by OMI.

  15. A Global Catalogue of Large SO2 Sources and Emissions Derived from the Ozone Monitoring Instrument

    Science.gov (United States)

    Fioletov, Vitali E.; McLinden, Chris A.; Krotkov, Nickolay; Li, Can; Joiner, Joanna; Theys, Nicolas; Carn, Simon; Moran, Mike D.

    2016-01-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr−1 to more than 4000 kt yr−1 of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005-2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30% of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80% over the 2005-2014 period as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr−1 and not detected by OMI.

  16. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Full Text Available Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the ‘digital future’ and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data were triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ’s online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  17. Using an Altimeter-Derived Internal Tide Model to Remove Tides from in Situ Data

    Science.gov (United States)

    Zaron, Edward D.; Ray, Richard D.

    2017-01-01

    Internal waves at tidal frequencies, i.e., the internal tides, are a prominent source of variability in the ocean associated with significant vertical isopycnal displacements and currents. Because the isopycnal displacements are caused by ageostrophic dynamics, they contribute uncertainty to geostrophic transport inferred from vertical profiles in the ocean. Here it is demonstrated that a newly developed model of the main semidiurnal (M2) internal tide derived from satellite altimetry may be used to partially remove the tide from vertical profile data, as measured by the reduction of steric height variance inferred from the profiles. It is further demonstrated that the internal tide model can account for a component of the near-surface velocity as measured by drogued drifters. These comparisons represent a validation of the internal tide model using independent data and highlight its potential use in removing internal tide signals from in situ observations.

  18. Source terms derived from analyses of hypothetical accidents, 1950-1986

    International Nuclear Information System (INIS)

    Stratton, W.R.

    1987-01-01

    This paper reviews the history of reactor accident source term assumptions. After the Three Mile Island accident, a number of theoretical and experimental studies re-examined possible accident sequences and source terms. Some of these results are summarized in this paper

  19. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.
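
    The downward-bias argument in the last paragraph can be demonstrated with a toy log-log demand regression in which a correlated substitute price is omitted; the data and coefficients below are synthetic, not estimates from the study:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 500
        log_p_own = rng.normal(0.0, 0.3, n)
        log_p_sub = 0.6 * log_p_own + rng.normal(0.0, 0.2, n)  # correlated substitute price
        # True own-price elasticity -0.8, cross-price elasticity +0.3
        log_q = 1.0 - 0.8 * log_p_own + 0.3 * log_p_sub + rng.normal(0.0, 0.05, n)

        def ols(X, y):
            return np.linalg.lstsq(X, y, rcond=None)[0]

        full = ols(np.column_stack([np.ones(n), log_p_own, log_p_sub]), log_q)
        single = ols(np.column_stack([np.ones(n), log_p_own]), log_q)
        # The single-source model pushes the own-price coefficient toward zero,
        # i.e., demand looks less elastic than it truly is.
        print(full[1], single[1])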

  20. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    International Nuclear Information System (INIS)

    Devi, Y.D.; Kota, V.K.B.

    1993-01-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd

  1. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    Science.gov (United States)

    Devi, Y. D.; Kota, V. K. B.

    1993-07-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  2. Deriving phenological metrics from NDVI through an open source tool developed in QGIS

    Science.gov (United States)

    Duarte, Lia; Teodoro, A. C.; Gonçalves, Hernãni

    2014-10-01

    Vegetation indices have been commonly used over the past 30 years for studying vegetation characteristics using images collected by remote sensing satellites. One of the most commonly used is the Normalized Difference Vegetation Index (NDVI). The various stages that green vegetation undergoes during a complete growing season can be summarized through time-series analysis of NDVI data. The analysis of such time-series allows for extracting key phenological variables or metrics of a particular season. These characteristics may not necessarily correspond directly to conventional, ground-based phenological events, but do provide indications of ecosystem dynamics. A complete list of the phenological metrics that can be extracted from smoothed, time-series NDVI data is available in the USGS online resources (http://phenology.cr.usgs.gov/methods_deriving.php). This work aims to develop an open source application to automatically extract these phenological metrics from a set of satellite input data. The main advantage of QGIS for this specific application lies in the ease and speed of developing new plug-ins in Python, based on the experience of the research group in other related works. QGIS has its own application programming interface (API) with functionalities and programs to develop new features. The toolbar developed for this application was implemented using the plug-in NDVIToolbar.py. The user introduces the raster files as input and obtains a plot and a report with the metrics. The report includes the following eight metrics: SOST (Start Of Season - Time), corresponding to the day of the year identified as having a consistent upward trend in the NDVI time series; SOSN (Start Of Season - NDVI), corresponding to the NDVI value associated with SOST; EOST (End of Season - Time), which corresponds to the day of year identified at the end of a consistent downward trend in the NDVI time series; EOSN (End of Season - NDVI), corresponding to the NDVI value
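
    A simplified stand-in for the SOST/SOSN/EOST/EOSN extraction (using derivative extrema of a synthetic NDVI curve as season markers, rather than the USGS trend-detection rules) might look like:

        import numpy as np

        doy = np.arange(1, 366, 8)                               # 8-day composites
        ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)    # synthetic seasonal curve

        rate = np.gradient(ndvi, doy)
        sos = np.argmax(rate)   # steepest green-up as a simple start-of-season proxy
        eos = np.argmin(rate)   # steepest decline as a simple end-of-season proxy
        print("SOST", doy[sos], "SOSN", round(ndvi[sos], 3))
        print("EOST", doy[eos], "EOSN", round(ndvi[eos], 3))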

  3. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    Science.gov (United States)

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.

  4. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  5. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  6. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    Noack, K.

    1981-01-01

    The perturbation source method is used in the Monte Carlo method in calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates even in cases where other methods fail, such as geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered.

  7. MCNP model for the many KE-Basin radiation sources

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1997-01-01

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with

  8. Open source data assimilation framework for hydrological modeling

    Science.gov (United States)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed data retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated in hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models exists that is capable of all these tasks: OpenMI. OpenMI is an open source standard interface already adopted by key hydrological model providers. It defines a universal approach to interact with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough so that models can interact even if the model is coded in a different language, represent
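
    The building-block interaction described above (create a model instance, propagate it, get/set state) can be illustrated with a toy model and a scalar Kalman update standing in for the assimilation algorithm; this is a schematic of the OpenDA/OpenMI pattern, not their actual APIs:

        class ToyHydroModel:
            # Stand-in for a model exposing the interface a DA framework needs.
            def __init__(self, storage=10.0, recession=0.9):
                self.storage, self.recession = storage, recession
            def propagate(self, rain):
                self.storage = self.recession * self.storage + rain
            def get_state(self):
                return self.storage
            def set_state(self, value):
                self.storage = value

        model = ToyHydroModel()
        var_model, var_obs = 4.0, 1.0
        for rain, obs in [(2.0, 11.0), (0.0, 9.5), (1.0, 9.8)]:
            model.propagate(rain)                          # forecast step
            gain = var_model / (var_model + var_obs)       # scalar Kalman gain
            innovation = obs - model.get_state()
            model.set_state(model.get_state() + gain * innovation)  # analysis step
        print(round(model.get_state(), 3))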

  9. Bioaccumulation of hydrocarbons derived from terrestrial and anthropogenic sources in the Asian clam, Potamocorbula amurensis, in San Francisco Bay estuary

    Science.gov (United States)

    Pereira, Wilfred E.; Hostettler, Frances D.; Rapp, John B.

    1992-01-01

    An assessment was made in Suisun Bay, California, of the distributions of hydrocarbons in estuarine bed and suspended sediments and in the recently introduced Asian clam, Potamocorbula amurensis. Sediments and clams were contaminated with hydrocarbons derived from petrogenic and pyrogenic sources. Distributions of alkanes and of hopane and sterane biomarkers in sediments and clams were similar, indicating that petroleum hydrocarbons associated with sediments are bioavailable to Potamocorbula amurensis. Polycyclic aromatic hydrocarbons in the sediments and clams were derived mainly from combustion sources. Potamocorbula amurensis is therefore a useful bioindicator of hydrocarbon contamination, and may be used as a biomonitor of hydrocarbon pollution in San Francisco Bay.

  10. A Remote Sensing-Derived Corn Yield Assessment Model

    Science.gov (United States)

    Shrestha, Ranjay Man

    be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products within agricultural studies and crop yield assessments. In this study, a regression-based approach was proposed to estimate annual corn yield through changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions as South, West North Central, East North Central, and Central, respectively, within the U.S. Corn Belt area. The model's goodness of fit was well defined with a high coefficient of determination (R2 > 0.81). Similarly, using 2015 yield data for validation, 92% average accuracy signified the performance of the model in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate yield at a finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield
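
    As a sketch of the regression-based approach (synthetic NDVI-derived predictors and yields, not the MODIS data or coefficients from the study):

        import numpy as np

        rng = np.random.default_rng(7)
        n = 300                                   # county-year observations
        peak_ndvi = rng.uniform(0.5, 0.9, n)      # season maximum NDVI
        season_sum = rng.uniform(8.0, 16.0, n)    # NDVI integrated over the season
        yield_bu = 40 + 120 * peak_ndvi + 3.0 * season_sum + rng.normal(0.0, 5.0, n)

        # Ordinary least squares fit of yield on the NDVI features
        X = np.column_stack([np.ones(n), peak_ndvi, season_sum])
        beta, *_ = np.linalg.lstsq(X, yield_bu, rcond=None)
        pred = X @ beta
        r2 = 1.0 - np.sum((yield_bu - pred) ** 2) / np.sum((yield_bu - yield_bu.mean()) ** 2)
        print(np.round(beta, 2), round(r2, 3))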

  11. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general

  12. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  13. Modeling Anti-HIV Activity of HEPT Derivatives Revisited. Multiregression Models Are Not Inferior Ones

    International Nuclear Information System (INIS)

    Basic, Ivan; Nadramija, Damir; Flajslik, Mario; Amic, Dragan; Lucic, Bono

    2007-01-01

    Several quantitative structure-activity studies of this data set containing 107 HEPT derivatives have been performed since 1997, using the same set of molecules with (more or less) different classes of molecular descriptors. Multivariate Regression (MR) and Artificial Neural Network (ANN) models were developed, and in each study the authors concluded that ANN models are superior to MR ones. We re-calculated multivariate regression models for this set of molecules using the same set of descriptors, and compared our results with the previous ones. Two main reasons for overestimation of the quality of the ANN models in previous studies compared with MR models are: (1) wrong calculation of the leave-one-out (LOO) cross-validated (CV) correlation coefficient for MR models in Luco et al., J. Chem. Inf. Comput. Sci. 37 392-401 (1997), and (2) incorrect estimation/interpretation of the leave-one-out (LOO) cross-validated and predictive performance and power of ANN models. A more precise and fairer comparison of fit and LOO CV statistical parameters shows that MR models are more stable. In addition, MR models are much simpler than ANN ones. To truly test the predictive performance of both classes of models we need more HEPT derivatives, because all ANN models that presented results for an external set of molecules used experimental values in optimizing the modeling procedure and model parameters
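
    The leave-one-out statistic at issue can be computed exactly for a multiple regression without refitting n times, via the hat matrix; a short sketch on synthetic data (illustrative, not the HEPT set):

        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 60, 4
        X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
        y = X @ np.array([1.0, 0.5, -0.3, 0.2, 0.0]) + rng.normal(0.0, 0.5, n)

        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages
        loo_resid = resid / (1.0 - h)                   # exact leave-one-out residuals
        press = np.sum(loo_resid ** 2)
        q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
        print(round(q2, 3))                             # LOO cross-validated q2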

  14. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever-evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single-source inversion followed by a double point source inversion, with centroid locations fixed at the single-source solution location, can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed, with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model, and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
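
    The AIC comparison itself is simple to reproduce. The sketch below uses the least-squares form of the AIC under an assumed Gaussian error model; the parameter counts (six moment-tensor components per point source) are illustrative, and the operational W-phase code may count parameters and evaluate the likelihood differently.

    ```python
    import numpy as np

    def aic_from_residuals(residuals, n_params):
        """Least-squares AIC under i.i.d. Gaussian errors: n*ln(RSS/n) + 2k."""
        n = residuals.size
        rss = np.sum(residuals ** 2)
        return n * np.log(rss / n) + 2 * n_params

    def select_w_phase_model(res_single, res_double, k_single=6, k_double=12):
        """Prefer the double point source only when its AIC is lower, i.e.
        the extra parameters are justified by the reduction in misfit.
        The parameter counts are illustrative assumptions."""
        aic1 = aic_from_residuals(np.asarray(res_single), k_single)
        aic2 = aic_from_residuals(np.asarray(res_double), k_double)
        return ("double" if aic2 < aic1 else "single", aic1, aic2)
    ```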

  15. Induced pluripotent stem cells (iPSCs) derived from different cell sources and their potential for regenerative and personalized medicine.

    Science.gov (United States)

    Shtrichman, R; Germanguz, I; Itskovitz-Eldor, J

    2013-06-01

    Human induced pluripotent stem cells (hiPSCs) have great potential as a robust source of progenitors for regenerative medicine. The novel technology also enables the derivation of patient-specific cells for applications to personalized medicine, such as for personal drug screening and toxicology. However, the biological characteristics of iPSCs are not yet fully understood and their similarity to human embryonic stem cells (hESCs) is still unresolved. Variations among iPSCs, resulting from their original tissue or cell source, and from the experimental protocols used for their derivation, significantly affect epigenetic properties and differentiation potential. Here we review the potential of iPSCs for regenerative and personalized medicine, and assess their expression pattern, epigenetic memory and differentiation capabilities in relation to their parental tissue source. We also summarize the patient-specific iPSCs that have been derived for applications in biological research and drug discovery; and review risks that must be overcome in order to use iPSC technology for clinical applications.

  16. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single-color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce it. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality with a mixed light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N-size inputs of driving voltage and one output of image quality. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
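
    A minimal sketch of the multi-dimensional derivative search idea, in its steepest-ascent form, follows: the N-channel driving-voltage vector is updated along a finite-difference estimate of the image-quality gradient, since the quality metric is only observable by capturing an image at each voltage setting. Step size, perturbation, and stopping criteria are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def auto_light(f, v0, step=0.05, dv=0.01, max_iter=50, tol=1e-4):
        """Steepest-ascent search over N driving voltages v to maximize a
        scalar image-quality metric f(v). The gradient is estimated by
        central finite differences because f is only observable through
        captured images. All tuning constants are illustrative."""
        v = np.asarray(v0, dtype=float)
        for _ in range(max_iter):
            grad = np.array([(f(v + dv * e) - f(v - dv * e)) / (2.0 * dv)
                             for e in np.eye(v.size)])
            if np.linalg.norm(grad) < tol:
                break                    # no further local improvement
            v += step * grad             # follow the tangential direction
        return v

    # toy quality surface with a maximum at (0.6, 0.4, 0.7) driving volts
    quality = lambda v: -np.sum((v - np.array([0.6, 0.4, 0.7])) ** 2)
    print(auto_light(quality, np.array([0.5, 0.5, 0.5])))
    ```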

  17. Patient-derived xenograft models to improve targeted therapy in epithelial ovarian cancer treatment

    Directory of Open Access Journals (Sweden)

    Clare eScott

    2013-12-01

    Full Text Available Despite increasing evidence that precision therapy targeted to the molecular drivers of a cancer has the potential to improve clinical outcomes, high-grade epithelial ovarian cancer patients are currently treated without consideration of molecular phenotype, and predictive biomarkers that could better inform treatment remain unknown. Delivery of precision therapy requires improved integration of laboratory-based models and cutting-edge clinical research, with pre-clinical models predicting the patient subsets that will benefit from a particular targeted therapeutic. Patient-derived xenografts (PDX) are renewable tumor models engrafted in mice, generated from fresh human tumors without prior in vitro exposure. PDX models allow an invaluable assessment of tumor evolution and adaptive response to therapy. PDX models have been applied to preclinical drug testing and biomarker identification in a number of cancers including ovarian, pancreatic, breast and prostate cancers. These models have been shown to be biologically stable and to accurately reflect the patient tumor with regards to histopathology, gene expression, genetic mutations and therapeutic response. However, pre-clinical analyses of molecularly annotated PDX models derived from high-grade serous ovarian cancer (HG-SOC) remain limited. In vivo response to conventional and/or targeted therapeutics has only been described for very small numbers of individual HG-SOC PDX, in conjunction with sparse molecular annotation and patient outcome data. Recently, two consecutive panels of epithelial ovarian cancer PDX have correlated in vivo platinum response with molecular aberrations and source-patient clinical outcomes. These studies underpin the value of PDX models to better direct chemotherapy and predict response to targeted therapy. Tumor heterogeneity, before and following treatment, as well as the importance of multiple molecular aberrations per individual tumor, underscore some of the important issues

  18. Open Sourcing Social Change: Inside the Constellation Model

    OpenAIRE

    Tonya Surman; Mark Surman

    2008-01-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a ...

  19. White Dwarf Model Atmospheres: Synthetic Spectra for Super Soft Sources

    OpenAIRE

    Rauch, Thomas

    2011-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and super soft sources.

  20. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    Science.gov (United States)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  1. Craton-derived alluvium as a major sediment source in the Himalayan Foreland Basin of India

    DEFF Research Database (Denmark)

    Sinha, R.; Kettanah, Y.; Gibling, M.R.

    2009-01-01

    of the Bundelkhand Complex. Along the Yamuna Valley the red alluvium is overlain by gray alluvium dated at 82–35 ka ago, which also yields a cratonic signature, with large amounts of smectite derived from the Deccan Traps. Cratonic contributions are evident in alluvium as young as 9 ka ago in a section 25 km north...

  2. Sediment delivery estimates in water quality models altered by resolution and source of topographic data.

    Science.gov (United States)

    Beeson, Peter C; Sadeghi, Ali M; Lang, Megan W; Tomer, Mark D; Daughtry, Craig S T

    2014-01-01

    Moderate-resolution (30-m) digital elevation models (DEMs) are normally used to estimate slope for the parameterization of non-point source, process-based water quality models. These models, such as the Soil and Water Assessment Tool (SWAT), use the Universal Soil Loss Equation (USLE) and Modified USLE to estimate sediment loss. The slope length and steepness factor, a critical parameter in USLE, significantly affects sediment loss estimates. Depending on slope range, a twofold difference in slope estimation potentially results in as little as 50% change or as much as 250% change in the LS factor and subsequent sediment estimation. Recently, the availability of much finer-resolution (∼3 m) DEMs derived from Light Detection and Ranging (LiDAR) data has increased. However, the use of these data may not always be appropriate because slope values derived from fine spatial resolution DEMs are usually significantly higher than slopes derived from coarser DEMs. This increased slope results in considerable variability in modeled sediment output. This paper addresses the implications of parameterizing models using slope values calculated from DEMs with different spatial resolutions (90, 30, 10, and 3 m) and sources. Overall, we observed over a 2.5-fold increase in slope when using a 3-m instead of a 90-m DEM, which increased modeled soil loss using the USLE calculation by 130%. Care should be taken when using LiDAR-derived DEMs to parameterize water quality models because doing so can result in significantly higher slopes, which considerably alter modeled sediment loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
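
    The slope sensitivity described above is easy to reproduce with the classic Wischmeier-Smith form of the LS factor, sketched below; the exponent breakpoints are textbook values and the example slopes are made-up numbers, not the SWAT parameterization used in the study.

    ```python
    import numpy as np

    def usle_ls(slope_pct, slope_len_m=22.13):
        """Classic Wischmeier-Smith LS factor, used here only to illustrate
        the sensitivity of sediment estimates to DEM-derived slope."""
        beta = np.arctan(np.asarray(slope_pct) / 100.0)
        m = np.where(slope_pct >= 5, 0.5,
            np.where(slope_pct >= 3.5, 0.4,
            np.where(slope_pct >= 1, 0.3, 0.2)))        # length exponent
        s = 65.41 * np.sin(beta) ** 2 + 4.56 * np.sin(beta) + 0.065
        return (slope_len_m / 22.13) ** m * s

    # a 4% slope from a coarse DEM vs 10% from LiDAR for the same cell:
    print(usle_ls(4.0), usle_ls(10.0))   # ~0.35 vs ~1.17: over 3x the LS factor
    ```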

  3. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

    Full Text Available Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistency that models for DRX may exhibit. For consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transitions in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
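
    The second derivative criterion itself reduces to a few lines of numerical differentiation, as sketched below: the critical stress for DRX onset is taken at the minimum of -d(theta)/d(sigma), where theta = d(sigma)/d(epsilon) is the strain-hardening rate. Real flow curves need smoothing or polynomial fitting before differentiation, which is omitted here.

    ```python
    import numpy as np

    def drx_critical_stress(strain, stress):
        """Second derivative criterion (Poliak-Jonas): locate the inflection
        of the strain-hardening rate theta(sigma) as the minimum of
        -d(theta)/d(sigma). Assumes a smoothed flow curve in the hardening
        region (monotonically increasing stress)."""
        theta = np.gradient(stress, strain)       # theta = d(sigma)/d(epsilon)
        neg_slope = -np.gradient(theta, stress)   # -d(theta)/d(sigma)
        return stress[np.argmin(neg_slope)]       # critical stress for DRX
    ```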

  4. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
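
    As a toy illustration of the harmonicity constraint on a linear-frequency spectrogram, the sketch below fixes each basis function to a comb of Gaussian partials at integer multiples of a candidate fundamental and learns only the activations with multiplicative NMF updates. This is a heavily simplified stand-in for the shift-invariant tensor factorisation in the paper; the partial width, the 1/h amplitude roll-off, and the Euclidean cost are arbitrary choices.

    ```python
    import numpy as np

    def harmonic_activations(V, freqs, f0s, n_harm=10, width=20.0,
                             n_iter=200, eps=1e-9):
        """Harmonically constrained NMF on a linear-frequency magnitude
        spectrogram V (n_freq, n_time): W is a fixed comb of Gaussian
        partials for each candidate pitch in f0s (Hz), and only the
        activations H are learned with multiplicative Euclidean updates."""
        W = np.zeros((freqs.size, len(f0s)))
        for j, f0 in enumerate(f0s):
            for h in range(1, n_harm + 1):
                W[:, j] += np.exp(-0.5 * ((freqs - h * f0) / width) ** 2) / h
        H = np.random.default_rng(0).uniform(size=(len(f0s), V.shape[1]))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # keeps H nonnegative
        return W, H
    ```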

  5. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  6. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an averaged area-under-curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to
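
    One way to realize an MDI-like goodness-of-fit measure, sketched below, is the negative average log-likelihood of EEG samples under a fitted unmixing matrix with a standard super-Gaussian source prior. The prior p(s) = 1/(pi cosh s), the use of scikit-learn's FastICA, and the synthetic data are assumptions of this sketch, not the authors' published definition.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def model_deviation_index(ica, X):
        """Negative mean log-likelihood of samples X (n_samples, n_channels)
        under an ICA model with super-Gaussian prior p(s) = 1/(pi*cosh(s)).
        Lower values indicate the model fits the current brain state better."""
        W = ica.components_                       # unmixing matrix
        S = (X - ica.mean_) @ W.T                 # estimated source activations
        loglik = (np.linalg.slogdet(W)[1]
                  - S.shape[1] * np.log(np.pi)
                  - np.log(np.cosh(S)).sum(axis=1).mean())
        return -loglik

    # fit an "alert" model on baseline EEG, then track the index over time
    rng = np.random.default_rng(1)
    X_alert = rng.laplace(size=(5000, 8))         # stand-in for alert-state EEG
    ica = FastICA(n_components=8, random_state=0).fit(X_alert)
    print(model_deviation_index(ica, X_alert))
    ```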

  7. The SSI TOOLBOX Source Term Model SOSIM - Screening for important radionuclides and parameter sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Avila Moreno, R.; Barrdahl, R.; Haegg, C.

    1995-05-01

    The main objective of the present study was to carry out a screening and a sensitivity analysis of the SSI TOOLBOX source term model SOSIM. This model is a part of the SSI TOOLBOX for radiological impact assessment of the Swedish disposal concept for high-level waste, KBS-3. The outputs of interest for this purpose were: the total released fraction, the time of total release, the time and value of the maximum release rate, and the dose rates after direct releases to the biosphere. The source term equations were derived, and simple equations and methods were proposed for their calculation. A literature survey was performed in order to determine a characteristic variation range and a nominal value for each model parameter. In order to reduce the model uncertainties, the authors recommend a change in the initial boundary condition for solution of the diffusion equation for highly soluble nuclides. 13 refs.

  8. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  9. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth, as well as another shallow zone 7 km to the west of Lusi, underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
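
    Once the point-source locations are fixed on a grid, this type of inversion is linear in the volume changes. The sketch below uses the classical Mogi half-space Green's function for vertical displacement and recovers two synthetic volume changes by least squares; the depths, magnitudes, and lack of regularization are illustrative simplifications of the time-dependent scheme described above.

    ```python
    import numpy as np

    def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
        """Vertical surface displacement of a Mogi point source of volume
        change dV buried at (xs, ys, depth) in an elastic half-space."""
        r2 = (x - xs) ** 2 + (y - ys) ** 2
        return (1.0 - nu) / np.pi * dV * depth / (r2 + depth ** 2) ** 1.5

    # one Green's-function column per candidate source, then least squares
    x = np.linspace(-5e3, 5e3, 50)
    y = np.zeros_like(x)
    sources = [(0.0, 0.0, 1.0e3), (0.0, 0.0, 4.5e3)]      # shallow and deep
    G = np.column_stack([mogi_uz(x, y, xs, ys, d, 1.0) for xs, ys, d in sources])
    d_obs = G @ np.array([2.0e6, 1.0e6])                  # synthetic uplift (m)
    dV_hat, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
    print(dV_hat)    # recovers the two volume changes (m^3)
    ```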

  10. Analysis of potential combustion source impacts on acid deposition using an independently derived inventory. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    1983-12-01

    This project had three major objectives. The first objective was to develop a fossil fuel combustion source inventory (NO/sub x/, SO/sub x/, and hydrocarbon emissions) that would be relatively easy to use and update for analyzing the impact of combustion emissions on acid deposition in the eastern United States. The second objective of the project was to use the inventory data as a basis for selection of a number of areas that, by virtue of their importance in the acid rain issue, could be further studied to assess the impact of local and intraregional combustion sources. The third objective was to conduct an analysis of wet deposition monitoring data in the areas under study, along with pertinent physical characteristics, meteorological conditions, and emission patterns of these areas, to investigate probable relationships between local and intraregional combustion sources and the deposition of acidic material. The combustion source emissions inventory has been developed for the eastern United States. It characterizes all important area sources and point sources on a county-by-county basis. Its design provides flexibility and simplicity and makes it uniquely useful in overall analysis of emission patterns in the eastern United States. Three regions with basically different emission patterns have been identified and characterized. The statistical analysis of wet deposition monitoring data in conjunction with emission patterns, wind direction, and topography has produced consistent results for each study area and has demonstrated that the wet deposition in each area reflects the characteristics of the localized area around the monitoring sites (typically 50 to 150 miles). 8 references, 28 figures, 39 tables.

  11. Evaluation of Stem Cell-Derived Red Blood Cells as a Transfusion Product Using a Novel Animal Model.

    Science.gov (United States)

    Shah, Sandeep N; Gelderman, Monique P; Lewis, Emily M A; Farrel, John; Wood, Francine; Strader, Michael Brad; Alayash, Abdu I; Vostal, Jaroslav G

    2016-01-01

    Reliance on volunteer blood donors can lead to transfusion product shortages, and current liquid storage of red blood cells (RBCs) is associated with biochemical changes over time, known as 'the storage lesion'. Thus, there is a need for alternative sources of transfusable RBCs to supplement conventional blood donations. Extracorporeal production of stem cell-derived RBCs (stemRBCs) is a potential and yet untapped source of fresh, transfusable RBCs. A number of groups have attempted RBC differentiation from CD34+ cells. However, it is still unclear whether these stemRBCs could eventually be effective substitutes for traditional RBCs due to potential differences in oxygen carrying capacity, viability, deformability, and other critical parameters. We have generated ex vivo stemRBCs from primary human cord blood CD34+ cells and compared them to donor-derived RBCs based on a number of in vitro parameters. In vivo, we assessed stemRBC circulation kinetics in an animal model of transfusion and oxygen delivery in a mouse model of exercise performance. Our novel, chronically anemic, SCID mouse model can evaluate the potential of stemRBCs to deliver oxygen to tissues (muscle) under resting and exercise-induced hypoxic conditions. Based on our data, stem cell-derived RBCs have a similar biochemical profile compared to donor-derived RBCs. While certain key differences remain between donor-derived RBCs and stemRBCs, the ability of stemRBCs to deliver oxygen in a living organism provides support for further development as a transfusion product.

  12. Host-Derived Sialic Acids Are an Important Nutrient Source Required for Optimal Bacterial Fitness In Vivo

    Directory of Open Access Journals (Sweden)

    Nathan D. McDonald

    2016-04-01

    Full Text Available A major challenge facing bacterial intestinal pathogens is competition for nutrient sources with the host microbiota. Vibrio cholerae is an intestinal pathogen that causes cholera, which affects millions each year; however, our knowledge of its nutritional requirements in the intestinal milieu is limited. In this study, we demonstrated that V. cholerae can grow efficiently on intestinal mucus and its component sialic acids and that NC1777, a mutant deficient in the tripartite ATP-independent periplasmic transporter SiaPQM, was attenuated for colonization in a streptomycin-pretreated adult mouse model. In in vivo competition assays, NC1777 was significantly outcompeted for up to 3 days postinfection. NC1777 was also significantly outcompeted in in vitro competition assays in M9 minimal medium supplemented with intestinal mucus, indicating that sialic acid uptake is essential for fitness. Phylogenetic analyses demonstrated that the ability to utilize sialic acid is distributed among 452 bacterial species from eight phyla. The majority of species belong to four phyla: Actinobacteria (members of Actinobacillus, Corynebacterium, Mycoplasma, and Streptomyces), Bacteroidetes (mainly Bacteroides, Capnocytophaga, and Prevotella), Firmicutes (members of Streptococcus, Staphylococcus, Clostridium, and Lactobacillus), and Proteobacteria (including Escherichia, Shigella, Salmonella, Citrobacter, Haemophilus, Klebsiella, Pasteurella, Photobacterium, Vibrio, and Yersinia species), mostly commensals and/or pathogens. Overall, our data demonstrate that the ability to take up host-derived sugars, and sialic acid specifically, gives V. cholerae a competitive advantage in intestinal colonization, and that this trait is sporadic in its occurrence and phylogenetic distribution, ancestral in some genera but horizontally acquired in others.

  13. Low-level radioactive waste performance assessments: Source term modeling

    International Nuclear Information System (INIS)

    Icenhour, A.S.; Godbee, H.W.; Miller, L.F.

    1995-01-01

    Low-level radioactive wastes (LLW) generated by government and commercial operations need to be isolated from the environment for at least 300 to 500 yr. Most existing sites for the storage or disposal of LLW employ the shallow-land burial approach. However, the U.S. Department of Energy currently emphasizes the use of engineered systems (e.g., packaging, concrete and metal barriers, and water collection systems). Future commercial LLW disposal sites may include such systems to mitigate radionuclide transport through the biosphere. Performance assessments must be conducted for LLW disposal facilities. These studies include comprehensive evaluations of radionuclide migration from the waste package, through the vadose zone, and within the water table. Atmospheric transport mechanisms are also studied. Figure 1 illustrates the performance assessment process. Estimates of the release of radionuclides from the waste packages (i.e., source terms) are used for the subsequent hydrogeologic calculations required by a performance assessment. Computer models are typically used to describe the complex interactions of water with LLW and to determine the transport of radionuclides. Several commonly used computer programs for evaluating source terms include GWSCREEN, BLT (Breach-Leach-Transport), DUST (Disposal Unit Source Term), and BARRIER (Ref. 5), as well as SOURCE1 and SOURCE2 (which are used in this study). The SOURCE1 and SOURCE2 codes were prepared by Rogers and Associates Engineering Corporation for the Oak Ridge National Laboratory (ORNL). SOURCE1 is designed for tumulus-type facilities, and SOURCE2 is tailored for silo, well-in-silo, and trench-type disposal facilities. This paper focuses on the source term for ORNL disposal facilities, and it describes improved computational methods for determining radionuclide transport from waste packages.

  14. A model for acoustic absorbent materials derived from coconut fiber

    Directory of Open Access Journals (Sweden)

    Ramis, J.

    2014-03-01

    Full Text Available In the present paper, a methodology is proposed for obtaining empirical equations describing the sound absorption characteristics of an absorbing material obtained from natural fibers, specifically from coconut. The method, which was previously applied to other materials, requires performing measurements of air-flow resistivity and of acoustic impedance for samples of the material under study. The equations that govern the acoustic behavior of the material are then derived by means of a least-squares fit of the acoustic impedance and of the propagation constant. These results can be useful since they allow the empirically obtained analytical equations to be easily incorporated in prediction and simulation models of acoustic systems for noise control that incorporate the studied materials.

  15. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we showed via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  16. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
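
    The conditioning idea can be sketched compactly: random draws of two two-parameter transfer functions (hillslope-to-channel and down-channel) are retained only when the routed load matches the observed load, and each cell is scored by how often it falls in the top-90% contributing set. The exponential forms, parameter ranges, and acceptance tolerance below are assumptions of this sketch rather than the published functions.

    ```python
    import numpy as np

    def delivery_ratio(dist, drop, k_d, k_z):
        """A plausible two-parameter topographic transfer function:
        delivery decays with travel distance, modulated by relief.
        Not the paper's exact functional form."""
        return np.exp(-k_d * dist) * np.exp(-k_z * np.maximum(drop, 0.0)
                                            / (dist + 1.0))

    def likely_sources(erosion, d_hill, z_hill, d_chan, z_chan, load_obs,
                       n_runs=10000, tol=0.2, seed=0):
        """Condition random parameter draws on the observed outlet load and
        score each cell by how often it lands in the 90%-of-load set."""
        rng = np.random.default_rng(seed)
        hits = np.zeros(erosion.shape, dtype=float)
        for _ in range(n_runs):
            k = rng.uniform(0.0, 0.01, size=4)      # hillslope/channel params
            sdr = (delivery_ratio(d_hill, z_hill, k[0], k[1])
                   * delivery_ratio(d_chan, z_chan, k[2], k[3]))
            load = erosion * sdr
            total = load.sum()
            if abs(total - load_obs) / load_obs < tol:   # behavioural run
                order = np.argsort(load)[::-1]
                cum = np.cumsum(load[order]) / total
                hits[order[:np.searchsorted(cum, 0.9) + 1]] += 1
        return hits / n_runs    # frequency a cell is a dominant source
    ```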

  17. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  18. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and rigorously assessed the statistical relevance of the resulting fraction estimates. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, which was unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
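
    The core of a BMC mixing model of this kind fits in a few lines: draw end-member compositions from their priors, draw mixing fractions from an uninformative Dirichlet, and keep the draws that reproduce the observed mixture within its uncertainty. The two-source, two-tracer numbers below are invented for illustration and are not the Athabasca, Greenland, or Hawaiian data.

    ```python
    import numpy as np

    def bmc_mixing(obs, obs_sd, src_mean, src_sd, n=200000, seed=0):
        """Bayesian Monte Carlo mixing: draw end-member signatures
        (n_src, n_iso) from normal priors and fractions from a flat
        Dirichlet; keep draws whose mixture matches the observation to
        within two standard deviations. Returns posterior fraction samples."""
        rng = np.random.default_rng(seed)
        n_src, n_iso = src_mean.shape
        f = rng.dirichlet(np.ones(n_src), size=n)            # fractions
        src = rng.normal(src_mean, src_sd, size=(n, n_src, n_iso))
        mix = np.einsum('ns,nsi->ni', f, src)                # predicted mixture
        keep = np.all(np.abs(mix - obs) < 2.0 * obs_sd, axis=1)
        return f[keep]

    # two melt sources, two tracers (e.g. d18O, dD); all numbers invented
    src_mean = np.array([[-20.0, -150.0], [-10.0, -70.0]])
    post = bmc_mixing(np.array([-13.0, -94.0]), np.array([0.5, 2.0]),
                      src_mean, np.full((2, 2), 1.0))
    print(post.mean(axis=0))    # posterior mean source fractions
    ```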

  19. Open Source Modeling and Optimization Tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger-scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  20. Induced pluripotent stem cells (iPSC)-derived retinal cells in disease modeling and regenerative medicine.

    Science.gov (United States)

    Rathod, Reena; Surendran, Harshini; Battu, Rajani; Desai, Jogin; Pal, Rajarshi

    2018-02-12

    Retinal degenerative disorders are a leading cause of inherited, irreversible and incurable vision loss. While various rodent model systems have provided crucial information in this direction, lack of disease-relevant tissue availability and species-specific differences have proven to be a major roadblock. Human induced pluripotent stem cells (iPSC) have opened up a whole new avenue of possibilities, not just in understanding the disease mechanism but also in potential therapeutic approaches towards a cure. In this review, we have summarized recent advances in the methods of deriving retinal cell types from iPSCs, which can serve as a renewable source of disease-relevant cell populations for basic as well as translational studies. We also provide an overview of the ongoing efforts towards developing a suitable in vitro model for modeling retinal degenerative diseases. This basic understanding in turn has contributed to advances in translational goals such as drug screening and cell-replacement therapies. Furthermore, we discuss gene editing approaches for autologous repair of genetic disorders and allogeneic transplantation of stem cell-based retinal derivatives for degenerative disorders, with an ultimate goal to restore vision. It is pertinent to note, however, that these exciting new developments throw up several challenges that need to be overcome before their full clinical potential can be realized. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    Science.gov (United States)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further applied to image reconstruction in a Laminar Optical Tomography system.
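
    A minimal numerical sketch of the virtual-source idea follows: the collimated beam is replaced by isotropic point sources along the incidence axis, and the fluence is a weighted sum of diffusion-approximation Green's functions. The infinite-medium Green's function and the example depths and weights are placeholders for the fitted VS parameters; a faithful half-space model would add extrapolated-boundary image sources.

    ```python
    import numpy as np

    def vs_da_fluence(r_obs, mua, musp, vs_depths, vs_weights):
        """Fluence at point r_obs = (x, y, z) from virtual isotropic point
        sources placed at depths vs_depths along the incidence (z) axis
        with intensities vs_weights, using the infinite-medium diffusion
        Green's function. Units: 1/mm for mua and musp, mm for lengths;
        all numerical values are illustrative."""
        D = 1.0 / (3.0 * (mua + musp))            # diffusion coefficient
        mu_eff = np.sqrt(mua / D)                 # effective attenuation
        phi = 0.0
        for z, w in zip(vs_depths, vs_weights):
            dist = np.sqrt(r_obs[0] ** 2 + r_obs[1] ** 2 + (r_obs[2] - z) ** 2)
            phi += w * np.exp(-mu_eff * dist) / (4.0 * np.pi * D * dist)
        return phi

    # a 2VS-DA configuration: two virtual sources inside the medium
    print(vs_da_fluence(np.array([0.1, 0.0, 0.0]), mua=0.01, musp=1.0,
                        vs_depths=[0.5, 1.5], vs_weights=[0.7, 0.3]))
    ```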

  2. The Growth of open source: A look at how companies are utilizing open source software in their business models

    OpenAIRE

    Feare, David

    2009-01-01

    This paper examines how open source software is being incorporated into the business models of companies in the software industry. The goal is to answer the question of whether the open source model can help sustain economic growth. While some companies are able to maintain a "pure" open source approach with their business model, the reality is that most companies are relying on proprietary add-on value in order to generate revenue because open source itself is simply not big business. Ultima...

  3. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    Science.gov (United States)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

    Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction (NWP) models have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting solar irradiance under cloudy conditions. Additionally, climatic (seasonally averaged) aerosol loadings are usually assumed in these models, leading to considerable errors in Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWP models. Particularly, the method is based on the use of Autoregressive Moving Average with eXternal explanatory variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day prior to the forecast. The method is evaluated on a one-month set of three-day-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. Particularly, the relative improvement (in terms of RMSE) for DNI during summer is about 20%. A similar value is obtained for GHI during winter.
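
    The post-processing step maps naturally onto an ARMAX fit of the forecast residuals with exogenous regressors. The sketch below uses the SARIMAX class from statsmodels with an illustrative (1, 0, 1) order and synthetic data standing in for the previous-day cloud fraction and aerosol load; none of these choices are the ones fitted in the study.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 240
    # exogenous variables observed the day before the forecast is issued
    exog = np.column_stack([rng.uniform(0.0, 1.0, n),    # cloud fraction
                            rng.uniform(0.0, 0.4, n)])   # aerosol optical depth
    # synthetic NWP irradiance residuals driven by clouds and aerosols
    resid = 50.0 * exog[:, 0] + 80.0 * exog[:, 1] + rng.normal(0.0, 10.0, n)

    # ARMAX(1, 1): AR and MA terms plus the external explanatory variables
    fit = sm.tsa.SARIMAX(resid, exog=exog, order=(1, 0, 1)).fit(disp=False)
    bias = fit.forecast(steps=3, exog=exog[-3:])   # future exog in practice
    # corrected forecast = raw NWP GHI/DNI forecast minus predicted residual
    print(bias)
    ```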

  4. Induced pluripotent stem cell-derived cardiomyocytes for cardiovascular disease modeling and drug screening.

    Science.gov (United States)

    Sharma, Arun; Wu, Joseph C; Wu, Sean M

    2013-12-24

    Human induced pluripotent stem cells (hiPSCs) have emerged as a novel tool for drug discovery and therapy in cardiovascular medicine. hiPSCs are functionally similar to human embryonic stem cells (hESCs) and can be derived autologously without the ethical challenges associated with hESCs. Given the limited regenerative capacity of the human heart following myocardial injury, cardiomyocytes derived from hiPSCs (hiPSC-CMs) have garnered significant attention from basic and translational scientists as a promising cell source for replacement therapy. However, ongoing issues such as cell immaturity, scale of production, inter-line variability, and cell purity will need to be resolved before human clinical trials can begin. Meanwhile, the use of hiPSCs to explore cellular mechanisms of cardiovascular diseases in vitro has proven to be extremely valuable. For example, hiPSC-CMs have been shown to recapitulate disease phenotypes from patients with monogenic cardiovascular disorders. Furthermore, patient-derived hiPSC-CMs are now providing new insights regarding drug efficacy and toxicity. This review will highlight recent advances in utilizing hiPSC-CMs for cardiac disease modeling in vitro and as a platform for drug validation. The advantages and disadvantages of using hiPSC-CMs for drug screening purposes will be explored as well.

  5. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette C. Rohr; Petros Koutrakis; John Godleski

    2011-03-31

    Determining the health impacts of different sources and components of fine particulate matter (PM2.5) is an important scientific goal, because PM is a complex mixture of both inorganic and organic constituents that likely differ in their potential to cause adverse health outcomes. The TERESA (Toxicological Evaluation of Realistic Emissions of Source Aerosols) study focused on two PM sources - coal-fired power plants and mobile sources - and sought to investigate the toxicological effects of exposure to realistic emissions from these sources. The DOE-EPRI Cooperative Agreement covered the performance and analysis of field experiments at three power plants. The mobile source component consisted of experiments conducted at a traffic tunnel in Boston; these activities were funded through the Harvard-EPA Particulate Matter Research Center and will be reported separately in the peer-reviewed literature. TERESA attempted to delineate health effects of primary particles, secondary (aged) particles, and mixtures of these with common atmospheric constituents. The study involved withdrawal of emissions directly from power plant stacks, followed by aging and atmospheric transformation of emissions in a mobile laboratory in a manner that simulated downwind power plant plume processing. Secondary organic aerosol (SOA) derived from the biogenic volatile organic compound α-pinene was added in some experiments, and in others ammonia was added to neutralize strong acidity. Specifically, four scenarios were studied at each plant: primary particles (P); secondary (oxidized) particles (PO); oxidized particles + secondary organic aerosol (SOA) (POS); and oxidized and neutralized particles + SOA (PONS). Extensive exposure characterization was carried out, including gas-phase and particulate species. Male Sprague Dawley rats were exposed for 6 hours to filtered air or different atmospheric mixtures. Toxicological endpoints included (1) breathing pattern; (2) bronchoalveolar lavage

  6. Mitigating Spreadsheet Model Risk with Python Open Source Infrastructure

    OpenAIRE

    Beavers, Oliver

    2018-01-01

    Across an aggregation of EuSpRIG presentation papers, two maxims hold true: spreadsheet models are akin to software, yet spreadsheet developers are not software engineers. As such, the lack of traditional software engineering tools and protocols invites a higher rate of error in the end result. This paper lays the groundwork for spreadsheet modelling professionals to develop reproducible audit tools using freely available, open source packages built with the Python programming language, enablin...

  7. OSeMOSYS: The Open Source Energy Modeling System

    International Nuclear Information System (INIS)

    Howells, Mark; Rogner, Holger; Strachan, Neil; Heaps, Charles; Huntington, Hillard; Kypreos, Socrates; Hughes, Alison; Silveira, Semida; DeCarolis, Joe; Bazillian, Morgan; Roehrl, Alexander

    2011-01-01

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, an algebraic formulation, an implementation in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models lack this emphasis on compactness and openness, which makes the barrier to entry much higher for new users and makes the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts including adding functionality to the LEAP model. - Highlights: → OSeMOSYS is a new free and open source energy systems model. → The model is written in a simple, open, flexible and transparent manner to support teaching. → OSeMOSYS is based on free software and optimizes using a free solver. → The model replicates the results of many popular tools, such as MARKAL. → A link between OSeMOSYS and LEAP has been developed.

  8. Automatic landslide detection from LiDAR DTM derivatives by geographic-object-based image analysis based on open-source software

    Science.gov (United States)

    Knevels, Raphael; Leopold, Philip; Petschko, Helene

    2017-04-01

    With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information on the earth surface that such data provide and to analyse their limitations. Specifically in the field of natural hazards, digital terrain models (DTM) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches strive towards automatic detection of landslides to speed up the generation of landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software, such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), or TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters, such as slope, surface roughness, curvature, or flow direction, (2) finding the optimal scale parameter by the use of an objective function, (3) multi-scale segmentation, (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding, (5) assessment of the classification performance using a pre-existing landslide inventory, and (6) post-processing analysis for further use in landslide inventories. The results of the developed open-source approach demonstrated good

  9. Unexpected source of Fukushima-derived radiocesium to the coastal ocean of Japan

    Science.gov (United States)

    Sanial, Virginie; Buesseler, Ken O.; Charette, Matthew A.; Nagao, Seiya

    2017-12-01

    Synthesizing published data, we provide a quantitative summary of the global biogeochemical cycle of vanadium (V), including both human-derived and natural fluxes. Through mining of V ores (130 × 109 g V/y) and extraction and combustion of fossil fuels (600 × 109 g V/y), humans are the predominant force in the geochemical cycle of V at Earth’s surface. Human emissions of V to the atmosphere are now likely to exceed background emissions by as much as a factor of 1.7, and, presumably, we have altered the deposition of V from the atmosphere by a similar amount. Excessive V in air and water has potential, but poorly documented, consequences for human health. Much of the atmospheric flux probably derives from emissions from the combustion of fossil fuels, but the magnitude of this flux depends on the type of fuel, with relatively low emissions from coal and higher contributions from heavy crude oils, tar sands bitumen, and petroleum coke. Increasing interest in petroleum derived from unconventional deposits is likely to lead to greater emissions of V to the atmosphere in the near future. Our analysis further suggests that the flux of V in rivers has been incremented by about 15% from human activities. Overall, the budget of dissolved V in the oceans is remarkably well balanced—with about 40 × 109 g V/y to 50 × 109 g V/y inputs and outputs, and a mean residence time for dissolved V in seawater of about 130,000 y with respect to inputs from rivers.

  10. MODEL OF A PERSON WALKING AS A STRUCTURE-BORNE SOUND SOURCE

    DEFF Research Database (Denmark)

    Lievens, Matthias; Brunskog, Jonas

    2007-01-01

    has to be considered and the contact history must be integrated in the model. This is complicated by the fact that nonlinearities occur at different stages in the system, either on the source or the receiver side. Not only lightweight structures but also soft floor coverings would benefit from an accurate

  11. Modeling Noise Sources and Propagation in External Gear Pumps

    Directory of Open Access Journals (Sweden)

    Sangbeom Woo

    2017-07-01

    Full Text Available As a key component in power transfer, positive displacement machines often represent the major source of noise in hydraulic systems. Thus, investigation into the sources of noise and discovering strategies to reduce noise is a key part of improving the performance of current hydraulic systems, as well as applying fluid power systems to a wider range of applications. The present work aims at developing modeling techniques on the topic of noise generation caused by external gear pumps for high pressure applications, which can be useful and effective in investigating the interaction between noise sources and radiated noise and establishing the design guide for a quiet pump. In particular, this study classifies the internal noise sources into four types of effective load functions and, in the proposed model, these load functions are applied to the corresponding areas of the pump case in a realistic way. Vibration and sound radiation can then be predicted using a combined finite element and boundary element vibro-acoustic model. The radiated sound power and sound pressure for the different operating conditions are presented as the main outcomes of the acoustic model. The noise prediction was validated through comparison with the experimentally measured sound power levels.

  12. Modeling of an autonomous microgrid for renewable energy sources integration

    DEFF Research Database (Denmark)

    Serban, I.; Teodorescu, Remus; Guerrero, Josep M.

    2009-01-01

    The frequency stability analysis in an autonomous microgrid (MG) with renewable energy sources (RES) is a continuously studied issue. This paper presents an original method for modeling an autonomous MG with a battery energy storage system (BESS) and a wind power plant (WPP), with the purpose...

  13. Development of the detection technology of the source area derived from nuclear activities

    International Nuclear Information System (INIS)

    Suh, Kyungsuk; Kim, Ingyu; Keum, Dongkwon; Lim, Kwangmuk; Lee, Jinyong

    2012-07-01

    - It is necessary to establish overall preparedness for the analysis of nuclear activities in neighboring countries, given the increasing construction of nuclear power plants and reprocessing facilities in China, North Korea, Japan and Russia. - In Korea, analyses and measurements of nuclear activities have been conducted; however, a detection technology to identify the source area has not been developed. It is important to estimate the source origin of radioisotopes from the neighboring countries, including Korea, for the surveillance and safety of covert nuclear activities in the Northeast Asia region. - In this study, a database, the treatment of weather data and the development of a connection module were completed to track the origin of radioisotopes in the first year of the research. The database covers reactor types and locations in China, Taiwan, Japan and Korea, together with the amounts of noble gases released into the air.

  14. Development of the detection technology of the source area derived from nuclear activities

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Kyungsuk; Kim, Ingyu; Keum, Dongkwon; Lim, Kwangmuk; Lee, Jinyong

    2012-07-15

    - It is necessary to establish overall preparedness for the analysis of nuclear activities in neighboring countries, given the increasing construction of nuclear power plants and reprocessing facilities in China, North Korea, Japan and Russia. - In Korea, analyses and measurements of nuclear activities have been conducted; however, a detection technology to locate the source area has not been developed. It is important to estimate the source origin of radioisotopes from the neighboring countries, including Korea, for the surveillance and safety of covert nuclear activities in the Northeast Asia region. - In this study, construction of the database, treatment of the weather data and development of a connection module were carried out to track the origin of radioisotopes in the first year of the research. A database was built covering the reactor types and locations in China, Taiwan, Japan and Korea, and the amounts of noble gases released into the air.

  15. Bright and durable field emission source derived from refractory Taylor cones

    Science.gov (United States)

    Hirsch, Gregory

    2016-12-20

    A method of producing field emitters with improved brightness and durability, relying on the creation of a liquid Taylor cone from electrically conductive materials with high melting points. The method calls for melting the end of a wire substrate with a focused laser beam while imposing a high positive potential on the material. The resulting molten Taylor cone is then rapidly quenched by cessation of the laser power. Rapid quenching is facilitated in large part by radiative cooling, resulting in structures whose characteristics closely match those of the original liquid Taylor cone. Frozen Taylor cones thus obtained yield desirable tip end forms for field emission sources in electron beam applications. Regeneration of the frozen Taylor cones in situ is readily accomplished by repeating the initial formation procedure. The high temperature liquid Taylor cones can also be employed as bright ion sources with chemical elements previously considered impractical to implement.

  16. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  17. Depletion and capture: revisiting “The source of water derived from wells”

    Science.gov (United States)

    Konikow, Leonard F.; Leake, Stanley A.

    2014-01-01

    A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems.
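
    The budget identity underlying this abstract (cumulative pumpage equals storage depletion plus capture) is easy to make concrete. The sketch below assumes capture grows exponentially toward the pumping rate with a response time tau, exactly the kind of idealized linear response the authors caution about; the pumping rate and response time are invented, not taken from the study.

    ```python
    # Toy partitioning of cumulative pumpage into depletion and capture.
    # Assumes a linear-response aquifer: capture rate = Q * (1 - exp(-t/tau)).
    import numpy as np

    Q = 1.0      # constant pumping rate, volume per year (assumed)
    tau = 30.0   # aquifer response time in years (assumed)
    t = np.array([1.0, 10.0, 50.0, 100.0, 200.0])

    cum_pumpage = Q * t
    cum_capture = Q * (t - tau * (1.0 - np.exp(-t / tau)))
    cum_depletion = cum_pumpage - cum_capture

    for ti, d, c in zip(t, cum_depletion / cum_pumpage, cum_capture / cum_pumpage):
        print(f"t = {ti:5.0f} yr: depletion fraction {d:.2f}, capture fraction {c:.2f}")
    ```

    At t = 200 yr this toy model happens to give a depletion fraction near 0.15, the same order as the long-developed systems surveyed in the paper.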

  18. Wheat multiple synthetic derivatives: a new source for heat stress tolerance adaptive traits

    Science.gov (United States)

    Elbashir, Awad Ahmed Elawad; Gorafi, Yasir Serag Alnor; Tahir, Izzat Sidahmed Ali; Kim, June-Sik; Tsujimoto, Hisashi

    2017-01-01

    Heat stress is detrimental to wheat (Triticum aestivum L.) productivity. In this study, we aimed to select heat-tolerant plants from a multiple synthetic derivatives (MSD) population and evaluate their agronomic and physiological traits. We selected six tolerant plants from the population with the background of the cultivar ‘Norin 61’ (N61) and established six MNH (MSD population of N61 selected as heat stress-tolerant) lines. We grew these lines with N61 in the field and in a growth chamber. In the field, we used optimum and late sowings to ensure plant exposure to heat. In the growth chamber, in addition to N61, we used the heat-tolerant cultivars ‘Gelenson’ and ‘Bacanora’. We confirmed that the MNH2 and MNH5 lines acquired heat tolerance. These lines had higher photosynthesis and stomatal conductance and exhibited no reduction in grain yield and biomass under heat stress compared to N61. We noticed that N61 had relatively good adaptability to heat stress. Our results indicate that the MSD population includes the diversity of Aegilops tauschii and is a promising resource for uncovering useful quantitative traits derived from this wild species. Selected lines could be useful for heat stress tolerance breeding. PMID:28744178

  19. Leukocyte- and endothelial-derived microparticles: a circulating source for fibrinolysis

    Science.gov (United States)

    Lacroix, Romaric; Plawinski, Laurent; Robert, Stéphane; Doeuvre, Loïc; Sabatier, Florence; Martinez de Lizarrondo, Sara; Mezzapesa, Anna; Anfosso, Francine; Leroyer, Aurelie S.; Poullin, Pascale; Jourde, Noémie; Njock, Makon-Sébastien; Boulanger, Chantal M.; Anglés-Cano, Eduardo; Dignat-George, Françoise

    2012-01-01

    Background We recently assigned a new fibrinolytic function to cell-derived microparticles in vitro. In this study we explored the relevance of this novel property of microparticles to the in vivo situation. Design and Methods Circulating microparticles were isolated from the plasma of patients with thrombotic thrombocytopenic purpura or cardiovascular disease and from healthy subjects. Microparticles were also obtained from purified human blood cell subpopulations. The plasminogen activators on microparticles were identified by flow cytometry and enzyme-linked immunosorbent assays; their capacity to generate plasmin was quantified with a chromogenic assay and their fibrinolytic activity was determined by zymography. Results Circulating microparticles isolated from patients generate a range of plasmin activity at their surface. This property was related to a variable content of urokinase-type plasminogen activator and/or tissue plasminogen activator. Using distinct microparticle subpopulations, we demonstrated that plasmin is generated on endothelial and leukocyte microparticles, but not on microparticles of platelet or erythrocyte origin. Leukocyte-derived microparticles bear urokinase-type plasminogen activator and its receptor whereas endothelial microparticles carry tissue plasminogen activator and tissue plasminogen activator/inhibitor complexes. Conclusions Endothelial and leukocyte microparticles, bearing respectively tissue plasminogen activator or urokinase-type plasminogen activator, support a part of the fibrinolytic activity in the circulation which is modulated in pathological settings. Awareness of this blood-borne fibrinolytic activity conveyed by microparticles provides a more comprehensive view of the role of microparticles in the hemostatic equilibrium. PMID:22733025

  20. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or the source spectrum, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the cases of high- and low-energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high-energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
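
    Since the abstract describes the averaging procedure but not its mechanics, a small numerical sketch may help. Everything here (the spectral shapes, the mixing weight between fluence and source spectra, and the toy stopping power) is an invented placeholder, not the paper's Geant4-DNA data; only the averaging formulas, frequency-average weighted by the spectrum and dose-average weighted by spectrum times stopping power, follow standard microdosimetric practice.

    ```python
    # Frequency- and dose-averaged stopping powers over a combined spectrum.
    # All spectra, the mixing weight and S(E) are invented placeholders.
    import numpy as np

    E = np.logspace(np.log10(11e-9), 0.0, 400)          # MeV, 11 eV up to 1 MeV
    dE = np.gradient(E)

    phi_fluence = 1.0 / E                                # toy slowing-down spectrum
    phi_source = np.exp(-0.5 * ((E - 0.3) / 0.1) ** 2)   # toy initial-energy spectrum
    phi_fluence /= (phi_fluence * dE).sum()              # normalise to unit integral
    phi_source /= (phi_source * dE).sum()

    w = 0.5                                              # assumed combination weight
    phi = w * phi_fluence + (1.0 - w) * phi_source       # combined spectrum

    S = 0.02 * E ** -0.8                                 # toy stopping power S(E)
    freq_avg = (phi * S * dE).sum() / (phi * dE).sum()
    dose_avg = (phi * S ** 2 * dE).sum() / (phi * S * dE).sum()
    print(f"frequency-average: {freq_avg:.3g}  dose-average: {dose_avg:.3g}")
    ```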

  1. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  2. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
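
    As a loose, runnable analogy to the cross-validation the authors recommend (not their mixed ICA/PCA implementation), one can select the number of retained components of a probabilistic PCA model by held-out log-likelihood on Fisher's iris data, one of the datasets mentioned above:

    ```python
    # Cross-validated model selection over the number of components, in the
    # spirit of the abstract; probabilistic PCA stands in for mixed ICA/PCA.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score

    X = load_iris().data
    for k in range(1, X.shape[1]):
        # PCA.score returns the average per-sample log-likelihood of the
        # probabilistic PCA model, so higher is better.
        ll = cross_val_score(PCA(n_components=k), X, cv=5).mean()
        print(f"{k} components: mean held-out log-likelihood = {ll:.2f}")
    ```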

  3. Absorptivity Measurements and Heat Source Modeling to Simulate Laser Cladding

    Science.gov (United States)

    Wirth, Florian; Eisenbarth, Daniel; Wegener, Konrad

    The laser cladding process is gaining importance, as it allows not only the application of surface coatings but also additive manufacturing of three-dimensional parts. In both cases, process simulation can contribute to process optimization. Heat source modeling is one of the main issues for an accurate model and simulation of the laser cladding process. While the laser beam intensity distribution is readily known, the other two main influences on the process's heat input, namely the absorptivity of the applied materials and the attenuation by the powder, are non-trivial to determine. Therefore, calorimetry measurements were carried out. The measurement method and the measurement results for laser cladding of Stellite 6 on structural steel S 235 and for the processing of Inconel 625 are presented, both using a CO2 laser and a high power diode laser (HPDL). Additionally, a heat source model is deduced.
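
    The calorimetric principle here is plain energy bookkeeping: the effective absorptivity is the heat retained by the part divided by the energy delivered by the beam. A back-of-the-envelope sketch with invented numbers (not the paper's measurements for Stellite 6 or Inconel 625):

    ```python
    # Energy-balance estimate of absorptivity from a calorimetry-style test.
    # All numbers are illustrative assumptions.
    mass = 0.250          # kg, substrate coupon
    c_p = 490.0           # J/(kg K), approximate specific heat of steel
    delta_T = 12.0        # K, measured temperature rise
    laser_power = 2000.0  # W
    t_on = 1.5            # s, irradiation time

    absorptivity = mass * c_p * delta_T / (laser_power * t_on)
    print(f"estimated absorptivity: {absorptivity:.2f}")   # ~0.49 with these numbers
    ```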

  4. Diffusion theory model for optimization calculations of cold neutron sources

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Cold neutron sources are becoming increasingly important and common experimental facilities at research reactors around the world, owing to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab LD2 cold source. The simplicity of the model permits an analytical solution, from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. A second, more sophisticated model is also described and the results compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations.
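
    A minimal numerical counterpart of such a two-group slab model, with invented constants rather than LD2 data, is sketched below: thermal neutrons enter one face, feed the cold group through moderation, and the cold current leaving the far face is scanned against slab thickness, the quantity whose optimum the analytical solution explains.

    ```python
    # Two-group, 1-D diffusion sketch of a slab cold source (toy constants).
    import numpy as np

    def cold_leakage(L, n=200, D1=1.0, s1=0.05, D2=0.8, a2=0.02, s12=0.05):
        h = L / (n - 1)
        # group 1 (thermal): -D1*phi1'' + s1*phi1 = 0, phi1(0)=1, phi1(L)=0
        A = np.zeros((n, n)); b = np.zeros(n)
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = -D1 / h**2
            A[i, i] = 2.0 * D1 / h**2 + s1
        A[0, 0] = A[-1, -1] = 1.0; b[0] = 1.0
        phi1 = np.linalg.solve(A, b)
        # group 2 (cold): -D2*phi2'' + a2*phi2 = s12*phi1, vacuum boundaries
        B = np.zeros((n, n)); c = s12 * phi1
        for i in range(1, n - 1):
            B[i, i - 1] = B[i, i + 1] = -D2 / h**2
            B[i, i] = 2.0 * D2 / h**2 + a2
        B[0, 0] = B[-1, -1] = 1.0; c[0] = c[-1] = 0.0
        phi2 = np.linalg.solve(B, c)
        return D2 * (phi2[-2] - phi2[-1]) / h   # outward cold current at x = L

    for L in (2.0, 5.0, 10.0, 20.0, 40.0):
        print(f"slab thickness {L:5.1f}: cold leakage {cold_leakage(L):.5f}")
    ```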

  5. The Arbitrage Pricing Model: A Pedagogic Derivation and a Spreadsheet-Based Illustration

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2016-05-01

    Full Text Available This paper derives, from a pedagogic perspective, the Arbitrage Pricing Model, which is an important asset pricing model in modern finance. The derivation is based on the idea that, if a self-financed investment has no risk exposures, the payoff from the investment can only be zero. Microsoft Excel plays an important pedagogic role in this paper. The Excel illustration not only helps students recognize more fully the various nuances in the model derivation, but also serves as a good starting point for students to explore on their own the relevance of the noise issue in the model derivation.
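
    The no-arbitrage argument sketched in the abstract can be checked numerically: when expected excess returns are exactly linear in the factor loadings, any portfolio with zero factor exposures must also have zero expected excess return. A small Python stand-in for the paper's Excel illustration, with made-up loadings and factor premia:

    ```python
    # Numerical check of the APT pricing relation (toy numbers throughout).
    import numpy as np
    from scipy.linalg import null_space

    betas = np.array([[1.0, 0.5],
                      [0.8, 1.2],
                      [0.3, 0.9]])   # factor loadings of three assets
    lam = np.array([0.04, 0.02])     # factor risk premia (assumed)
    mu = betas @ lam                 # APT: expected excess returns

    # A zero-exposure portfolio lies in the null space of betas.T.
    w = null_space(betas.T)[:, 0]
    print("factor exposures:", betas.T @ w)     # numerically ~ [0, 0]
    print("expected excess return:", w @ mu)    # ~ 0, as the APT requires
    ```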

  6. Residential radon in Finland: sources, variation, modelling and dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Arvela, H

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.).

  7. Residential radon in Finland: sources, variation, modelling and dose comparisons

    International Nuclear Information System (INIS)

    Arvela, H.

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.)

  8. Depletion and capture: revisiting "the source of water derived from wells".

    Science.gov (United States)

    Konikow, L F; Leake, S A

    2014-09-01

    A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  9. Measurement of circulating cell-derived microparticles by flow cytometry: sources of variability within the assay.

    Science.gov (United States)

    Ayers, Lisa; Kohler, Malcolm; Harrison, Paul; Sargent, Ian; Dragovic, Rebecca; Schaap, Marianne; Nieuwland, Rienk; Brooks, Susan A; Ferry, Berne

    2011-04-01

    Circulating cell-derived microparticles (MPs) have been implicated in several disease processes and elevated levels are found in many pathological conditions. The detection and accurate measurement of MPs, although attracting widespread interest, is hampered by a lack of standardisation. The aim of this study was to establish a reliable flow cytometric assay to measure distinct subtypes of MPs in disease and to identify any significant causes of variability in MP quantification. Circulating MPs within plasma were identified by their phenotype (platelet, endothelial or leukocyte origin) and annexin-V positivity (AnnV+). The influence of key variables (i.e. time between venepuncture and centrifugation, washing steps, the number of centrifugation steps, freezing/long-term storage and temperature of thawing) on MP measurement was investigated. Increasing time between venepuncture and centrifugation leads to increased MP levels. Washing samples results in decreased AnnV+ MPs (P=0.002) and platelet-derived MPs (PMPs) (P=0.002). Double centrifugation of MPs prior to freezing decreases numbers of AnnV+ MPs (P=0.0004) and PMPs (P=0.0004). A single freeze-thaw cycle of samples led to an increase in AnnV+ MPs (P=0.0020) and PMPs (P=0.0039). Long-term storage of MP samples at -80 °C resulted in decreased MP levels. This study found that minor protocol changes significantly affected MP levels. This is one of the first studies attempting to standardise a method for obtaining and measuring circulating MPs. Standardisation will be essential for successful development of MP technologies, allowing direct comparison of results between studies and leading to a greater understanding of MPs in disease. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  10. Dynamic modeling of the advanced neutron source reactor

    International Nuclear Information System (INIS)

    March-Leuba, J.; Ibn-Khayat, M.

    1990-01-01

    The purpose of this paper is to provide a summary description and some applications of a computer model that has been developed to simulate the dynamic behavior of the advanced neutron source (ANS) reactor. The ANS dynamic model is coded in the advanced continuous simulation language (ACSL), and it represents the reactor core, vessel, primary cooling system, and secondary cooling systems. The use of a simple dynamic model in the early stages of the reactor design has proven very valuable, not only in the development of the control and plant protection systems but also in the design of components such as pumps and heat exchangers that are usually sized based on steady-state calculations

  11. Derivation and analysis of the Feynman-alpha formula for deterministically pulsed sources

    International Nuclear Information System (INIS)

    Wright, J.; Pazsit, I.

    2004-03-01

    The purpose of this report is to give a detailed description of the calculation of the Feynman-alpha formula with deterministically pulsed sources. In contrast to previous calculations, Laplace transform and complex function methods are used to arrive at a compact solution in the form of a Fourier series-like expansion. The advantage of this method is that it is capable of treating various pulse shapes. In particular, in addition to square and Dirac delta pulses, a more realistic Gauss-shaped pulse is also considered here. The final solution for the modified variance-to-mean, that is the Feynman Y(t) function, can be quantitatively evaluated fast and with little computational effort. The analytical solutions obtained are then analysed quantitatively. The behaviour of the number of neutrons in the system is investigated in detail, together with the transient that follows the switching on of the source. An analysis of the behaviour of the Feynman Y(t) function was made with respect to the pulse width and repetition frequency. Lastly, the possibility of using the formulae for the extraction of the parameter alpha from a simulated measurement is also investigated
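
    For orientation, the Feynman Y(t) quantity derived in the report is the variance-to-mean ratio of gated detector counts minus one; it vanishes for uncorrelated (Poisson) events and grows with gate width when fission chains correlate the counts. A sketch of the estimator on synthetic Poisson data, where Y should hover near zero for every gate width:

    ```python
    # Feynman variance-to-mean estimator on synthetic (Poisson) event times.
    import numpy as np

    rng = np.random.default_rng(0)
    t_end = 1000.0
    times = rng.uniform(0.0, t_end, size=50_000)   # detection times, arbitrary units

    def feynman_y(times, gate, t_end):
        counts, _ = np.histogram(times, bins=np.arange(0.0, t_end + gate, gate))
        return counts.var(ddof=1) / counts.mean() - 1.0

    for gate in (0.1, 1.0, 10.0):
        print(f"gate width {gate:5.1f}: Y = {feynman_y(times, gate, t_end):+.4f}")
    ```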

  12. Use of a probabilistic PBPK/PD model to calculate Data Derived Extrapolation Factors for chlorpyrifos.

    Science.gov (United States)

    Poet, Torka S; Timchalk, Charles; Bartels, Michael J; Smith, Jordan N; McDougal, Robin; Juberg, Daland R; Price, Paul S

    2017-06-01

    A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model combined with Monte Carlo analysis of inter-individual variation was used to assess the effects of the insecticide chlorpyrifos and its active metabolite, chlorpyrifos-oxon, in humans. The PBPK/PD model has previously been validated and used to describe physiological changes in typical individuals as they grow from birth to adulthood. This model was updated to include physiological and metabolic changes that occur with pregnancy. The model was then used to assess the impact of inter-individual variability in physiology and biochemistry on predictions of internal dose metrics and to quantitatively assess the impact of major sources of parameter uncertainty and biological diversity on the pharmacodynamics of red blood cell acetylcholinesterase inhibition. These metrics were determined in potentially sensitive populations of infants, adult women, pregnant women, and a combined population of adult men and women. The parameters primarily responsible for inter-individual variation in RBC acetylcholinesterase inhibition were related to the metabolic clearance of CPF and CPF-oxon. Data Derived Extrapolation Factors (DDEFs), which use quantitative differences in these metrics to address intra-species variation in physiology and biochemistry in place of default uncertainty factors, were developed for these same populations. The DDEFs were less than 4 for all populations. These data and this modeling approach will be useful in ongoing and future human health risk assessments for CPF and could be used for other chemicals with potential human exposure. Copyright © 2017 Elsevier Inc. All rights reserved.
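
    A stripped-down version of the Monte Carlo step may clarify how a DDEF emerges. Here inter-individual variation in metabolic clearance (the driver identified above) is sampled from an assumed lognormal, propagated to a surrogate steady-state dose metric, and the DDEF taken as the ratio of an upper percentile to the median; the distribution and numbers are invented, not those of the chlorpyrifos model.

    ```python
    # Monte Carlo sketch of a data-derived extrapolation factor (DDEF).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    clearance = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)  # L/h, assumed
    dose_rate = 1.0                                                  # mg/h, fixed
    metric = dose_rate / clearance    # surrogate steady-state concentration

    median = np.median(metric)
    p99 = np.percentile(metric, 99)
    print(f"DDEF = 99th percentile / median = {p99 / median:.2f}")
    ```

    With a lognormal sigma of 0.3 this gives a DDEF near 2, consistent in magnitude with the paper's finding that all DDEFs were below 4.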

  13. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Full Text Available Important features of electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in the ECRIS plasma. It is shown that a gas-dynamical ion confinement mechanism is sufficient to provide ion production rates in an ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to experiment for a few sources. Changes in the simulated extracted ion currents are obtained when varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  14. Mathematical modelling of electricity market with renewable energy sources

    International Nuclear Information System (INIS)

    Marchenko, O.V.

    2007-01-01

    The paper addresses an electricity market with conventional energy sources based on fossil fuel and non-conventional renewable energy sources (RESs) with stochastic operating conditions. A mathematical model of long-run equilibrium in the market (accounting for the development of generation capacities) is constructed. The problem of determining the optimal parameters that maximize a social criterion of efficiency is also formulated. The calculations performed have shown that an adequate choice of price cap, environmental tax, subsidies to RESs and consumption tax makes it possible to take into account external effects (environmental damage) and to create incentives for investors to construct conventional and renewable energy sources in a mix that is optimal from the point of view of society. (author)
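
    The flavour of such a market model can be miniaturized to a single dispatch decision. In the linear program below, the environmental damage enters the conventional source's cost the way the paper's environmental tax internalizes the externality, and the cap on renewable output stands in for its stochastic availability; every number is invented.

    ```python
    # Minimal cost-minimizing dispatch with an internalized damage cost.
    from scipy.optimize import linprog

    # decision variables: [conventional_energy, renewable_energy] in MWh
    cost = [50.0 + 30.0, 70.0]   # fuel cost + damage adder vs RES cost, $/MWh
    demand = 100.0               # MWh that must be served
    res_limit = 60.0             # availability cap on the renewable source

    res = linprog(c=cost,
                  A_ub=[[0.0, 1.0]], b_ub=[res_limit],
                  A_eq=[[1.0, 1.0]], b_eq=[demand],
                  bounds=[(0.0, None), (0.0, None)])
    print("optimal mix (conventional, renewable):", res.x)
    ```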

  15. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    Full Text Available The geospatial industry is forecast to grow enormously in the forthcoming years, with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission “Making geospatial education and opportunities accessible to all”. Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, i.e., GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, i.e., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., NCGIA Core Curriculum, URISA Body Of Knowledge, USGIF Essential Body Of Knowledge, the “Geographic Information: Need to Know", currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US-American-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the certification framework. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  16. A Framework for an Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to grow enormously in the forthcoming years, with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, i.e., GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, i.e., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., NCGIA Core Curriculum, URISA Body Of Knowledge, USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know", currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US-American-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the certification framework. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  17. Modeling a Hypothetical 170Tm Source for Brachytherapy Applications

    International Nuclear Information System (INIS)

    Enger, Shirin A.; D'Amours, Michel; Beaulieu, Luc

    2011-01-01

    Purpose: To perform absorbed dose calculations based on Monte Carlo simulations for a hypothetical 170Tm source and to investigate the influence of the encapsulating material on the energy spectrum of the emitted electrons and photons. Methods: The GEANT4 Monte Carlo code, version 9.2 patch 2, was used to simulate the decay process of 170Tm and to calculate the absorbed dose distribution using the GEANT4 Penelope physics models. A hypothetical 170Tm source based on the Flexisource brachytherapy design, with the active core set as a pure thulium cylinder (length 3.5 mm and diameter 0.6 mm) and different cylindrical source encapsulations (length 5 mm and thickness 0.125 mm) constructed of titanium, stainless steel, gold, or platinum, was simulated. The radial dose function for the line source approximation was calculated following the TG-43U1 formalism for the stainless-steel encapsulation. Results: For the titanium and stainless-steel encapsulations, 94% of the total bremsstrahlung is produced inside the core, 4.8 and 5.5% in the titanium and stainless-steel capsules, respectively, and less than 1% in water. For the gold capsule, 85% is produced inside the core, 14.2% inside the gold capsule, and a negligible amount in water. Conclusions: The 170Tm source is primarily a bremsstrahlung source, with the majority of bremsstrahlung photons being generated in the source core and experiencing little attenuation in the source encapsulation. Electrons are efficiently absorbed by the gold and platinum encapsulations. However, for the stainless-steel capsule (or other lower-Z encapsulations) electrons will escape. The dose from these electrons is dominant over the photon dose in the first few millimeters but is not taken into account by current standard treatment planning systems. The total energy spectrum of photons emerging from the source depends on the encapsulation composition and results in mean photon energies well above 100 keV. This is higher than the main gamma-ray energy peak at 84 keV. Based on our
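
    For context on the radial dose function mentioned in the Methods: in the TG-43U1 line-source formalism, dose rates are normalized by the geometry function G_L(r, θ) = β / (L r sin θ), with β the angle the active core subtends at the calculation point. A generic sketch using this paper's 3.5 mm core length (the function itself is standard TG-43, not the authors' code):

    ```python
    # TG-43U1 line-source geometry function, evaluated on the transverse axis.
    import numpy as np

    L = 0.35  # cm, active core length of the hypothetical 170Tm source

    def G_line(r_cm, theta):
        """Line-source geometry function; valid off the source axis (y > 0)."""
        y = r_cm * np.sin(theta)   # perpendicular distance to the source axis
        z = r_cm * np.cos(theta)   # distance along the axis from the centre
        beta = np.arctan((L / 2 - z) / y) + np.arctan((L / 2 + z) / y)
        return beta / (L * r_cm * np.sin(theta))

    for r in (0.25, 0.5, 1.0, 2.0, 5.0):
        # far from the source, G_L approaches the point-source 1/r^2 behaviour
        print(f"r = {r:4.2f} cm: G_L = {G_line(r, np.pi / 2):.4f} cm^-2,"
              f" 1/r^2 = {1 / r**2:.4f}")
    ```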

  18. Open Sourcing Social Change: Inside the Constellation Model

    Directory of Open Access Journals (Sweden)

    Tonya Surman

    2008-09-01

    Full Text Available The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a partnership. These constellations are outwardly focused, placing their attention on creating value for those in the external environment rather than on the partnership itself. While serious effort is invested into core partnership governance and management, most of the energy is devoted to the decision making, resources and collaborative effort required to create social value. The constellations drive and define the partnership. The constellation model emerged from a deep understanding of the power of networks and peer production. Leadership rotates fluidly amongst partners, with each partner having the freedom to head up a constellation and to participate in constellations that carry out activities that are of more peripheral interest. The Internet provided the platform, the partner network enabled the expertise to align itself, and the goal of reducing chemical exposure in children kept the energy flowing. Building on seven years of experience, this article provides an overview of the constellation model, discusses the results from the CPCHE, and identifies similarities and differences between the constellation and open source models.

  19. Host-Derived Sialic Acids Are an Important Nutrient Source Required for Optimal Bacterial Fitness In Vivo.

    Science.gov (United States)

    McDonald, Nathan D; Lubin, Jean-Bernard; Chowdhury, Nityananda; Boyd, E Fidelma

    2016-04-12

    A major challenge facing bacterial intestinal pathogens is competition for nutrient sources with the host microbiota. Vibrio cholerae is an intestinal pathogen that causes cholera, which affects millions each year; however, our knowledge of its nutritional requirements in the intestinal milieu is limited. In this study, we demonstrated that V. cholerae can grow efficiently on intestinal mucus and its component sialic acids and that a tripartite ATP-independent periplasmic transporter (SiaPQM)-deficient mutant, NC1777, was attenuated for colonization using a streptomycin-pretreated adult mouse model. In in vivo competition assays, NC1777 was significantly outcompeted for up to 3 days postinfection. NC1777 was also significantly outcompeted in in vitro competition assays in M9 minimal medium supplemented with intestinal mucus, indicating that sialic acid uptake is essential for fitness. Phylogenetic analyses demonstrated that the ability to utilize sialic acid was distributed among 452 bacterial species from eight phyla. The majority of species belonged to four phyla, Actinobacteria (members of Actinobacillus, Corynebacterium, Mycoplasma, and Streptomyces), Bacteroidetes (mainly Bacteroides, Capnocytophaga, and Prevotella), Firmicutes (members of Streptococcus, Staphylococcus, Clostridium, and Lactobacillus), and Proteobacteria (including Escherichia, Shigella, Salmonella, Citrobacter, Haemophilus, Klebsiella, Pasteurella, Photobacterium, Vibrio, and Yersinia species), mostly commensals and/or pathogens. Overall, our data demonstrate that the ability to take up host-derived sugars, and sialic acid specifically, allows V. cholerae a competitive advantage in intestinal colonization, and that this trait is sporadic in its occurrence and phylogenetic distribution, ancestral in some genera but horizontally acquired in others. Sialic acids are nine-carbon amino sugars that are abundant on all mucous surfaces. The deadly human pathogen Vibrio cholerae contains

  20. Model of the Sgr B2 radio source

    International Nuclear Information System (INIS)

    Gosachinskij, I.V.; Khersonskij, V.K.

    1981-01-01

    A dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. This model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by the turbulent motion of the gas; the turbulence energy dissipates due to magnetic viscosity. This process occurs more rapidly in the dense core, so the core begins to collapse while the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density and size) of the collapse are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established

  1. Assessment of source-receptor relationships of aerosols: An integrated forward and backward modeling approach

    Science.gov (United States)

    Kulkarni, Sarika

    This dissertation presents a scientific framework that facilitates enhanced understanding of aerosol source-receptor (S/R) relationships and their impact on local, regional and global air quality by employing a complementary suite of modeling methods. The receptor-oriented Positive Matrix Factorization (PMF) technique is combined with the Potential Source Contribution Function (PSCF), a trajectory ensemble model, to characterize sources influencing the aerosols measured at Gosan, Korea during spring 2001. It is found that the episodic dust events originating from desert regions in East Asia (EA), which mix with pollution along the transit path, have a significant and pervasive impact on the air quality of Gosan. The intercontinental and hemispheric transport of aerosols is analyzed by a series of emission perturbation simulations with the Sulfur Transport and dEposition Model (STEM), a regional-scale Chemical Transport Model (CTM), evaluated with observations from the 2008 NASA ARCTAS field campaign. This modeling study shows that pollution transport from regions outside North America (NA) contributed ~30% and 20% to NA sulfate and BC surface concentrations, respectively. This study also identifies aerosols transported from the Europe, NA and EA regions as significant contributors to springtime Arctic sulfate and BC. Trajectory ensemble models are combined with source-region-tagged tracer model output to identify the source regions and possible instances of quasi-Lagrangian sampled air masses during the 2006 NASA INTEX-B field campaign. The impact of specific emission sectors from Asia during the INTEX-B period is studied with the STEM model, identifying the residential sector as a potential target for emission reduction to combat global warming. The output from the STEM model, constrained with satellite-derived aerosol optical depth and ground-based measurements of single scattering albedo via an optimal interpolation assimilation scheme, is combined with the PMF technique to
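
    Of the tools named here, the Potential Source Contribution Function is the simplest to sketch: for each grid cell, it is the fraction of trajectory endpoints passing through that cell that are associated with high measured concentrations. The trajectories below are random walks, purely to show the bookkeeping:

    ```python
    # PSCF bookkeeping on fabricated back-trajectories (random walks).
    import numpy as np

    rng = np.random.default_rng(3)
    n_traj, n_steps = 500, 48
    endpoints = np.cumsum(rng.normal(size=(n_traj, n_steps, 2)), axis=1)
    high = rng.random(n_traj) < 0.3    # trajectories tied to polluted samples

    bins = np.linspace(-20.0, 20.0, 21)
    all_pts = endpoints.reshape(-1, 2)
    hi_pts = endpoints[high].reshape(-1, 2)
    H_all, _, _ = np.histogram2d(all_pts[:, 0], all_pts[:, 1], bins=(bins, bins))
    H_hi, _, _ = np.histogram2d(hi_pts[:, 0], hi_pts[:, 1], bins=(bins, bins))
    with np.errstate(invalid="ignore", divide="ignore"):
        pscf = np.where(H_all > 0, H_hi / H_all, np.nan)
    print(f"highest-PSCF cell value: {np.nanmax(pscf):.2f}")
    ```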

  2. Sponge-derived Kocuria and Micrococcus spp. as sources of the new thiazolyl peptide antibiotic kocurin.

    Science.gov (United States)

    Palomo, Sara; González, Ignacio; de la Cruz, Mercedes; Martín, Jesús; Tormo, José Rubén; Anderson, Matthew; Hill, Russell T; Vicente, Francisca; Reyes, Fernando; Genilloud, Olga

    2013-03-28

    Forty-four marine actinomycetes of the family Micrococcaceae isolated from sponges collected primarily in the Florida Keys (USA) were selected from our strain collection to be studied as new sources for the production of bioactive natural products. A 16S rRNA gene-based phylogenetic analysis showed that the strains are members of the genera Kocuria and Micrococcus. To assess their biosynthetic potential, the strains were PCR-screened for the presence of secondary metabolite genes encoding nonribosomal peptide synthetases (NRPS) and polyketide synthases (PKS). A small extract collection of 528 crude extracts generated from nutritional microfermentation arrays was tested for the production of bioactive secondary metabolites against clinically relevant strains (Bacillus subtilis, methicillin-resistant Staphylococcus aureus (MRSA), Acinetobacter baumannii and Candida albicans). Three independent isolates were shown to produce a new anti-MRSA bioactive compound that was identified as kocurin, a new member of the thiazolyl peptide family of antibiotics, emphasizing the role of this family as a prolific resource for novel drugs.

  3. Biological denitrification from mature landfill leachate using a food-waste-derived carbon source.

    Science.gov (United States)

    Yan, Feng; Jiang, Jianguo; Zhang, Haowei; Liu, Nuo; Zou, Quan

    2018-05-15

    Mature landfill leachate containing high ammonia concentrations (>1000 mg/L) is a serious threat to the environment; however, its low COD to TN ratio (C/N) means that biological nitrogen removal requires an external carbon source. In this study, acidogenic liquids derived from food waste and from oil-added food waste were first applied as external carbon sources for biological nitrogen removal from mature landfill leachate in an aerobic/anoxic membrane bioreactor. "Acidogenic liquid b" performed markedly better than commercial sodium acetate, considering its higher denitrification efficiency and slightly faster denitrification rate. The effects of C/N and temperature were investigated under a hydraulic retention time (HRT) of 7 d, which showed that C/N ≥ 7 (25 °C) was enough to meet the general discharge standards for NH4+-N, TN and COD in China. Even for some special areas of China, the more stringent discharge standards (NH4+-N ≤ 8 mg/L, TN ≤ 20 mg/L) could also be achieved under a longer HRT of 14 d and C/N ≥ 6. Notably, the COD concentration in the effluent could also be reduced to 50-55 mg/L without further physical-chemical treatment. This proposed strategy, involving the high-value utilization of food waste, is thus promising for efficient nitrogen removal from mature landfill leachate. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Local indigenous fruit-derived juices as alternate source of acidity regulators.

    Science.gov (United States)

    D'souza, Cassandra; Fernandes, Rosaline; Kudale, Subhash; Naik, Azza Silotry

    2018-03-01

    Acidity regulators are additives that alter and control food acidity. The objective of this study was to explore local indigenous fruits as sources of natural acidity regulators. Juices extracted from Garcinia indica (kokum), Embilica officinalis (amla) and Tamarindus indica (tamarind) were used as acidulants for media such as coconut milk and bottle gourd juice. The buffering capacity β, acid composition, antioxidant activity and shelf life of the acidified media were estimated. Potentiometric titration showed G. indica to possess the highest buffering capacity in both ranges. High-performance liquid chromatography analysis showed that T. indica contained a high level of tartaric acid (4.84 ± 0.01 mg/g), while G. indica had citric acid (22.37 ± 0.84 mg/g) and E. officinalis had citric acid (2.75 ± 0.02 mg/g) along with ascorbic acid (2.68 ± 0.01 mg/g). 1,1-Diphenyl-2-picrylhydrazyl scavenging activity was high for E. officinalis (91.24 ± 0.66%) and T. indica (90.93 ± 0.817%) and relatively lower for G. indica (34.61 ± 3.66%). The shelf-life study showed the total plate count to be within the prescribed limits for up to a week, in accordance with safety regulations. This investigation confirmed the suitability of indigenous fruit juices as alternatives to existing acidity regulators. © 2017 Society of Chemical Industry.

  5. Plant-Derived Natural Products as Sources of Anti-Quorum Sensing Compounds

    Directory of Open Access Journals (Sweden)

    Kok-Gan Chan

    2013-05-01

    Full Text Available Quorum sensing is a system of stimuli and responses in relation to bacterial cell population density that regulates gene expression, including virulence determinants. Consequently, quorum sensing has been an attractive target for the development of novel anti-infective measures that do not rely on the use of antibiotics. Anti-quorum sensing has been a promising strategy to combat bacterial infections, as it is unlikely to give rise to multidrug-resistant pathogens since it does not impose any selection pressure. A number of anti-quorum sensing approaches have been documented, and plant-based natural products have been extensively studied in this context. Plant matter is one of the major sources of chemicals in use today in various industries, ranging from the pharmaceutical, cosmetic, and food biotechnology to the textile industries. Just like animals and humans, plants are constantly exposed to bacterial infections; it is therefore logical to expect that plants have developed sophisticated chemical mechanisms to combat pathogens. In this review, we survey the various types of plant-based natural products that exhibit anti-quorum sensing properties and their anti-quorum sensing mechanisms.

  6. Nitrate source apportionment in a subtropical watershed using Bayesian model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Shi, Jiachun, E-mail: jcshi@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Wu, Laosheng, E-mail: laowu@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Jiang, Yonghai [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012 (China)

    2013-10-01

    Nitrate (NO3−) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO3− concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L−1) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L−1). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L−1 NO3−. Four sources of NO3− (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl−, NO3−, HCO3−, SO42−, Ca2+, K+, Mg2+, Na+, dissolved oxygen (DO)] and a dual isotope approach (δ15N–NO3− and δ18O–NO3−). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet seasons: AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO3− to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO3−, better

  7. Nitrate source apportionment in a subtropical watershed using Bayesian model

    International Nuclear Information System (INIS)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao; Shi, Jiachun; Wu, Laosheng; Jiang, Yonghai

    2013-01-01

    Nitrate (NO3−) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO3− concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L−1) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L−1). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L−1 NO3−. Four sources of NO3− (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl−, NO3−, HCO3−, SO42−, Ca2+, K+, Mg2+, Na+, dissolved oxygen (DO)] and a dual isotope approach (δ15N–NO3− and δ18O–NO3−). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet seasons: AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO3− to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO3−, better agricultural management practices and sewage disposal programs can be implemented to sustain water quality in subtropical watersheds
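
    To expose the mechanics behind this kind of Bayesian apportionment without the full SIAR machinery, the sketch below importance-samples a Dirichlet prior over the four source proportions (AD, SN, SF, M and S) and weights each draw by a Gaussian likelihood on a dual-isotope mixture signature. The source signatures, the observation and the error term are invented placeholders, not the Changxing data.

    ```python
    # Importance-sampling sketch of a dual-isotope Bayesian mixing model.
    import numpy as np

    rng = np.random.default_rng(1)
    # (delta15N, delta18O) signatures of the four sources: assumed values
    sources = np.array([[ 2.0, 55.0],    # AD
                        [ 5.0,  3.0],    # SN
                        [ 0.0, -5.0],    # SF
                        [12.0,  1.0]])   # M and S
    obs = np.array([7.0, 2.0])           # observed mixture signature (invented)
    sigma = 1.5                          # combined error term (assumed)

    props = rng.dirichlet(np.ones(4), size=200_000)   # prior draws
    pred = props @ sources                            # predicted mixture signatures
    log_w = -0.5 * np.sum(((pred - obs) / sigma) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    posterior_mean = w @ props
    for name, p in zip(["AD", "SN", "SF", "M and S"], posterior_mean):
        print(f"{name}: {p:.2f}")
    ```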

  8. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    Science.gov (United States)

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtually modelled pelves. An adult human cadaver was scanned under varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images, resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, the results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
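
    The precision metric reported here, per-point distances between repeated virtual models and the fraction of points below the 1 mm and 2 mm thresholds, is straightforward once the meshes share point correspondence. A sketch on randomly perturbed stand-in coordinates (not CT-derived data):

    ```python
    # Per-point precision of two corresponding meshes, fabricated for the demo.
    import numpy as np

    rng = np.random.default_rng(7)
    mesh_a = rng.normal(size=(10_000, 3)) * 100.0               # reference model, mm
    mesh_b = mesh_a + rng.normal(scale=0.4, size=mesh_a.shape)  # re-segmented model

    d = np.linalg.norm(mesh_a - mesh_b, axis=1)   # point-to-point distances, mm
    for thr in (1.0, 2.0):
        print(f"fraction of points within {thr} mm: {(d < thr).mean():.3f}")
    ```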

  9. Receptor models for source apportionment of remote aerosols in Brazil

    International Nuclear Information System (INIS)

    Artaxo Netto, P.E.

    1985-11-01

    The PIXE (particle induced X-ray emission) and PESA (proton elastic scattering analysis) methods were used in conjunction with receptor models for source apportionment of remote aerosols in Brazil. PIXE, used for determining the concentrations of elements with Z ≥ 11, has a detection limit of about 1 ng/m3. The concentrations of carbon, nitrogen and oxygen in the fine fraction of Amazon Basin aerosols were measured by PESA. We sampled in Jureia (SP), Fernando de Noronha, Arembepe (BA), Firminopolis (GO), Itaberai (GO) and the Amazon Basin. For collecting the airborne particles we used cascade impactors, stacked filter units, and streaker samplers. Three receptor models were used: chemical mass balance, stepwise multiple regression analysis and principal factor analysis. The elemental and gravimetric concentrations were explained by the models within the experimental errors. Three sources of aerosol were quantitatively distinguished: marine aerosol, soil dust and aerosols related to forests. The emission of aerosols by vegetation is very clear for all the sampling sites. In the Amazon Basin and Jureia it is the major source, responsible for 60 to 80% of airborne concentrations. (Author)

  10. Empirically derived neighbourhood rules for urban land-use modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2012-01-01

    Land-use modelling and spatial scenarios have gained attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision making. Many of the recent modelling efforts incorporate cellular automata to accomplish spatially explicit land-use-change modelling. Spatial...

  11. Flat directions in left-right symmetric string derived models

    International Nuclear Information System (INIS)

    Cleaver, Gerald B.; Clements, David J.; Faraggi, Alon E.

    2002-01-01

    The only string models known to reproduce the minimal supersymmetric standard model in the low energy effective field theory are those constructed in the free fermionic formulation. We demonstrate the existence of quasirealistic free fermionic heterotic string models in which supersymmetric singlet flat directions do not exist. This raises the possibility that supersymmetry is broken perturbatively in such models by the one-loop Fayet-Iliopoulos term. We show, however, that supersymmetric flat directions that utilize vacuum expectation values of some non-Abelian fields in the massless string spectrum do exist in the model. We argue that hidden sector condensates lift the flat directions and break supersymmetry hierarchically

  12. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    …/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though… might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  13. Receptor Model Source Apportionment of Nonmethane Hydrocarbons in Mexico City

    Directory of Open Access Journals (Sweden)

    V. Mugica

    2002-01-01

    With the purpose of estimating the source contributions of nonmethane hydrocarbons (NMHC) to the atmosphere at three different sites in the Mexico City Metropolitan Area, 92 ambient air samples were measured from February 23 to March 22 of 1997. Light- and heavy-duty vehicular profiles were determined to differentiate the NMHC contribution of diesel and gasoline to the atmosphere. Food cooking source profiles were also determined for chemical mass balance receptor model application. Initial source contribution estimates were carried out to determine the adequate combination of source profiles and fitting species. Ambient samples of NMHC were apportioned to motor vehicle exhaust, gasoline vapor, handling and distribution of liquefied petroleum gas (LP gas), asphalt operations, painting operations, landfills, and food cooking. Both gasoline and diesel motor vehicle exhaust were the major NMHC contributors for all sites and times, with a percentage of up to 75%. The average motor vehicle exhaust contributions increased during the day. In contrast, the LP gas contribution was higher during the morning than in the afternoon. Apportionment for the most abundant individual NMHC showed that the vehicular source is the major contributor to acetylene, ethylene, pentanes, n-hexane, toluene, and xylenes, while handling and distribution of LP gas was the major source contributor to propane and butanes. Comparison between CMB estimates of NMHC and the emission inventory showed a good agreement for vehicles, handling and distribution of LP gas, and painting operations; nevertheless, emissions from diesel exhaust and asphalt operations showed differences, and the results suggest that these emissions could be underestimated.
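    The chemical mass balance receptor model referred to above solves, in essence, a non-negative least-squares problem: ambient species concentrations are expressed as a linear combination of source profiles. A toy sketch with invented profiles and fitting species; the real application uses measured profiles and many more species:

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical source profiles: columns are sources, rows are fitting
        # species (mass fractions of total NMHC); illustrative values only.
        profiles = np.array([
            #  vehicle  LP gas  paint
            [0.12,     0.01,   0.00],   # acetylene
            [0.18,     0.02,   0.00],   # ethylene
            [0.05,     0.55,   0.00],   # propane
            [0.08,     0.35,   0.02],   # butanes
            [0.20,     0.02,   0.45],   # toluene
        ])
        ambient = np.array([0.95, 1.40, 1.80, 1.30, 1.95])  # hypothetical ppbC

        # Chemical mass balance: non-negative least squares for contributions.
        contrib, residual = nnls(profiles, ambient)
        print("source contributions:", contrib, "residual:", residual)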

  14. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and ele…

  15. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
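    The iterative framework described above can be sketched generically: estimate scatter from the current primary-image guess with a physics model, subtract it from the measurement, and repeat. The block below substitutes a scaled Gaussian blur for the paper's analytically derived scatter model, so it illustrates only the fixed-point iteration, not the actual physics:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Toy stand-in for the physics model: scatter approximated as a broad,
        # scaled blur of the primary signal (the paper derives this analytically).
        def scatter_estimate(primary, fraction=0.3, width=25):
            return fraction * gaussian_filter(primary, width)

        rng = np.random.default_rng(2)
        primary_true = rng.uniform(0.5, 1.0, size=(128, 128))      # "scatter-free"
        measured = primary_true + scatter_estimate(primary_true)   # contaminated

        # Iterative framework: re-estimate scatter from the current primary guess.
        primary = measured.copy()
        for _ in range(5):
            primary = measured - scatter_estimate(primary)

        print("max residual error:", np.max(np.abs(primary - primary_true)))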

  16. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japan coastal areas are still environmentally friendly, though there are multiple air emission sources originating as a consequence of several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m³. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. Average yearly emission of mercury from an elevated point source in this area, with background concentration and one-year meteorological data, was used to predict the ground level concentration of mercury. To estimate the concentration of mercury and its compounds in air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated trajectory Model (HYSPLIT) that estimates the atmospheric

  17. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use a Eulerian chemical transport model, CMAQ, and a Lagrangian Particle Dispersion Model, FLEXPART-WRF. These two models share the same WRF
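    A bias-enhanced Bayesian inversion of the kind mentioned above can be illustrated with a toy linear observation model and a random-walk Metropolis sampler; the observation operator, noise level, and priors below are all invented for the sketch:

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy linear observation model: CO2 measurements y = H*s + bias + noise,
        # where s is an unknown source strength (all values hypothetical).
        H = rng.uniform(0.2, 1.0, size=50)
        s_true, bias_true = 4.0, 0.5
        y = H * s_true + bias_true + rng.normal(0, 0.1, size=50)

        def log_post(theta):
            s, b = theta
            if s < 0:
                return -np.inf                  # non-negative source prior
            resid = y - (H * s + b)
            # Gaussian likelihood plus a N(0, 1) prior on the model bias term
            return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * b**2

        # Random-walk Metropolis sampling of (source strength, bias).
        theta = np.array([1.0, 0.0])
        lp = log_post(theta)
        chain = []
        for _ in range(20_000):
            prop = theta + rng.normal(0, 0.05, size=2)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
        chain = np.array(chain[5_000:])          # discard burn-in
        print("posterior mean (source, bias):", chain.mean(axis=0))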

  18. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations

    Directory of Open Access Journals (Sweden)

    Hardstaff Joanne L

    2012-06-01

    Background: The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. Results: The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6–8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. Conclusions: External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. In such circumstances, it is particularly important that control strategies to

  19. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations.

    Science.gov (United States)

    Hardstaff, Joanne L; Bulling, Mark T; Marion, Glenn; Hutchings, Michael R; White, Piran C L

    2012-06-27

    The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type, infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6-8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. In such circumstances, it is particularly important that control strategies to reduce bTB in badgers include efforts to minimise such
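    The threshold behaviour described in these two records can be reproduced qualitatively with a few lines of stochastic simulation. The sketch below is a deliberately crude SIS-type group model with a constant external "trickle" term; its rates are invented and it is not the paper's spatial stochastic model:

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate(group_size, beta=0.6, trickle=0.002, recover=0.2, steps=600):
            """Crude monthly SIS update for one badger group (toy rates)."""
            infected = 1
            for _ in range(steps):
                susceptible = group_size - infected
                # per-capita infection hazard: in-group term + external trickle
                p_inf = 1.0 - np.exp(-(beta * infected / group_size + trickle))
                infected += rng.binomial(susceptible, p_inf)
                infected -= rng.binomial(infected, recover)
            return infected > 0

        for n in (4, 6, 8, 10):
            persistence = np.mean([simulate(n) for _ in range(300)])
            print(f"group size {n}: persistence probability {persistence:.2f}")

    Smaller groups go stochastically extinct more often, so the external trickle matters most just below the persistence threshold, which is the qualitative effect the records report.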

  20. Discrete-Time Domain Modelling of Voltage Source Inverters in Standalone Applications

    DEFF Research Database (Denmark)

    Federico, de Bosio; de Sousa Ribeiro, Luiz Antonio; Freijedo Fernandez, Francisco Daniel

    2017-01-01

    The decoupling of the capacitor voltage and inductor current has been shown to improve significantly the dynamic performance of voltage source inverters in standalone applications. However, the computation and PWM delays still limit the achievable bandwidth. In this paper a discrete-time domain modelling of the LC plant with consideration of delay and sample-and-hold effects on the state feedback cross-coupling decoupling is derived. From this plant formulation, current controllers with wide bandwidth and good relative stability properties are developed. Two controllers based on lead compensation…
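    A sketch of the discrete-time plant construction this record refers to: discretize the LC filter by zero-order hold and augment the state with the one-sample computation/PWM delay. The filter values and sampling rate below are assumptions for illustration:

        import numpy as np
        from scipy.signal import cont2discrete

        # LC filter state-space (x = [iL, vC]), hypothetical parameters.
        L, C = 2.3e-3, 15e-6
        A = np.array([[0.0, -1.0 / L],
                      [1.0 / C, 0.0]])
        B = np.array([[1.0 / L],
                      [0.0]])
        Cout = np.eye(2)
        D = np.zeros((2, 1))

        Ts = 1.0 / 10e3                       # sampling period (10 kHz)
        Ad, Bd, _, _, _ = cont2discrete((A, B, Cout, D), Ts, method='zoh')

        # Augment with the one-sample computation/PWM delay: the input applied
        # at step k was computed at step k-1, so it enters the state vector.
        Aaug = np.block([[Ad, Bd],
                         [np.zeros((1, 2)), np.zeros((1, 1))]])
        Baug = np.vstack([np.zeros((2, 1)), np.ones((1, 1))])
        print("augmented discrete plant:\n", Aaug)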

  1. Modeling of low pressure plasma sources for microelectronics fabrication

    International Nuclear Information System (INIS)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Rauf, Shahid; Likhanskii, Alexandre

    2017-01-01

    Chemically reactive plasmas operating in the 1 mTorr–10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E × B drift. (paper)

  2. Modeling of low pressure plasma sources for microelectronics fabrication

    Science.gov (United States)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Likhanskii, Alexandre; Rauf, Shahid

    2017-10-01

    Chemically reactive plasmas operating in the 1 mTorr-10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E × B drift.

  3. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs
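    For a sense of the sensitivity analysis described above, a generic Gaussian plume formula (not ARAC's dispersion model) can be swept over assumed source terms; all parameter values below are illustrative:

        import numpy as np

        # Ground-level centreline concentration from a Gaussian plume (toy use:
        # sensitivity of downwind concentration to source term assumptions).
        def plume(Q, H, x, u=5.0, a=0.08, b=0.06):
            sy, sz = a * x, b * x                 # crude rural dispersion widths
            return Q / (np.pi * u * sy * sz) * np.exp(-H**2 / (2 * sz**2))

        x = 1000.0                                 # receptor distance (m)
        for Q in (0.5, 1.0, 2.0):                  # aerosolized release rate (kg/s)
            for H in (10.0, 50.0, 100.0):          # effective cloud height (m)
                print(f"Q={Q:4.1f} kg/s  H={H:5.1f} m  C={plume(Q, H, x):.2e} kg/m3")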

  4. Particle model of a cylindrical inductively coupled ion source

    Science.gov (United States)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF coupling of the plasma is still lacking, so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is currently about seven times larger than the average Debye length because of the large computational demand of the code; it will be scaled down in the next phase of the development of the code. The filling gas is xenon, in order to minimize the time spent in the MCC collision module during this first stage of development. The results presented here are preliminary, with the code already showing good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.

  5. A theoretical model of a liquid metal ion source

    International Nuclear Information System (INIS)

    Kingham, D.R.; Swanson, L.W.

    1984-01-01

    A model of liquid metal ion source (LMIS) operation has been developed which gives a consistent picture of three different aspects of LMI sources: (i) the shape and size of the ion emitting region; (ii) the mechanism of ion formation; (iii) properties of the ion beam such as angular intensity and energy spread. It was found that the emitting region takes the shape of a jet-like protrusion on the end of a Taylor cone, with ion emission from an area only a few tens of Å across, in agreement with recent TEM pictures by Sudraud. This is consistent with ion formation predominantly by field evaporation. Calculated angular intensities and current-voltage characteristics based on our fluid dynamic jet-like protrusion model agree well with experiment. The formation of doubly charged ions is attributed to post-ionization of field evaporated singly charged ions, and an apex field strength of about 2.0 V Å⁻¹ was calculated for a Ga source. The ion energy spread is mainly due to space charge effects; it is known to be reduced for doubly charged ions, in agreement with this post-ionization mechanism. (author)

  6. Extended gamma sources modelling using multipole expansion: Application to the Tunisian gamma source load planning

    International Nuclear Information System (INIS)

    Loussaief, Abdelkader

    2007-01-01

    In this work we extend the use of multipole moment expansion to the case of inner radiation fields. A series expansion of the photon flux was established. The main advantage of this approach is that it offers the opportunity to treat both inner and external radiation field cases. We determined the expression of the inner multipole moments both in spherical harmonics and in Cartesian coordinates. As an application we applied the analytical model to a radiation facility used for small target irradiation. Theoretical, experimental and simulation studies were performed, in air and in a product, and good agreement was reached. A conventional dose distribution study for a gamma irradiation facility involves the use of isodose maps. The establishment of these maps requires the measurement of the absorbed dose at many points, which makes the task expensive experimentally and very long by simulation; moreover, a lack of measurement points can distort the dose distribution cartography. To overcome these problems, we present in this paper a mathematical method to describe the dose distribution in air. This method is based on the multipole expansion in spherical harmonics of the photon flux emitted by the gamma source. The determination of the multipole coefficients of this development allows the modeling of the radiation field around the gamma source. (Author)
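    The essence of the method, fitting low-order multipole moments of the photon flux to a modest number of dose measurements, can be sketched as a linear least-squares fit. The basis below keeps only monopole and dipole terms and uses synthetic data, so it is a schematic of the approach rather than the author's full spherical-harmonics development:

        import numpy as np

        rng = np.random.default_rng(5)

        # Random measurement points around the source: unit directions and radii.
        n = 40
        u = rng.normal(size=(n, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)      # direction cosines
        r = rng.uniform(1.0, 3.0, size=n)                  # distance (m)
        x, y, z = u.T

        # Basis: monopole (1/r) plus three dipole terms (cosine/r^2).
        basis = np.column_stack([1 / r, x / r**2, y / r**2, z / r**2])
        true_moments = np.array([10.0, 1.0, -0.5, 0.2])    # invented moments
        dose = basis @ true_moments + rng.normal(0, 0.01, size=n)

        moments, *_ = np.linalg.lstsq(basis, dose, rcond=None)
        print("recovered multipole moments:", np.round(moments, 2))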

  7. One loop beta functions and fixed points in higher derivative sigma models

    International Nuclear Information System (INIS)

    Percacci, Roberto; Zanusso, Omar

    2010-01-01

    We calculate the one loop beta functions of nonlinear sigma models in four dimensions containing general two- and four-derivative terms. In the O(N) model there are four such terms and nontrivial fixed points exist for all N≥4. In the chiral SU(N) models there are in general six couplings, but only five for N=3 and four for N=2; we find fixed points only for N=2, 3. In the approximation considered, the four-derivative couplings are asymptotically free but the coupling in the two-derivative term has a nonzero limit. These results support the hypothesis that certain sigma models may be asymptotically safe.

  8. Comparison of the landslide susceptibility models in Taipei Water Source Domain, Taiwan

    Science.gov (United States)

    WU, C. Y.; Yeh, Y. C.; Chou, T. H.

    2017-12-01

    Taipei Water Source Domain, located to the southeast of the Taipei metropolis, is the main source of water in this region. Recently, downstream turbidity has often soared during typhoon periods because of upstream landslides. Landslide susceptibilities should therefore be analysed to assess the influence zones of different rainfall events and to ensure that the domain can continue to supply sufficient, high-quality water. Generally, landslide susceptibility models can be established based on either a long-term landslide inventory or a specified landslide event. Sometimes there is no long-term landslide inventory for an area, so event-based landslide susceptibility models are widely established. However, inventory-based and event-based landslide susceptibility models may result in dissimilar susceptibility maps for the same area. The purposes of this study were therefore to compare the landslide susceptibility maps derived from inventory-based and event-based models, and to interpret how to select a representative event to be included in the susceptibility model. The landslide inventory from Typhoon Tim in July 1994 and Typhoon Soudelor in August 2015 was collected and used to establish the inventory-based landslide susceptibility model. The landslides caused by Typhoon Nari and rainfall data were used to establish the event-based model. The results indicated that the high-susceptibility slope units were located in the middle and upstream Nan-Shih Stream basin.

  9. Assessing the impact of different sources of topographic data on 1-D hydraulic modelling of floods

    Science.gov (United States)

    Ali, A. Md; Solomatine, D. P.; Di Baldassarre, G.

    2015-01-01

    Topographic data, such as digital elevation models (DEMs), are essential input in flood inundation modelling. DEMs can be derived from several sources either through remote sensing techniques (spaceborne or airborne imagery) or from traditional methods (ground survey). The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), the Shuttle Radar Topography Mission (SRTM), the light detection and ranging (lidar), and topographic contour maps are some of the most commonly used sources of data for DEMs. These DEMs are characterized by different precision and accuracy. On the one hand, the spatial resolution of low-cost DEMs from satellite imagery, such as ASTER and SRTM, is rather coarse (around 30 to 90 m). On the other hand, the lidar technique is able to produce high-resolution DEMs (at around 1 m), but at a much higher cost. Lastly, contour mapping based on ground survey is time consuming, particularly for higher scales, and may not be possible for some remote areas. The use of these different sources of DEM obviously affects the results of flood inundation models. This paper shows and compares a number of 1-D hydraulic models developed using HEC-RAS as model code and the aforementioned sources of DEM as geometric input. To test model selection, the outcomes of the 1-D models were also compared, in terms of flood water levels, to the results of 2-D models (LISFLOOD-FP). The study was carried out on a reach of the Johor River, in Malaysia. The effect of the different sources of DEMs (and different resolutions) was investigated by considering the performance of the hydraulic models in simulating flood water levels as well as inundation maps. The outcomes of our study show that the use of different DEMs has serious implications to the results of hydraulic models. The outcomes also indicate that the loss of model accuracy due to re-sampling the highest resolution DEM (i.e. lidar 1 m) to lower resolution is much less than the loss of model accuracy due

  10. SOURCE 2.0 model development: UO2 thermal properties

    International Nuclear Information System (INIS)

    Reid, P.J.; Richards, M.J.; Iglesias, F.C.; Brito, A.C.

    1997-01-01

    During analyses of postulated CANDU accidents, the reactor fuel is estimated to experience large temperature variations and to be exposed to a variety of environments, from highly oxidizing to mildly reducing. The exposure of CANDU fuel to these environments and temperatures may affect fission product releases from the fuel and cause degradation of the fuel thermal properties. SOURCE 2.0 is a safety analysis code that will model the mechanisms required to calculate fission product release for a variety of accident scenarios, including large break loss of coolant accidents (LOCAs) with or without emergency core cooling. The goal of the model development is to generate models which are consistent with each other and phenomenologically based, insofar as that is possible given the state of theoretical understanding.

  11. RF Plasma modeling of the Linac4 H− ion source

    CERN Document Server

    Mattei, S; Hatayama, A; Lettry, J; Kawamura, Y; Yasumoto, M; Schmitzer, C

    2013-01-01

    This study focuses on the modelling of the ICP RF plasma in the Linac4 H− ion source currently being constructed at CERN. A self-consistent model of the plasma dynamics with the RF electromagnetic field has been developed using a PIC-MCC method. In this paper, the model is applied to the analysis of a low density plasma discharge initiation, with particular interest in the effect of the external magnetic field on plasma properties such as wall loss, electron density and electron energy. The use of a multi-cusp magnetic field effectively limits the wall losses, particularly in the radial direction. Preliminary results, however, indicate that such a configuration reduces the heating efficiency. The effect is possibly due to trapping of electrons in the multi-cusp magnetic field, preventing their continuous acceleration in the azimuthal direction.

  12. How to Model Super-Soft X-ray Sources?

    Science.gov (United States)

    Rauch, Thomas

    2012-07-01

    During outbursts, the surface temperatures of white dwarfs in cataclysmic variables far exceed half a million Kelvin. In this phase, they may become the brightest super-soft sources (SSS) in the sky. Time-series of high-resolution, high S/N X-ray spectra taken during the rise, maximum, and decline of their X-ray luminosity provide insights into the processes following such outbursts as well as into the surface composition of the white dwarf. Their analysis requires adequate NLTE model atmospheres. The Tuebingen Non-LTE Model-Atmosphere Package (TMAP) is a powerful tool for their calculation. We present the application of TMAP models to SSS spectra and discuss their validity.

  13. Rigorous theoretical derivation of lumped models to transmission line systems

    International Nuclear Information System (INIS)

    Zhao Jixiang

    2012-01-01

    By virtue of the negative electric parameter concept, i.e. negative lumped resistance, inductance, conductance and capacitance (N-RLGC), the lumped equivalent models of transmission line systems, including the circuit model, two-port π-network and T-network, are given. We start from N-segment ladder-like equivalent networks composed of distributed parameters and obtain the input impedance in the form of a continued fraction. Utilizing continued fraction theory, expressions for the input impedance are obtained for three extreme cases, i.e. load impedances equal to zero, infinity and the characteristic impedance, respectively. When the number of segments N tends to infinity, they are transformed into lumped elements. By comparing the distributed and lumped models of transmission lines, the expression for tanh γd, which is the key term in the transmission line equations, is obtained in terms of RLGC; furthermore, according to the input admittance, admittance matrix and ABCD matrix of transmission lines, the lumped equivalent circuit models, π-networks and T-networks are given. The models are verified in the frequency and time domains, respectively, showing that they are accurate and efficient. (semiconductor integrated circuits)
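    The continued-fraction limit discussed in this record is easy to verify numerically: the input impedance of an N-segment RLGC ladder converges to the closed-form transmission-line expression involving tanh γd as N grows. The per-unit-length values below are arbitrary illustrative choices:

        import numpy as np

        # Input impedance of an N-segment RLGC ladder versus the closed-form
        # transmission-line result; illustrative per-unit-length values.
        R, Lp, G, Cp = 0.1, 250e-9, 1e-6, 100e-12   # per metre
        d, f = 1.0, 50e6                             # line length (m), frequency (Hz)
        w = 2 * np.pi * f
        Zs, Yp = R + 1j * w * Lp, G + 1j * w * Cp    # series Z, shunt Y per metre
        Z0 = np.sqrt(Zs / Yp)                        # characteristic impedance
        gamma = np.sqrt(Zs * Yp)                     # propagation constant
        ZL = 75.0                                    # load impedance (ohm)

        def ladder_zin(N):
            dz, dy = Zs * d / N, Yp * d / N
            Zin = ZL
            for _ in range(N):                       # continued-fraction recursion
                Zin = dz + 1 / (dy + 1 / Zin)
            return Zin

        exact = Z0 * (ZL + Z0 * np.tanh(gamma * d)) / (Z0 + ZL * np.tanh(gamma * d))
        for N in (4, 16, 64, 256):
            print(N, abs(ladder_zin(N) - exact))     # error shrinks as N grows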

  14. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.

  15. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    Purpose: To introduce and validate a kilovoltage (kV) x-ray source model and characterization method to compute absorbed dose accrued from kV x-rays. Methods: The authors propose a simplified virtual point source model and characterization method for a kV x-ray source. The source is modeled by: (1) characterizing the spatial spectral and fluence distributions of the photons at a plane at the isocenter, and (2) creating a virtual point source from which photons are generated to yield the derived spatial spectral and fluence distribution at the isocenter of an imaging system. The spatial photon distribution is determined by in-air relative dose measurements along the transverse (x) and radial (y) directions. The spectrum is characterized using transverse axis half-value layer measurements and the nominal peak potential (kVp). This source modeling approach is used to characterize a Varian® on-board imager (OBI®) for four default cone-beam CT beam qualities: beams using a half bowtie filter (HBT) with 110 and 125 kVp, and a full bowtie filter (FBT) with 100 and 125 kVp. The source model and characterization method were validated by comparing dose computed by the authors' in-house software (kVDoseCalc) to relative dose measurements in a homogeneous and a heterogeneous block phantom comprised of tissue, bone, and lung-equivalent materials. Results: The characterized beam qualities and spatial photon distributions are comparable to reported values in the literature. Agreement between computed and measured percent depth-dose curves is ≤2% in the homogeneous block phantom and ≤2.5% in the heterogeneous block phantom. Transverse axis profiles taken at depths of 2 and 6 cm in the homogeneous block phantom show an agreement within 4%. All transverse axis dose profiles in water, in bone, and in lung-equivalent materials for beams using a HBT have an agreement within 5%. Measured profiles of FBT beams in bone and lung-equivalent materials were higher than their

  16. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    observed properties of variance swap dynamics and allows for jumps in volatility and returns. An affine specification using L´evy processes as building blocks leads to analytically tractable pricing formulas for options on variance swaps as well as efficient numerical methods for pricing of European......We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically...... options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options...

  17. microRNA expression profile in human coronary smooth muscle cell-derived microparticles is a source of biomarkers.

    Science.gov (United States)

    de Gonzalo-Calvo, David; Cenarro, Ana; Civeira, Fernando; Llorente-Cortes, Vicenta

    2016-01-01

    microRNA (miRNA) expression profiles of extracellular vesicles are a potential tool for clinical practice. Despite the key role of vascular smooth muscle cells (VSMC) in cardiovascular pathology, there is limited information about the presence of miRNAs in microparticles secreted by this cell type, including human coronary artery smooth muscle cells (HCASMC). Here, we tested whether HCASMC-derived microparticles contain miRNAs and the value of these miRNAs as biomarkers. HCASMC and explants from atherosclerotic or non-atherosclerotic areas were obtained from coronary arteries of patients undergoing heart transplant. Plasma samples were collected from normocholesterolemic controls (N=12) and familial hypercholesterolemia (FH) patients (N=12); both groups were strictly matched for age, sex and cardiovascular risk factors. Microparticle (0.1–1 µm) isolation and characterization were performed using standard techniques. Expression of VSMC-enriched miRNAs (miR-21-5p, -143-3p, -145-5p, -221-3p and -222-3p) was analyzed using RT-qPCR. Total RNA isolated from HCASMC-derived microparticles contained small RNAs, including VSMC-enriched miRNAs. Exposure of HCASMC to pathophysiological conditions, such as hypercholesterolemia, induced a decrease in the expression levels of miR-143-3p and miR-222-3p in microparticles, but not in cells. Expression levels of miR-222-3p were lower in circulating microparticles from FH patients compared to normocholesterolemic controls. Microparticles derived from atherosclerotic plaque areas showed decreased levels of miR-143-3p and miR-222-3p compared to non-atherosclerotic areas. We demonstrated for the first time that microparticles secreted by HCASMC contain microRNAs. Hypercholesterolemia alters the microRNA profile of HCASMC-derived microparticles. The miRNA signature of HCASMC-derived microparticles is a source of cardiovascular biomarkers.

  18. Modeling Degradation Product Partitioning in Chlorinated-DNAPL Source Zones

    Science.gov (United States)

    Boroumand, A.; Ramsburg, A.; Christ, J.; Abriola, L.

    2009-12-01

    Metabolic reductive dechlorination lowers aqueous-phase contaminant concentrations, increasing the driving force for DNAPL dissolution. Results from laboratory and field investigations suggest that accumulation of cis-dichloroethene (cis-DCE) and vinyl chloride (VC) may occur within DNAPL source zones. The lack of (or slow) degradation of cis-DCE and VC within bioactive DNAPL source zones may result in these dechlorination products becoming distributed among the solid, aqueous, and organic phases. Partitioning of cis-DCE and VC into the organic phase may reduce aqueous phase concentrations of these contaminants and result in the enrichment of these dechlorination products within the non-aqueous phase. Enrichment of degradation products within DNAPL may reduce some of the advantages associated with the application of bioremediation in DNAPL source zones. Thus, it is important to quantify how partitioning (between the aqueous and organic phases) influences the transport of cis-DCE and VC within bioactive DNAPL source zones. In this work, abiotic two-phase (PCE-water) one-dimensional column experiments are modeled using analytical and numerical methods to examine the rate of partitioning and the capacity of PCE-DNAPL to reversibly sequester cis-DCE. These models consider aqueous-phase, nonaqueous-phase, and combined aqueous plus nonaqueous phase mass transfer resistance using linear driving force and spherical diffusion expressions. Model parameters are examined and compared for different experimental conditions to evaluate the mechanisms controlling partitioning. The Biot number, a dimensionless index of the ratio of the aqueous-phase mass transfer rate in the boundary layer to the mass transfer rate within the NAPL, is used to characterize conditions in which either or both processes are controlling. Results show that a single aqueous resistance is able to capture breakthrough curves when DNAPL is distributed in porous media as low
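    A linear-driving-force (LDF) partitioning term of the kind this record describes couples naturally to a 1-D advection solver. The sketch below uses an explicit upwind scheme with invented parameters; it omits porosity and NAPL saturation bookkeeping, so it is schematic only:

        import numpy as np

        # Explicit upwind advection plus linear-driving-force (LDF) exchange of a
        # dissolved species with a NAPL phase; all parameters are illustrative.
        nx, nt = 200, 40_000
        dx, dt = 0.005, 0.5                 # m, s
        v = 1e-4                            # pore-water velocity (m/s)
        k_ldf = 1e-4                        # LDF mass-transfer coefficient (1/s)
        K = 50.0                            # NAPL/water partition coefficient

        c = np.zeros(nx)                    # aqueous concentration
        q = np.zeros(nx)                    # NAPL-phase concentration
        for _ in range(nt):
            transfer = k_ldf * (K * c - q)  # driving force toward equilibrium
            q += dt * transfer
            c[1:] -= dt * v * np.diff(c) / dx   # upwind advection (v > 0)
            c -= dt * transfer
            c[0] = 1.0                      # constant-concentration inlet
        print("outlet relative concentration:", round(c[-1], 3))

    Partitioning retards the aqueous front, so the breakthrough curve depends on the competition between the travel time and the 1/k_ldf equilibration time, which is the Biot-number style comparison the record describes.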

  19. Novel sources of Flavor Changed Neutral Currents in the 331RHN model

    International Nuclear Information System (INIS)

    Cogollo, D.; Vital de Andrade, A.; Queiroz, F.S.; Teles, P.R.

    2012-01-01

    Sources of Flavor Changed Neutral Currents (FCNC) emerge naturally from a well motivated framework called the 3-3-1 with right-handed neutrinos model, 331RHN for short, mediated by an extra neutral gauge boson Z′. Following previous work we calculate these sources and in addition derive new ones coming from CP-even and CP-odd neutral scalars, which appear due to their non-diagonal interactions with the physical standard quarks. Furthermore, by using four texture zeros for the quark mass matrices, we derive the mass difference terms for the neutral meson systems K⁰–anti-K⁰, D⁰–anti-D⁰ and B⁰–anti-B⁰ and show that, though the Z′ contribution is the most relevant one for meson oscillation purposes, scalar contributions also play a role in these processes, and hence it is worthwhile to investigate them and derive new bounds on the parameter space. In particular, studying the B⁰–anti-B⁰ system we set the bounds M_Z′ ≳ 4.2 TeV and M_S₂, M_I₃ ≳ 7.5 TeV in order to be consistent with current measurements. (orig.)

  20. A hidden markov model derived structural alphabet for proteins.

    Science.gov (United States)

    Camproux, A C; Gautier, R; Tufféry, P

    2004-06-04

    Understanding and predicting protein structures depends on the complexity and the accuracy of the models used to represent them. We have set up a hidden Markov model that discretizes protein backbone conformation as series of overlapping fragments (states) of four residues in length. This approach learns simultaneously the geometry of the states and their connections. We obtain, using a statistical criterion, an optimal systematic decomposition of the conformational variability of the protein peptidic chain into 27 states with strong connection logic. This result is stable over different protein sets. Our model fits well the previous knowledge related to protein architecture organisation and seems able to capture some subtle details of protein organisation, such as helix sub-level organisation schemes. Taking into account the dependence between the states results in a description of local protein structure of low complexity. On average, the model makes use of only 8.3 states among 27 to describe each position of a protein structure. Although we use short fragments, the learning process on entire protein conformations captures the logic of the assembly on a larger scale. Using such a model, the structure of proteins can be reconstructed with an average accuracy close to 1.1 Å root-mean-square deviation and for a complexity of only 3. Finally, we also observe that sequence specificity increases with the number of states of the structural alphabet. Such models can constitute a very relevant approach to the analysis of protein architecture, in particular for protein structure prediction.
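    Decoding a chain into such a structural alphabet amounts to standard HMM inference; a compact Viterbi pass is sketched below for a toy 3-state, 4-symbol model rather than the paper's 27-state alphabet learned from backbone geometry:

        import numpy as np

        # Viterbi decoding of the most likely state (structural-letter) path for
        # an observation sequence, given HMM parameters; toy random model.
        rng = np.random.default_rng(6)
        n_states, T = 3, 12
        logA = np.log(rng.dirichlet(np.ones(n_states), size=n_states))  # transitions
        logpi = np.log(np.full(n_states, 1.0 / n_states))               # initial
        logB = np.log(rng.dirichlet(np.ones(4), size=n_states))         # 4 symbols
        obs = rng.integers(0, 4, size=T)

        delta = logpi + logB[:, obs[0]]
        psi = np.zeros((T, n_states), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + logA               # predecessor scores
            psi[t] = np.argmax(scores, axis=0)           # best predecessor per state
            delta = scores[psi[t], np.arange(n_states)] + logB[:, obs[t]]

        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):                    # backtrack
            path.append(int(psi[t, path[-1]]))
        print("decoded state path:", path[::-1])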

  1. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first-order terms. This magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as a testing source, generates magnetic fields in the measuring plane, which serve as inputs of the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are selected as the algorithms for the inverse computation based on the current multipole model, and the imaging effects of these three inverse methods are compared. Besides, two reconstruction parameters, residual and mean residual, are also discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared. (general)
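    Of the three inverse methods named above, MNLS is the simplest to sketch: an underdetermined linear forward model inverted with a truncated-SVD-regularized pseudoinverse. The dimensions and lead-field below are random stand-ins, not a realistic magnetocardiographic geometry:

        import numpy as np

        rng = np.random.default_rng(7)

        # Minimum-norm least-squares (MNLS) inverse for an underdetermined
        # magnetic forward problem B = L m; toy dimensions and values.
        L = rng.normal(size=(64, 300))       # lead-field: 64 sensors, 300 sources
        m_true = np.zeros(300)
        m_true[140:150] = 1.0                # a small active patch
        B = L @ m_true + rng.normal(0, 0.01, size=64)

        # Truncated SVD regularizes the pseudoinverse against measurement noise.
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        k = int(np.sum(s > 0.05 * s[0]))     # keep well-conditioned components
        m_hat = Vt[:k].T @ ((U[:, :k].T @ B) / s[:k])
        print("kept components:", k, " peak estimate index:", int(np.argmax(m_hat)))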

  2. Modeling spot markets for electricity and pricing electricity derivatives

    Science.gov (United States)

    Ning, Yumei

    Spot prices for electricity have been very volatile, with dramatic price spikes occurring in restructured markets. The task of forecasting electricity prices and managing price risk presents a new challenge for market players. The objectives of this dissertation are: (1) to develop a stochastic model of price behavior and predict price spikes; (2) to examine the effect of weather forecasts on forecasted prices; (3) to price electricity options and value generation capacity. The volatile behavior of prices can be represented by a stochastic regime-switching model. In the model, the means of the high-price and low-price regimes and the probabilities of switching from one regime to the other are specified as functions of daily peak load. The probability of switching to the high-price regime is positively related to load, but is still not high enough at the highest loads to predict price spikes accurately. An application of this model shows how the structure of the Pennsylvania-New Jersey-Maryland market changed when market-based offers were allowed, resulting in higher price spikes. An ARIMA model including temperature, seasonal, and weekly effects is estimated to forecast daily peak load. Forecasts of load under different assumptions about weather patterns are used to predict changes in price behavior given the regime-switching model of prices. Results show that the range of temperature forecasts from a normal summer to an extremely warm summer causes relatively small increases in temperature (+1.5%) and load (+3.0%). In contrast, the increases in prices are large (+20%). The conclusion is that the seasonal outlook forecasts provided by NOAA are potentially valuable for predicting prices in electricity markets. The traditional option models, based on Geometric Brownian Motion, are not appropriate for electricity prices. An option model using the regime-switching framework is developed to value a European call option. The model includes volatility risk and allows changes
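    The load-dependent regime-switching idea is easy to prototype: let the probability of entering the high-price regime be a logistic function of daily peak load and draw prices from regime-specific distributions. Every coefficient below is invented for the illustration:

        import numpy as np

        rng = np.random.default_rng(8)

        # Two-regime price model: the switching probability into the high-price
        # regime rises with daily peak load (all coefficients hypothetical).
        def p_high(load):
            return 1.0 / (1.0 + np.exp(-(load - 0.8) * 10))   # logistic in load

        days = 365
        load = (0.6 + 0.2 * np.sin(2 * np.pi * np.arange(days) / 365)
                + rng.normal(0, 0.05, days))                  # normalized peak load
        prices = np.empty(days)
        for t in range(days):
            regime = 1 if rng.uniform() < p_high(load[t]) else 0
            mu = 30 + 40 * load[t] if regime == 0 else 150 + 200 * load[t]
            prices[t] = rng.normal(mu, 5 if regime == 0 else 40)
        print("mean price:", round(prices.mean(), 1),
              " spike days:", int((prices > 150).sum()))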

  3. A model for managing sources of groundwater pollution

    Science.gov (United States)

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. Linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
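    For a fixed time horizon, the management model described above reduces to a linear program: maximize total disposal subject to concentration limits expressed through the response matrix. A toy instance, with a random response matrix standing in for the simulated one:

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(9)

        # Maximize total waste disposal subject to water-quality limits at
        # observation points, via a concentration response matrix (toy values).
        n_sites, n_obs = 4, 6
        R = rng.uniform(0.01, 0.2, size=(n_obs, n_sites))  # conc. per unit disposal
        c_limit = np.full(n_obs, 1.0)                      # quality standard (mg/L)

        # linprog minimizes, so negate the objective to maximize total disposal.
        res = linprog(c=-np.ones(n_sites), A_ub=R, b_ub=c_limit,
                      bounds=[(0, None)] * n_sites)
        print("optimal disposal rates:", np.round(res.x, 2))
        print("total capacity:", round(-res.fun, 2))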

  4. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, and uses another secondary cooling loop to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this purpose. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate the transients of the cooling systems with system control variables changing in real time.

  5. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions, using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate transients of the cooling systems with system control variables changing in real time.
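
    A minimal lumped-parameter sketch, in Python as used by the plant model, of a primary cooling loop coupled to a secondary loop that rejects heat to the atmosphere; stepping it forward in time is the kind of real-time transient calculation described above. All masses, heat-transfer coefficients, and powers are hypothetical illustrations, not facility data.

      # Lumped two-node cooling model: primary loop heated by the core power
      # and cooled through a heat exchanger into a secondary loop that
      # rejects heat to the atmosphere. All parameters are hypothetical.
      def step(T_p, T_s, dt, P_core=300e3, m_p=2000.0, m_s=5000.0,
               cp=4186.0, UA_hx=60e3, UA_amb=20e3, T_amb=25.0):
          """One explicit Euler step for primary (T_p) and secondary (T_s) temps."""
          q_hx = UA_hx * (T_p - T_s)          # heat-exchanger transfer, W
          q_rej = UA_amb * (T_s - T_amb)      # rejection to atmosphere, W
          T_p += dt * (P_core - q_hx) / (m_p * cp)
          T_s += dt * (q_hx - q_rej) / (m_s * cp)
          return T_p, T_s

      T_p, T_s, dt = 40.0, 30.0, 1.0          # initial state, 1-s time step
      for t in range(7200):                   # two hours of plant time
          # a control-system input could adjust P_core here in real time
          T_p, T_s = step(T_p, T_s, dt)
      print(f"after 2 h: T_primary = {T_p:.1f} C, T_secondary = {T_s:.1f} C")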

  6. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman exists to speak without bias and prejudice for the public good; technical jargon with unclear definitions pervades the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Other concerns are the numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the resources of Federal and State health agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community.

  7. Urban nonpoint source pollution buildup and washoff models for simulating storm runoff quality in the Los Angeles County.

    Science.gov (United States)

    Wang, Long; Wei, Jiahua; Huang, Yuefei; Wang, Guangqian; Maqsood, Imran

    2011-07-01

    Many urban nonpoint source pollution models utilize pollutant buildup and washoff functions to simulate storm runoff quality of urban catchments. In this paper, two urban pollutant washoff load models are derived using pollutant buildup and washoff functions. The first model assumes that there is no residual pollutant after a storm event while the second one assumes that there is always residual pollutant after each storm event. The developed models are calibrated and verified with observed data from an urban catchment in Los Angeles County. The application results show that the developed model with consideration of residual pollutant is more capable of simulating nonpoint source pollution from urban storm runoff than that without consideration of residual pollutant. For the study area, residual pollutant should be considered in pollutant buildup and washoff functions for simulating urban nonpoint source pollution when the total runoff volume is less than 30 mm. Copyright © 2011 Elsevier Ltd. All rights reserved.
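
    The exponential buildup and first-order washoff forms commonly used in such models can be sketched as follows; the washoff function returns the residual load that the paper's second model carries over between storm events. All coefficients and event data are illustrative, not the calibrated values from the study.

      import numpy as np

      def buildup(B0, dry_days, B_max=12.0, k_b=0.4):
          """Exponential pollutant buildup from initial load B0 (kg/ha)."""
          return B_max - (B_max - B0) * np.exp(-k_b * dry_days)

      def washoff(B, runoff_mm, k_w=0.18):
          """First-order washoff: returns (load washed off, residual load)."""
          washed = B * (1.0 - np.exp(-k_w * runoff_mm))
          return washed, B - washed

      # Variant with residual pollutant: the post-event load seeds the next
      # buildup period instead of being reset to zero.
      B = 0.0
      for dry_days, runoff in [(7, 10.0), (3, 35.0), (14, 5.0)]:   # toy events
          B = buildup(B, dry_days)
          load, B = washoff(B, runoff)
          print(f"runoff {runoff:4.1f} mm -> washoff {load:5.2f} kg/ha, "
                f"residual {B:5.2f} kg/ha")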

  8. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.
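
    A minimal sketch of how such a comparison turns approximate log model evidences (e.g., free energies from a PEB inversion) into posterior model probabilities; the model names and free-energy values below are hypothetical, not the study's numbers.

      import numpy as np

      # Hypothetical group-level log evidences (free energies) for three
      # forward-model/prior combinations; larger is better. Illustrative only.
      F = {"BEM + MSP": -4210.0, "FDM + MSP": -4195.0, "FDM+CSF + MSP": -4181.0}

      names = list(F)
      logev = np.array([F[n] for n in names])
      p = np.exp(logev - logev.max())
      p /= p.sum()                  # posterior model probabilities (flat prior)

      for n, pi in zip(names, p):
          print(f"{n:15s} P(m|y) = {pi:.3g}")
      # A log-evidence difference of about 3 (Bayes factor ~20) is
      # conventionally read as strong evidence for the better model.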

  9. Evaluation of Stem Cell-Derived Red Blood Cells as a Transfusion Product Using a Novel Animal Model.

    Directory of Open Access Journals (Sweden)

    Sandeep N Shah

    Full Text Available Reliance on volunteer blood donors can lead to transfusion product shortages, and current liquid storage of red blood cells (RBCs) is associated with biochemical changes over time, known as 'the storage lesion'. Thus, there is a need for alternative sources of transfusable RBCs to supplement conventional blood donations. Extracorporeal production of stem cell-derived RBCs (stemRBCs) is a potential and yet untapped source of fresh, transfusable RBCs. A number of groups have attempted RBC differentiation from CD34+ cells. However, it is still unclear whether these stemRBCs could eventually be effective substitutes for traditional RBCs due to potential differences in oxygen carrying capacity, viability, deformability, and other critical parameters. We have generated ex vivo stemRBCs from primary human cord blood CD34+ cells and compared them to donor-derived RBCs based on a number of in vitro parameters. In vivo, we assessed stemRBC circulation kinetics in an animal model of transfusion and oxygen delivery in a mouse model of exercise performance. Our novel, chronically anemic, SCID mouse model can evaluate the potential of stemRBCs to deliver oxygen to tissues (muscle) under resting and exercise-induced hypoxic conditions. Based on our data, stem cell-derived RBCs have a similar biochemical profile compared to donor-derived RBCs. While certain key differences remain between donor-derived RBCs and stemRBCs, the ability of stemRBCs to deliver oxygen in a living organism provides support for further development as a transfusion product.

  10. Open-source Software for Exoplanet Atmospheric Modeling

    Science.gov (United States)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.

  11. Algorithm for Financial Derivatives Evaluation in a Generalized Multi-Heston Model

    Directory of Open Access Journals (Sweden)

    Dan Negura

    2013-02-01

    Full Text Available In this paper we show how a financial derivative could be estimated based on an assumed Multi-Heston model. Keywords: Euler-Maruyama discretization method, Monte Carlo simulation, Heston model, Double-Heston model, Multi-Heston model
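
    A minimal Euler-Maruyama Monte Carlo sketch for a single Heston variance factor (a Multi-Heston model adds further independent variance factors to the same scheme), using full truncation to keep the variance non-negative. All market parameters below are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def heston_terminal(S0=100.0, v0=0.04, kappa=2.0, theta=0.04, xi=0.5,
                          rho=-0.7, r=0.01, T=1.0, n_steps=252, n_paths=50_000):
          """Euler-Maruyama terminal prices of the Heston model (full truncation)."""
          dt = T / n_steps
          S = np.full(n_paths, S0)
          v = np.full(n_paths, v0)
          for _ in range(n_steps):
              z1 = rng.standard_normal(n_paths)
              z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
              vp = np.maximum(v, 0.0)                    # full truncation
              S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
              v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
          return S

      # Price a European call as the discounted Monte Carlo payoff average.
      K, r, T = 100.0, 0.01, 1.0
      S_T = heston_terminal()
      price = np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()
      print(f"Monte Carlo European call price: {price:.3f}")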

  12. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models.
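
    The inverse Monte Carlo logic (sample source properties, run the forward model, keep only the realizations that reproduce the sedimentary record) can be sketched with a toy forward model standing in for CASCADE; every number and the forward rule below are illustrative, not from the study.

      import numpy as np

      rng = np.random.default_rng(1)

      def forward(d50, supply):
          """Toy stand-in for a network sediment-flux model: flux reaching the
          outlet for a source grain size d50 (mm) and supply rate (kt/yr)."""
          capacity = 80.0 / d50        # finer sediment travels further (toy rule)
          return min(supply, capacity)

      observed_flux, tol = 60.0, 5.0   # hypothetical sedimentary record

      kept = []
      for _ in range(7500):            # as in the study, 7500 initializations
          d50 = rng.uniform(0.1, 10.0)     # sample a source grain size
          supply = rng.uniform(10.0, 200.0)
          if abs(forward(d50, supply) - observed_flux) < tol:
              kept.append((d50, supply))

      print(f"{len(kept)} of 7500 realizations match the record "
            f"({100 * len(kept) / 7500:.1f}%)")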

  13. Modeling ramp-hold indentation measurements based on Kelvin-Voigt fractional derivative model

    Science.gov (United States)

    Zhang, Hongmei; zhe Zhang, Qing; Ruan, Litao; Duan, Junbo; Wan, Mingxi; Insana, Michael F.

    2018-03-01

    Interpretation of experimental data from micro- and nano-scale indentation testing is highly dependent on the constitutive model selected to relate measurements to mechanical properties. The Kelvin-Voigt fractional derivative model (KVFD) offers a compact set of viscoelastic features appropriate for characterizing soft biological materials. This paper provides a set of KVFD solutions for converting indentation testing data acquired for different geometries and scales into viscoelastic properties of soft materials. These solutions, which are mostly in closed-form, apply to ramp-hold relaxation, load-unload and ramp-load creep-testing protocols. We report on applications of these model solutions to macro- and nano-indentation testing of hydrogels, gastric cancer cells and ex vivo breast tissue samples using an atomic force microscope (AFM). We also applied KVFD models to clinical ultrasonic breast data using a compression plate as required for elasticity imaging. Together the results show that KVFD models fit a broad range of experimental data with a correlation coefficient typically R² > 0.99. For hydrogel samples, estimation of KVFD model parameters from test data using spherical indentation versus plate compression as well as ramp relaxation versus load-unload compression all agree within one standard deviation. Results from measurements made using macro- and nano-scale indentation agree in trend. For gastric cell and ex vivo breast tissue measurements, KVFD moduli are, respectively, 1/3-1/2 and 1/6 of the elasticity modulus found from the Sneddon model. In vivo breast tissue measurements yield model parameters consistent with literature results. The consistency of results found for a broad range of experimental parameters suggests the KVFD model is a reliable tool for exploring intrinsic features of the cell/tissue microenvironments.
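
    The KVFD law sigma(t) = E0*(eps(t) + tau**alpha * D^alpha eps(t)) can be evaluated for a ramp-hold protocol with a Grunwald-Letnikov approximation of the fractional derivative, as sketched below; the material parameters are chosen purely for illustration and are not the paper's fitted values.

      import numpy as np

      def gl_frac_deriv(f, alpha, h):
          """Grunwald-Letnikov approximation of the order-alpha derivative of a
          sampled signal f with f[0] = 0 and uniform time step h."""
          n = len(f)
          w = np.empty(n)
          w[0] = 1.0
          for k in range(1, n):
              w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)   # GL binomial weights
          return np.array([np.dot(w[:i + 1], f[i::-1]) for i in range(n)]) / h**alpha

      # Ramp-hold strain protocol: ramp to eps0 over t_r seconds, then hold.
      h, t_end, t_r, eps0 = 0.01, 10.0, 1.0, 0.05
      t = np.arange(0.0, t_end, h)
      eps = np.where(t < t_r, eps0 * t / t_r, eps0)

      # KVFD constitutive law: sigma = E0 * (eps + tau**alpha * D^alpha eps)
      E0, tau, alpha = 5e3, 1.0, 0.3     # illustrative soft-material values
      sigma = E0 * (eps + tau**alpha * gl_frac_deriv(eps, alpha, h))

      print(f"peak stress {sigma[int(t_r / h)]:.0f} Pa, "
            f"relaxed to {sigma[-1]:.0f} Pa at t = {t_end} s")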

  14. Optimum load distribution between heat sources based on the Cournot model

    Science.gov (United States)

    Penkovskii, A. V.; Stennikov, V. A.; Khamisov, O. V.

    2015-08-01

    One widespread model of heat supply to consumers, organized in the "single buyer" format, is considered. The methodological basis proposed for its description and investigation draws on principles of game theory, basic propositions of microeconomics, and the models and methods of the theory of hydraulic circuits. The mathematical model of a heat supply system operating under the "single buyer" organizational structure yields a solution satisfying the market Nash equilibrium. The distinctive feature of the developed mathematical model is that, along with the problems traditionally solved within bilateral relations between heat energy sources and heat consumers, it considers a network component with the inherent physicotechnical properties of the heat network and the business factors connected with the costs of producing and transporting heat energy. This approach makes it possible to determine optimum load levels for the heat energy sources: levels that meet the given heat energy demand of consumers while maximizing the profit of the heat energy sources and minimizing the heat network costs over a specified time. The practical search for the market equilibrium is illustrated by the example of a heat supply system with two heat energy sources operating on integrated heat networks. The solution approach is represented graphically and illustrates computations based on a stepwise iteration procedure for optimizing the loading levels of the heat energy sources (Cournot's groping procedure), with the corresponding computation of the heat energy price for consumers.
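
    The stepwise "groping" iteration can be sketched for two sources facing a linear inverse demand curve for heat: each source repeatedly plays its profit-maximizing output against the other's last move until the Nash equilibrium is reached. The demand and cost numbers below are illustrative, not from the paper, and network hydraulics are abstracted into constant marginal costs.

      # Toy Cournot tatonnement for two heat sources with inverse demand
      # p(Q) = a - b*Q and constant marginal (production + network) costs.
      a, b = 100.0, 0.5
      c1, c2 = 20.0, 30.0

      def best_response(q_other, c):
          """Profit-maximizing output given the rival's output (analytic for
          linear demand): argmax_q (a - b*(q + q_other) - c) * q."""
          return max((a - c - b * q_other) / (2.0 * b), 0.0)

      q1 = q2 = 0.0
      for _ in range(50):                  # stepwise iteration ("groping")
          q1 = best_response(q2, c1)
          q2 = best_response(q1, c2)

      price = a - b * (q1 + q2)
      print(f"Nash equilibrium loads: q1={q1:.2f}, q2={q2:.2f}, "
            f"heat price={price:.2f}")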

  15. United States‐Mexican border watershed assessment: Modeling nonpoint source pollution in Ambos Nogales

    Science.gov (United States)

    Norman, Laura M.

    2007-01-01

    Ecological considerations need to be interwoven with economic policy and planning along the United States‐Mexican border. Non‐point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict nonpoint source pollution that can be used for border watersheds. The modeling approach links a hillslope-scale erosion‐prediction model and a spatially derived sediment‐delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation‐planning problem.

  16. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, notably when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
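
    The aircraft mass-balance idea can be sketched as a discrete integral of the CO2 density enhancement times the wind component normal to a downwind flight transect, multiplied by an assumed well-mixed plume depth. The transect geometry, plume shape, and wind value below are synthetic, not data from the Biganos campaign.

      import numpy as np

      y = np.linspace(-2000.0, 2000.0, 401)            # crosswind distance, m
      dy = y[1] - y[0]
      z_depth = 500.0                                   # assumed mixed depth, m
      enhancement = 2.0e-5 * np.exp(-(y / 600.0) ** 2)  # CO2 excess, kg/m3
      u_normal = 5.0                                    # wind normal to transect, m/s

      # Emission rate Q = depth * sum(excess * u) * dy   [kg/s]
      Q = z_depth * np.sum(enhancement * u_normal) * dy
      print(f"estimated source strength: {Q:.1f} kg CO2/s "
            f"({Q * 86400 / 1000:.0f} t CO2/day)")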

  17. Calculus for cognitive scientists derivatives, integrals and models

    CERN Document Server

    Peterson, James K

    2016-01-01

    This book provides a self-study program on how mathematics, computer science and science can be usefully and seamlessly intertwined. Learning to use ideas from mathematics and computation is essential for understanding approaches to cognitive and biological science. As such the book covers calculus in one variable and two variables and works through a number of interesting first-order ODE models. It uses MATLAB in computational exercises where the models cannot be solved by hand, and also helps readers to understand that approximations cause errors – a fact that must always be kept in mind.

  18. Understanding forest-derived biomass supply with GIS modelling

    DEFF Research Database (Denmark)

    Hock, B. K.; Blomqvist, L.; Hall, P.

    2012-01-01

    distribution, and the cost of delivery as forests are frequently remote from energy users. A GIS-based model was developed to predict supply curves of forest biomass material for a site or group of sites, both now and in the future. The GIS biomass supply model was used to assist the New Zealand Energy Efficiency and Conservation Authority's development of a national target for biomass use for industrial heat production, to determine potential forest residue volumes for industrial heat and their delivery costs for 19 processing plants of the dairy company Fonterra, and towards investigating options

  19. Using statistical compatibility to derive advanced probabilistic fatigue models

    Czech Academy of Sciences Publication Activity Database

    Fernández-Canteli, A.; Castillo, E.; López-Aenlle, M.; Seitl, Stanislav

    2010-01-01

    Vol. 2, No. 1 (2010), pp. 1131-1140. E-ISSN 1877-7058. [Fatigue 2010. Praha, 06.06.2010-11.06.2010] Institutional research plan: CEZ:AV0Z20410507 Keywords: Fatigue models * Statistical compatibility * Functional equations Subject RIV: JL - Materials Fatigue, Friction Mechanics

  20. Deriving vehicle-to-grid business models from consumer preferences

    NARCIS (Netherlands)

    Bohnsack, René; van den Hoed, Robert; Oude Reimer, Hugo

    2015-01-01

    Combining electric cars with utility services seems to be a natural fit and holds the promise to tackle various mobility as well as electricity challenges at the same time. So far no viable business model for vehicle-to-grid technology has emerged, raising the question which characteristics a

  1. Derivation of Monotone Decision Models from Non-Monotone Data

    NARCIS (Netherlands)

    Daniëls, H.A.M.; Velikova, M.V.

    2003-01-01

    The objective of data mining is the extraction of knowledge from databases. In practice, one often encounters difficulties with models that are constructed purely by search, without incorporation of knowledge about the domain of application. In economic decision making such as credit loan approval or

  2. REE enrichment in granite-derived regolith deposits of the southeast United States: Prospective source rocks and accumulation processes

    Science.gov (United States)

    Foley, Nora K.; Ayuso, Robert A.; Simandl, G.J.; Neetz, M.

    2015-01-01

    The Southeastern United States contains numerous anorogenic, or A-type, granites, which constitute promising source rocks for REE-enriched ion adsorption clay deposits due to their inherently high concentrations of REE. These granites have undergone a long history of chemical weathering, resulting in thick granite-derived regoliths, akin to those of South China, which supply virtually all heavy REE and Y, and a significant portion of light REE to global markets. Detailed comparisons of granite regolith profiles formed on the Stewartsville and Striped Rock plutons, and the Robertson River batholith (Virginia) indicate that REE are mobile and can attain grades comparable to those of deposits currently mined in China. A REE-enriched parent, either A-type or I-type (highly fractionated igneous type) granite, is thought to be critical for generating the high concentrations of REE in regolith profiles. One prominent feature we recognize in many granites and mineralized regoliths is the tetrad behaviour displayed in REE chondrite-normalized patterns. Tetrad patterns in granite and regolith result from processes that promote the redistribution, enrichment, and fractionation of REE, such as late- to post- magmatic alteration of granite and silicate hydrolysis in the regolith. Thus, REE patterns showing tetrad effects may be a key for discriminating highly prospective source rocks and regoliths with potential for REE ion adsorption clay deposits.

  3. A fractal derivative constitutive model for three stages in granite creep

    Directory of Open Access Journals (Sweden)

    R. Wang

    Full Text Available In this paper, by replacing the Newtonian dashpot with a fractal dashpot and considering damage effects, a new constitutive model is proposed in terms of a time-fractal derivative to describe the full creep regions of granite. The analytic solutions of the fractal derivative creep constitutive equation are derived via scaling transform. Conventional triaxial compression creep tests were performed on an MTS 815 rock mechanics test system to verify the efficiency of the new model. The granite specimens were taken from the Beishan site, the most promising candidate area for China's high-level radioactive waste repository. It is shown that the proposed fractal model can characterize the creep behavior of granite, especially in the accelerating stage, which classical models cannot predict. A parametric sensitivity analysis was also conducted to investigate the effects of model parameters on the creep strain of granite. Keywords: Beishan granite, Fractal derivative, Damage evolution, Scaling transformation
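
    A generic three-stage creep curve of the fractal-derivative type can be sketched as a power law t**beta (the fractal dashpot's primary/secondary response) amplified by a damage factor 1/(1 - D) that produces the accelerating stage; this is an illustrative form under assumed parameters, not the paper's exact analytic solution.

      import numpy as np

      sigma0, E0 = 30e6, 20e9      # applied stress (Pa), Young's modulus (Pa)
      tau, beta = 10.0, 0.4        # fractal dashpot time scale (h) and order
      t_f, m = 200.0, 6.0          # failure time (h) and damage exponent

      t = np.linspace(0.0, 195.0, 40)
      D = (t / t_f) ** m                               # damage evolution, 0 -> 1
      eps = sigma0 / E0 * (1.0 + (t / tau) ** beta) / (1.0 - D)

      for ti, ei in zip(t[::8], eps[::8]):
          print(f"t = {ti:6.1f} h   strain = {ei * 100:.3f} %")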

  4. Strategies to Automatically Derive a Process Model from a Configurable Process Model Based on Event Data

    Directory of Open Access Journals (Sweden)

    Mauricio Arriagada-Benítez

    2017-10-01

    Full Text Available Configurable process models are frequently used to represent business workflows and other discrete event systems among different branches of large organizations: they unify commonalities shared by all branches and describe their differences, at the same time. The configuration of such models is usually done manually, which is challenging. On the one hand, when the number of configurable nodes in the configurable process model grows, the size of the search space increases exponentially. On the other hand, the person performing the configuration may lack the holistic perspective to make the right choice for all configurable nodes at the same time, since choices influence each other. Nowadays, information systems that support the execution of business processes create event data reflecting how processes are performed. In this article, we propose three strategies (based on exhaustive search, genetic algorithms and a greedy heuristic) that use event data to automatically derive a process model from a configurable process model that better represents the characteristics of the process in a specific branch. These strategies have been implemented in our proposed framework and tested in both business-like event logs as recorded in a higher educational enterprise resource planning system and a real case scenario involving a set of Dutch municipalities.
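
    The greedy strategy can be sketched as fixing one configurable node at a time, keeping the option that maximizes a replay-fitness score against the event log. The model, its options, the toy log, and the scoring function below are all hypothetical.

      # Greedy configuration of a process model against an event log: fix one
      # configurable node at a time, keeping the highest-fitness choice.
      config_nodes = {"approve": ["manual", "auto", "skip"],
                      "notify":  ["email", "none"],
                      "archive": ["on", "off"]}

      def fitness(config, log):
          """Toy replay fitness: fraction of traces consistent with the choices."""
          ok = sum(all(config[n] in trace.get(n, config_nodes[n]) for n in config)
                   for trace in log)
          return ok / len(log)

      event_log = [{"approve": ["manual"], "notify": ["email"]},
                   {"approve": ["manual"], "archive": ["on"]},
                   {"approve": ["auto"], "notify": ["email"]}]

      config = {}
      for node, options in config_nodes.items():       # greedy, node by node
          config[node] = max(options,
                             key=lambda o: fitness({**config, node: o}, event_log))
      print("derived configuration:", config)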

  5. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette Rohr

    2006-03-01

    TERESA (Toxicological Evaluation of Realistic Emissions of Source Aerosols) involves exposing laboratory rats to realistic coal-fired power plant and mobile source emissions to help determine the relative toxicity of these PM sources. There are three coal-fired power plants in the TERESA program; this report describes the results of fieldwork conducted at the first plant, located in the Upper Midwest. The project was technically challenging by virtue of its novel design and requirement for the development of new techniques. By examining aged, atmospherically transformed aerosol derived from power plant stack emissions, we were able to evaluate the toxicity of PM derived from coal combustion in a manner that more accurately reflects the exposure of concern than existing methodologies. TERESA also involves assessment of actual plant emissions in a field setting--an important strength since it reduces the question of representativeness of emissions. A sampling system was developed and assembled to draw emissions from the stack; stack sampling conducted according to standard EPA protocol suggested that the sampled emissions are representative of those exiting the stack into the atmosphere. Two mobile laboratories were then outfitted for the study: (1) a chemical laboratory in which the atmospheric aging was conducted and which housed the bulk of the analytical equipment; and (2) a toxicological laboratory, which contained animal caging and the exposure apparatus. Animal exposures were carried out from May-November 2004 to a number of simulated atmospheric scenarios. Toxicological endpoints included (1) pulmonary function and breathing pattern; (2) bronchoalveolar lavage fluid cytological and biochemical analyses; (3) blood cytological analyses; (4) in vivo oxidative stress in heart and lung tissue; and (5) heart and lung histopathology. Results indicated no differences between exposed and control animals in any of the endpoints examined. Exposure concentrations for the

  6. A Derivation of Source-based Kinetics Equation with Time Dependent Fission Kernel for Reactor Transient Analyses

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Woo, Myeong Hyun; Shin, Chang Ho; Pyeon, Cheol Ho

    2015-01-01

    In this study, a new balance equation, based on a source-based balance formulation, is proposed to overcome the problems generated by previous methods, and a simple problem is then analyzed with the proposed method. A source-based balance equation with a time-dependent fission kernel was derived to simplify the kinetics equation. To analyze partial variations of reactor characteristics, two representative methods were introduced in previous studies: (1) the quasi-statics method and (2) the multipoint technique. The main idea of the quasi-statics method is to use a low-order approximation for large integration times. To realize the quasi-statics method, the time-dependent flux is first separated into shape and amplitude functions, and the shape function is calculated. The method has good accuracy; however, it can be computationally expensive because the shape function must be fully recalculated to obtain accurate results. To improve calculation efficiency, the multipoint method was proposed. The multipoint method is based on the classic kinetics equation, using Green's function to analyze the flight probability from region r' to r. These methods have been used for reactor kinetics analysis; however, they have some limitations. First, three group variables (r_g, E_g, t_g) must be considered to solve the time-dependent balance equation, which severely limits the application to large systems with good accuracy. Second, energy-group neutrons must be used to analyze reactor kinetics problems; in a time-dependent problem, the neutron energy distribution can change over time, which changes the group cross sections and can therefore degrade accuracy. Third, the neutrons in a space-time region continually affect other space-time regions, which is not properly considered in the previous methods. Using birth history of the neutron sources

  7. A Derivation of Source-based Kinetics Equation with Time Dependent Fission Kernel for Reactor Transient Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Woo, Myeong Hyun; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of); Pyeon, Cheol Ho [Kyoto University, Osaka (Japan)

    2015-10-15

    In this study, a new balance equation, based on a source-based balance formulation, is proposed to overcome the problems generated by previous methods, and a simple problem is then analyzed with the proposed method. A source-based balance equation with a time-dependent fission kernel was derived to simplify the kinetics equation. To analyze partial variations of reactor characteristics, two representative methods were introduced in previous studies: (1) the quasi-statics method and (2) the multipoint technique. The main idea of the quasi-statics method is to use a low-order approximation for large integration times. To realize the quasi-statics method, the time-dependent flux is first separated into shape and amplitude functions, and the shape function is calculated. The method has good accuracy; however, it can be computationally expensive because the shape function must be fully recalculated to obtain accurate results. To improve calculation efficiency, the multipoint method was proposed. The multipoint method is based on the classic kinetics equation, using Green's function to analyze the flight probability from region r' to r. These methods have been used for reactor kinetics analysis; however, they have some limitations. First, three group variables (r_g, E_g, t_g) must be considered to solve the time-dependent balance equation, which severely limits the application to large systems with good accuracy. Second, energy-group neutrons must be used to analyze reactor kinetics problems; in a time-dependent problem, the neutron energy distribution can change over time, which changes the group cross sections and can therefore degrade accuracy. Third, the neutrons in a space-time region continually affect other space-time regions, which is not properly considered in the previous methods. Using birth history of the

  8. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  9. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: Gas dispersion models are important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the predictions from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The predictions from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  10. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: Gas dispersion models are important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the predictions from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The predictions from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
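
    The Gaussian-MLA idea (feed the classic Gaussian plume output, rather than raw monitoring parameters alone, into a machine learning regressor) can be sketched with scikit-learn's SVR on synthetic data; the plume geometry, dispersion parameters, and the distortion applied to generate "observations" are all illustrative.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)

      def gaussian_plume(Q, u, y, z, H=20.0, sy=25.0, sz=12.0):
          """Classic Gaussian plume concentration for source strength Q,
          scaled to O(1) arbitrary units for the regressor."""
          return (1e5 * Q / (2 * np.pi * u * sy * sz)
                  * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - H)**2 / (2 * sz**2))
                     + np.exp(-(z + H)**2 / (2 * sz**2))))

      # Synthetic 'observations': the plume prediction distorted by effects
      # the analytic model does not capture, plus measurement noise.
      n = 400
      u = rng.uniform(1.0, 8.0, n)               # wind speed, m/s
      y = rng.uniform(-60.0, 60.0, n)            # crosswind offset, m
      base = gaussian_plume(Q=1.0, u=u, y=y, z=2.0)
      obs = base * (1.0 + 0.3 * np.sin(u)) + rng.normal(0.0, 0.1, n)

      # Gaussian-SVM hybrid: regress observations on the plume output plus
      # the meteorological inputs, not on raw monitoring parameters alone.
      X = np.column_stack([base, u, y])
      model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X[:300], obs[:300])
      rmse = np.sqrt(np.mean((model.predict(X[300:]) - obs[300:]) ** 2))
      print(f"hold-out RMSE of the Gaussian-SVM hybrid: {rmse:.3f}")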

  11. Using Dual Isotopes and a Bayesian Isotope Mixing Model to Evaluate Nitrate Sources of Surface Water in a Drinking Water Source Watershed, East China

    Directory of Open Access Journals (Sweden)

    Meng Wang

    2016-08-01

    Full Text Available A high concentration of nitrate (NO3−) in surface water threatens aquatic systems and human health. Revealing nitrate characteristics and identifying its sources are fundamental to making effective water management strategies. However, nitrate sources in multi-tributary and mixed land use watersheds remain unclear. In this study, based on 20 surface water sampling sites monitored for more than two years, from April 2012 to December 2014, water chemistry and dual isotopic approaches (δ15N-NO3− and δ18O-NO3−) were integrated for the first time to evaluate nitrate characteristics and sources in the Huashan watershed, Jianghuai hilly region, China. Nitrate-nitrogen concentrations (ranging from 0.02 to 8.57 mg/L) were spatially heterogeneous, influenced by hydrogeological and land use conditions. Proportional contributions of five potential nitrate sources (i.e., precipitation; manure and sewage, M & S; soil nitrogen, NS; nitrate fertilizer; nitrate derived from ammonia fertilizer and rainfall) were estimated by using a Bayesian isotope mixing model. The results showed that nitrate source contributions varied significantly among different rainfall conditions and land use types. For the whole watershed, M & S (manure and sewage) and NS (soil nitrogen) were the major nitrate sources in both wet and dry seasons (from 28% to 36% for manure and sewage and from 24% to 27% for soil nitrogen, respectively). Overall, combining a dual isotope method with a Bayesian isotope mixing model offered a useful and practical way to qualitatively analyze nitrate sources and transformations as well as quantitatively estimate the contributions of potential nitrate sources in drinking water source watersheds in the Jianghuai hilly region, eastern China.
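
    The core of such a Bayesian isotope mixing model can be sketched as Metropolis-Hastings sampling of source contributions on the simplex, given source isotope signatures and a measured mixture; the signatures, uncertainty, and proposal tuning below are hypothetical, not the study's data, and only three sources are used to keep the toy problem identifiable.

      import numpy as np
      from scipy.stats import dirichlet

      rng = np.random.default_rng(3)

      # Hypothetical source signatures (d15N, d18O in per mil) and a mixture.
      mu = np.array([[12.0, 2.0],    # manure & sewage
                     [5.0, 4.0],     # soil nitrogen
                     [0.0, 22.0]])   # nitrate fertilizer
      obs = np.array([7.5, 6.0])     # measured mixture signature
      sigma = 1.0                    # combined measurement/process sd

      def log_post(f):
          """Gaussian likelihood of the mixture under contributions f, flat prior."""
          return -np.sum((obs - f @ mu) ** 2) / (2.0 * sigma**2)

      f = np.ones(3) / 3.0
      samples = []
      for i in range(10_000):
          prop = rng.dirichlet(50.0 * f + 1.0)      # proposal centred on f
          # Metropolis-Hastings ratio with the asymmetric proposal corrected
          log_q_fwd = dirichlet.logpdf(prop, 50.0 * f + 1.0)
          log_q_rev = dirichlet.logpdf(f, 50.0 * prop + 1.0)
          if np.log(rng.uniform()) < (log_post(prop) - log_post(f)
                                      + log_q_rev - log_q_fwd):
              f = prop
          if i >= 2_000:                            # discard burn-in
              samples.append(f)

      for name, m in zip(["M & S", "soil N", "fertilizer"],
                         np.mean(samples, axis=0)):
          print(f"{name:10s} mean contribution: {m:.2f}")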

  12. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices restrict scalability while incurring high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the processing-capacity restriction on a single controller and reduces the computational complexity. 2) Vector switches (VS) developed in the core network no longer apply TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We designed the VS on the NetFPGA platform; statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.

  13. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices restrict scalability while incurring high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the processing-capacity restriction on a single controller and reduces the computational complexity. 2) Vector switches (VS) developed in the core network no longer apply TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We designed the VS on the NetFPGA platform; statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925
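
    The vector-address idea can be sketched in a few lines: the controller computes the path once, encodes it as a sequence of output ports in the packet header, and each vector switch forwards by consuming one element, with no table lookup. The topology and port numbers below are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class Packet:
          va: list                 # remaining output ports; head = next hop
          payload: str = ""

      class VectorSwitch:
          def __init__(self, name, ports):
              self.name = name
              self.ports = ports   # port index -> neighbouring switch or host

          def forward(self, pkt):
              port = pkt.va.pop(0)          # consume one VA element; no lookup
              nxt = self.ports[port]
              if isinstance(nxt, VectorSwitch):
                  print(f"{self.name}: out port {port} -> {nxt.name}")
                  nxt.forward(pkt)
              else:
                  print(f"{self.name}: out port {port} -> "
                        f"delivered {pkt.payload!r} to {nxt}")

      # Tiny three-switch domain: s1 -> s2 -> s3 -> hostB.
      s3 = VectorSwitch("s3", {1: "hostB"})
      s2 = VectorSwitch("s2", {2: s3})
      s1 = VectorSwitch("s1", {0: s2})

      # The controller computes the path once and encodes it as the VA [0, 2, 1].
      s1.forward(Packet(va=[0, 2, 1], payload="hello"))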

  14. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model. Keywords: solar wind - Sun: corona - Sun: magnetic topology

  15. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikić, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-04-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model.

  16. Deriving forest fire ignition risk with biogeochemical process modelling.

    Science.gov (United States)

    Eastaugh, C S; Hasenauer, H

    2014-05-01

    Climate impacts the growth of trees and also affects disturbance regimes such as wildfire frequency. The European Alps have warmed considerably over the past half-century, but incomplete records make it difficult to definitively link alpine wildfire to climate change. Complicating this is the influence of forest composition and fuel loading on fire ignition risk, which is not considered by purely meteorological risk indices. Biogeochemical forest growth models track several variables that may be used as proxies for fire ignition risk. This study assesses the usefulness of the ecophysiological model BIOME-BGC's 'soil water' and 'labile litter carbon' variables in predicting fire ignition. A brief application case examines historic fire occurrence trends over pre-defined regions of Austria from 1960 to 2008. Results show that summer fire ignition risk is largely a function of low soil moisture, while winter fire ignitions are linked to the mass of volatile litter and atmospheric dryness.

  17. CHARACTERIZING AND PROPAGATING MODELING UNCERTAINTIES IN PHOTOMETRICALLY DERIVED REDSHIFT DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Abrahamse, Augusta; Knox, Lloyd; Schmidt, Samuel; Thorman, Paul; Tyson, J. Anthony; Zhan, Hu

    2011-01-01

    The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from multi-band imaging surveys. The accuracy of the photometric redshifts measured in these surveys depends not only on the quality of the flux data, but also on a number of modeling assumptions that enter into both the training set and spectral energy distribution (SED) fitting methods of photometric redshift estimation. In this work we focus on the latter, considering two types of modeling uncertainties: uncertainties in the SED template set and uncertainties in the magnitude and type priors used in a Bayesian photometric redshift estimation method. We find that SED template selection effects dominate over magnitude prior errors. We introduce a method for parameterizing the resulting ignorance of the redshift distributions, and for propagating these uncertainties to uncertainties in cosmological parameters.

  18. Source modelling at the dawn of gravitational-wave astronomy

    Science.gov (United States)

    Gerosa, Davide

    2016-09-01

    The age of gravitational-wave astronomy has begun. Gravitational waves are propagating spacetime perturbations ("ripples in the fabric of space-time") predicted by Einstein's theory of General Relativity. These signals propagate at the speed of light and are generated by powerful astrophysical events, such as the merger of two black holes and supernova explosions. The first detection of gravitational waves was performed in 2015 with the LIGO interferometers. This constitutes a tremendous breakthrough in fundamental physics and astronomy: it is not only the first direct detection of such elusive signals, but also the first irrefutable observation of a black-hole binary system. The future of gravitational-wave astronomy is bright and loud: the LIGO experiments will soon be joined by a network of ground-based interferometers; the space mission eLISA has now been fully approved by the European Space Agency with a proof-of-concept mission called LISA Pathfinder launched in 2015. Gravitational-wave observations will provide unprecedented tests of gravity as well as a qualitatively new window on the Universe. Careful theoretical modelling of the astrophysical sources of gravitational-waves is crucial to maximize the scientific outcome of the detectors. In this Thesis, we present several advances on gravitational-wave source modelling, studying in particular: (i) the precessional dynamics of spinning black-hole binaries; (ii) the astrophysical consequences of black-hole recoils; and (iii) the formation of compact objects in the framework of scalar-tensor theories of gravity. All these phenomena are deeply characterized by a continuous interplay between General Relativity and astrophysics: despite being a truly relativistic messenger, gravitational waves encode details of the astrophysical formation and evolution processes of their sources. We work out signatures and predictions to extract such information from current and future observations. At the dawn of a revolutionary

  19. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or in three dimensions (to take into account the shape of the plasma at extraction as influenced by the hexapole). However, the characteristics of the plasma are not always well mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code that takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally

  20. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or in three dimensions (to take into account the shape of the plasma at extraction as influenced by the hexapole). However, the characteristics of the plasma are not always well mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code that takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  1. Modeling and simulation of RF photoinjectors for coherent light sources

    Science.gov (United States)

    Chen, Y.; Krasilnikov, M.; Stephan, F.; Gjonaj, E.; Weiland, T.; Dohlus, M.

    2018-05-01

    We propose a three-dimensional fully electromagnetic numerical approach for the simulation of RF photoinjectors for coherent light sources. The basic idea consists in incorporating a self-consistent photoemission model within a particle tracking code. The generation of electron beams in the injector is determined by the quantum efficiency (QE) of the cathode, the intensity profile of the driving laser as well as by the accelerating field and magnetic focusing conditions in the gun. The total charge emitted during an emission cycle can be limited by the space charge field at the cathode. Furthermore, the time and space dependent electromagnetic field at the cathode may induce a transient modulation of the QE due to surface barrier reduction of the emitting layer. In our modeling approach, all these effects are taken into account. The beam particles are generated dynamically according to the local QE of the cathode and the time dependent laser intensity profile. For the beam dynamics, a tracking code based on the Lienard-Wiechert retarded field formalism is employed. This code provides the single particle trajectories as well as the transient space charge field distribution at the cathode. As an application, the PITZ injector is considered. Extensive electron bunch emission simulations are carried out for different operation conditions of the injector, in the source limited as well as in the space charge limited emission regime. In both cases, fairly good agreement between measurements and simulations is obtained.
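    To make the emission model concrete: macro-particles can be sampled from the space-time intensity profile of the drive laser and thinned according to the local QE, which itself responds to the instantaneous cathode field. The following sketch shows that sampling step only, with invented Gaussian laser profiles and a hypothetical Schottky-like QE modulation; it is not the PITZ simulation chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def emit_macroparticles(n_draw, qe0, e_cathode_v_per_m,
                        sigma_xy=1e-3, sigma_t=5e-12):
    """Sample emission positions/times from a Gaussian laser pulse, thinned by QE."""
    x = rng.normal(0.0, sigma_xy, n_draw)
    y = rng.normal(0.0, sigma_xy, n_draw)
    t = rng.normal(0.0, sigma_t, n_draw)
    # Hypothetical Schottky-like enhancement: QE grows with the applied field.
    qe = qe0 * (1.0 + 1e-5 * np.sqrt(max(e_cathode_v_per_m, 0.0)))
    keep = rng.random(n_draw) < qe     # each photon converts with probability QE
    return x[keep], y[keep], t[keep]

x, y, t = emit_macroparticles(100_000, qe0=0.05, e_cathode_v_per_m=4e7)
print(f"emitted {t.size} macro-particles")
```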

  2. Relating Derived Relations as a Model of Analogical Reasoning: Reaction Times and Event-Related Potentials

    Science.gov (United States)

    Barnes-Holmes, Dermot; Regan, Donal; Barnes-Holmes, Yvonne; Commins, Sean; Walsh, Derek; Stewart, Ian; Smeets, Paul M.; Whelan, Robert; Dymond, Simon

    2005-01-01

    The current study aimed to test a Relational Frame Theory (RFT) model of analogical reasoning based on the relating of derived same and derived difference relations. Experiment 1 recorded reaction time measures of similar-similar (e.g., "apple is to orange as dog is to cat") versus different-different (e.g., "he is to his brother as…

  3. Stochastic Modeling of Wind Derivatives in Energy Markets

    Directory of Open Access Journals (Sweden)

    Fred Espen Benth

    2018-05-01

    We model the logarithm of the spot price of electricity with a normal inverse Gaussian (NIG) process, and the wind speed and wind power production with two Ornstein–Uhlenbeck processes. In order to reproduce the correlation between the spot price and the wind power production, that is, between a pure jump process and a continuous-path process, we replace the small jumps of the NIG process by a Brownian term. We then apply our models to two different problems: first, to study from the stochastic point of view the income from a wind power plant, as the expected value of the product between the electricity spot price and the amount of energy produced; second, to construct and price a European put-type quanto option in the wind energy markets that allows the buyer to hedge against low prices and low wind power production in the plant. Calibration of the proposed models and the related price formulas is also provided, according to specific datasets.
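    As a rough illustration of the pricing problem, the following Monte Carlo sketch discretizes a log-spot process (a Brownian term standing in for the small jumps, plus occasional large jumps) together with a mean-reverting OU wind-power process, and averages the payoff of a put-type quanto option. All parameter values, and the payoff form (K_S - S_T)+ (K_W - W_T)+, are illustrative assumptions rather than the calibrated models of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_terminal(n_paths, n_steps=250, dt=1.0 / 250.0, rho=-0.3):
    """Euler scheme: log-spot (Brownian term plus large jumps) and OU wind power."""
    log_s = np.full(n_paths, np.log(30.0))
    w = np.full(n_paths, 0.5)                   # normalized wind power production
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        jumps = rng.poisson(2.0 * dt, n_paths) * rng.normal(0.15, 0.05, n_paths)
        log_s += 5.0 * (np.log(30.0) - log_s) * dt + 0.4 * np.sqrt(dt) * z1 + jumps
        w += 2.0 * (0.5 - w) * dt + 0.15 * np.sqrt(dt) * z2
    return np.exp(log_s), w

def quanto_put_price(k_s=30.0, k_w=0.5, r=0.02, T=1.0, n_paths=100_000):
    s, w = simulate_terminal(n_paths)
    payoff = np.maximum(k_s - s, 0.0) * np.maximum(k_w - w, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(f"quanto put value (illustrative parameters): {quanto_put_price():.3f}")
```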

  4. Using Annotated Conceptual Models to Derive Information System Implementations

    Directory of Open Access Journals (Sweden)

    Anthony Berglas

    1994-05-01

    Producing production-quality information systems from conceptual descriptions is a time-consuming process that employs many of the world's programmers. Although most of this programming is fairly routine, the process has not been amenable to simple automation because conceptual models do not provide sufficient parameters to make all the implementation decisions that are required, and numerous special cases arise in practice. Most commercial CASE tools address these problems by essentially implementing a waterfall model, in which development proceeds from analysis through design, layout and coding phases in a partially automated manner, but the analyst/programmer must heavily edit each intermediate stage. This paper demonstrates that, by recognising the nature of information systems, it is possible to specify applications completely using a conceptual model that has been annotated with additional parameters that guide automated implementation. More importantly, it is argued that a manageable number of annotations is sufficient to implement realistic applications, and techniques are described that enabled the author's commercial CASE tool, the Intelligent Developer, to automate implementation without requiring complex theorem-proving technology.

  5. Modelling and simulation of [18F]fluoromisonidazole dynamics based on histology-derived microvessel maps

    Science.gov (United States)

    Mönnich, David; Troost, Esther G. C.; Kaanders, Johannes H. A. M.; Oyen, Wim J. G.; Alber, Markus; Thorwarth, Daniela

    2011-04-01

    Hypoxia can be assessed non-invasively by positron emission tomography (PET) using radiotracers such as [18F]fluoromisonidazole (Fmiso) accumulating in poorly oxygenated cells. Typical features of dynamic Fmiso PET data are high signal variability in the first hour after tracer administration and slow formation of a consistent contrast. The purpose of this study is to investigate whether these characteristics can be explained by the current conception of the underlying microscopic processes and to identify fundamental effects. This is achieved by modelling and simulating tissue oxygenation and tracer dynamics on the microscopic scale. In simulations, vessel structures on histology-derived maps act as sources and sinks for oxygen as well as tracer molecules. Molecular distributions in the extravascular space are determined by reaction-diffusion equations, which are solved numerically using a two-dimensional finite element method. Simulated Fmiso time activity curves (TACs), though not directly comparable to PET TACs, reproduce major characteristics of clinical curves, indicating that the microscopic model and the parameter values are adequate. Evidence for dependence of the early PET signal on the vascular fraction is found. Further, possible effects leading to late contrast formation and potential implications on the quantification of Fmiso PET data are discussed.
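    The microscopic picture described above, vessels acting as sources while cells consume or trap the tracer, can be caricatured with a simple explicit finite-difference scheme in place of the paper's finite element solver. Everything below (grid, rates, periodic boundaries) is an illustrative assumption.

```python
import numpy as np

def diffuse_tracer(vessel_mask, n_steps=2000, dt=0.01, D=1.0, k_bind=0.05):
    """2-D diffusion plus irreversible binding, explicit finite differences.
    vessel_mask marks pixels that clamp the free-tracer concentration to 1."""
    c = np.zeros(vessel_mask.shape)
    bound = np.zeros_like(c)
    for _ in range(n_steps):
        c[vessel_mask] = 1.0
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)  # periodic boundaries
        c += dt * (D * lap - k_bind * c)
        bound += dt * k_bind * c
    return c, bound

mask = np.zeros((64, 64), dtype=bool)
mask[32, 32] = True                     # one microvessel in the centre
free, trapped = diffuse_tracer(mask)
print(f"peak trapped activity: {trapped.max():.3f}")
```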

  6. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves.

    Science.gov (United States)

    Ripepe, M; Barfucci, G; De Angelis, S; Delle Donne, D; Lacanna, G; Marchetti, E

    2016-11-10

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere to form plumes rising several kilometers above eruptive vents, which can pose serious risks to human health and aviation even at distances of several thousand kilometers from the volcanic source. However, the most sophisticated models of atmospheric and eruptive plume dynamics require input parameters, such as the duration of the ejection phase and the total mass erupted, to constrain the quantity of ash dispersed in the atmosphere and to evaluate the related hazard efficiently. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters, such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models.

  7. Towards a Unified Source-Propagation Model of Cosmic Rays

    Science.gov (United States)

    Taylor, M.; Molla, M.

    2010-07-01

    It is well known that the cosmic ray energy spectrum is multifractal, with the analysis of cosmic ray fluxes as a function of energy revealing a first “knee” slightly below 10^16 eV, a second knee slightly below 10^18 eV and an “ankle” close to 10^19 eV. The behaviour of the highest-energy cosmic rays around and above the ankle is still a mystery and precludes the development of a unified source-propagation model of cosmic rays from their source origin to Earth. A variety of acceleration and propagation mechanisms have been proposed to explain different parts of the spectrum, the most famous of course being Fermi acceleration in magnetised turbulent plasmas (Fermi 1949). Many others have been proposed for energies at and below the first knee (Peters & Cimento (1961); Lagage & Cesarsky (1983); Drury et al. (1984); Wdowczyk & Wolfendale (1984); Ptuskin et al. (1993); Dova et al. (0000); Horandel et al. (2002); Axford (1991)) as well as at higher energies between the first knee and the ankle (Nagano & Watson (2000); Bhattacharjee & Sigl (2000); Malkov & Drury (2001)). The recent fit of most of the cosmic ray spectrum up to the ankle using non-extensive statistical mechanics (NESM) (Tsallis et al. (2003)) provides what may be the strongest evidence for a source-propagation system deviating significantly from Boltzmann statistics. As Tsallis has shown (Tsallis et al. (2003)), the knees appear as crossovers between two fractal-like thermal regimes. In this work, we have developed a generalisation of the second-order NESM model (Tsallis et al. (2003)) to higher orders, and we have fit the complete spectrum, including the ankle, with third-order NESM. We find that, towards the GZK limit, a new mechanism comes into play. Surprisingly, it presents as a modulation akin to that experienced in our own local neighbourhood by cosmic rays emitted by the sun. We propose that this is due to modulation at the source, possibly caused by processes in the shell of the originating supernova. We

  8. Autonomous learning derived from experimental modeling of physical laws.

    Science.gov (United States)

    Grabec, Igor

    2013-05-01

    This article deals with experimental description of physical laws by probability density function of measured data. The Gaussian mixture model specified by representative data and related probabilities is utilized for this purpose. The information cost function of the model is described in terms of information entropy by the sum of the estimation error and redundancy. A new method is proposed for searching the minimum of the cost function. The number of the resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial NN. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method does not include any free parameters except objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since representative data are generally less numerous than the measured ones, the method is applicable for a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow in normal working days and weekends or holidays, while the related probabilities correspond to relative frequencies of these days. This example reveals that autonomous learning yields a new basis for interpretation of representative data and the optimal model structure. Copyright © 2012 Elsevier Ltd. All rights reserved.
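    The compression step, many measured vectors reduced to a few representative prototypes with related probabilities, can be imitated with an off-the-shelf Gaussian mixture fit. The sketch below substitutes sklearn's EM-based GaussianMixture for the paper's entropy-based cost minimization and uses synthetic two-regime traffic data, so it reproduces the flavour of the result, not the method itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic stand-in for one year of daily traffic-flow profiles (365 days x 24
# hourly values) drawn from two regimes, "working day" and "weekend/holiday".
hours = np.linspace(0.0, 2.0 * np.pi, 24)
workday = 100.0 + 50.0 * np.sin(hours)
weekend = 60.0 + 20.0 * np.sin(hours)
days = np.array([workday if rng.random() < 5.0 / 7.0 else weekend
                 for _ in range(365)])
days += rng.normal(0.0, 5.0, days.shape)

# Component means act as representative prototype vectors; component weights
# play the role of the related probabilities (relative day-type frequencies).
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(days)
print("prototype shape:", gmm.means_.shape)       # (4, 24)
print("probabilities:", np.round(gmm.weights_, 3))
```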

  9. Deriving the nuclear shell model from first principles

    Science.gov (United States)

    Barrett, Bruce R.; Dikmen, Erdal; Vary, James P.; Maris, Pieter; Shirokov, Andrey M.; Lisetskiy, Alexander F.

    2014-09-01

    The results of an 18-nucleon No Core Shell Model calculation, performed in a large basis space using a bare, soft NN interaction, can be projected into the 0 ℏω space, i.e., the sd-shell. Because the 16 nucleons in the 16O core are frozen in the 0 ℏω space, all the correlations of the 18-nucleon system are captured by the two valence, sd-shell nucleons. By the projection, we obtain microscopically the sd-shell 2-body effective interactions, the core energy and the sd-shell s.p. energies. Thus, the input for standard shell-model calculations can be determined microscopically by this approach. If the same procedure is then applied to 19-nucleon systems, the sd-shell 3-body effective interactions can also be obtained, indicating the importance of these 3-body effective interactions relative to the 2-body effective interactions. Applications to A = 19 and heavier nuclei with different intrinsic NN interactions will be presented and discussed. Supported by the US NSF under Grant No. 0854912, the US DOE under

  10. Re-derived overclosure bound for the inert doublet model

    Science.gov (United States)

    Biondini, S.; Laine, M.

    2017-08-01

    We apply a formalism accounting for thermal effects (such as the modified Sommerfeld effect, the Salpeter correction, decohering scatterings, and the dissociation of bound states) to one of the simplest WIMP-like dark matter models, associated with an "inert" Higgs doublet. A broad temperature range T ~ M/20 ... M/10^4 is considered, stressing the importance and less-understood nature of the late annihilation stages. Even though only weak interactions play a role, we find that resummed real and virtual corrections increase the tree-level overclosure bound by 1–18%, depending on the quartic couplings and mass splittings.

  11. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field (the "port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  14. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions that are comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when tumor position is erroneously assumed to be ∼2.0 cm away from the actual position as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The
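    The reduced-order idea can be shown in a few lines: precompute one temperature-rise field per virtual source, then optimize only the mixing weights. The fields, masks and objective below are synthetic stand-ins (three virtual sources instead of ten antennas, and a simple penalty on cold tumour / hot normal tissue), not the clinical planning system.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical precomputed data: temperature-rise fields (flattened to vectors)
# produced by three "virtual sources", i.e. predefined magnitude/phase settings
# of all physical antennas.
n_voxels, n_virtual = 500, 3
fields = rng.random((n_voxels, n_virtual))
tumour = rng.random(n_voxels) > 0.8              # boolean mask of tumour voxels
target = np.where(tumour, 43.0, 37.0)            # desired temperatures (deg C)

def objective(w):
    t = 37.0 + fields @ (w ** 2)                 # squaring keeps weights nonnegative
    cold = np.maximum(target[tumour] - t[tumour], 0.0)   # under-heated tumour
    hot = np.maximum(t[~tumour] - 40.0, 0.0)             # over-heated normal tissue
    return (cold ** 2).sum() + (hot ** 2).sum()

res = minimize(objective, x0=np.ones(n_virtual), method="Nelder-Mead")
print("optimal virtual-source weights:", np.round(res.x ** 2, 3))
```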

  15. Henry's law and accumulation of weak source for crust-derived helium: A case study of Weihe Basin, China

    Directory of Open Access Journals (Sweden)

    Yuhong Li

    2017-12-01

    Crust-derived helium is generated by the radioactive decay of uranium, thorium and other radioactive elements in geological bodies. Compared with conventional natural gas, helium is a typical weak-source gas, as a result of its extremely slow generation rate and the absence of a helium-generating peak. It is frequently associated with methane or carbon dioxide reservoirs and is closely related to groundwater. Helium meets the industry standard at a volume fraction of 0.1%. In order to study the accumulation mechanism of helium, previous research on the Henry's coefficients and solubilities of helium, nitrogen and methane is summarized, and the key roles of Henry's law in helium migration, accumulation and preservation are discussed through simulation calculations, taking the Weihe Basin as an example. According to the law, the solubility of a gas in dilute solution is controlled by the gas partial pressure and the Henry's coefficient. Compared with the carrier gases, the Henry's constant of helium is high, with a striking difference between low and high temperatures. In addition, the helium partial pressure differs greatly between helium source rocks and gas reservoirs, resulting in great differences in helium solubility between the two settings. The accumulation process is as follows. First, helium dissolves into water and migrates out of the helium source rocks, owing to the high helium solubility caused by the high helium partial pressure and high temperature in the source rock. Second, when the dissolved helium is transported to a shallow gas reservoir, it tends to come out of solution and into the reservoir, owing to the extremely low partial pressure and low temperature there. Meanwhile, part of the carrier gases dissolves into the water, as if the helium were “replaced” out. Furthermore, a low-concentration funnel of dissolved helium forms near the gas reservoir, and other dissolved helium then continues to migrate towards the gas reservoir, which greatly improves the helium accumulation
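    The arithmetic at the heart of the argument is just Henry's law, x_i = p_i / k_H,i: a high partial pressure in the source rock dissolves helium, and a near-zero partial pressure in a shallow reservoir exsolves it. The constants below are illustrative orders of magnitude, not basin-calibrated values.

```python
def dissolved_molfrac(p_partial_mpa, k_henry_mpa):
    """Henry's law: mole fraction in solution x = p_i / k_H,i."""
    return p_partial_mpa / k_henry_mpa

k_he = 14000.0   # Henry constant for helium (MPa per mole fraction), assumed
# Deep source rock: helium partial pressure relatively high, so helium dissolves.
x_source = dissolved_molfrac(5.0, k_he)
# Near a shallow gas reservoir the helium partial pressure is almost nil,
# so the water is strongly oversaturated and helium comes out of solution.
x_reservoir = dissolved_molfrac(0.01, k_he)
print(f"dissolved He mole fraction: source {x_source:.2e}, reservoir {x_reservoir:.2e}")
```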

  16. Unsteady Vibration Aerodynamic Modeling and Evaluation of Dynamic Derivatives Using Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2015-01-01

    Unsteady aerodynamic system modeling is widely used to solve the dynamic stability problems encountered in aircraft design. In this paper, a single degree-of-freedom (SDF) vibration model and a forced simple harmonic motion (SHM) model for dynamic derivative prediction are developed on the basis of a modified Etkin model. In light of the characteristics of the SDF time-domain solution, the free-vibration identification methods for dynamic stability parameters are extended and applied to time-domain numerical simulations of blunted-cone calibration model examples. The dynamic stability parameters identified numerically deviate by no more than 0.15% from those obtained by experimental simulation, confirming the correctness of the SDF vibration model. The acceleration derivatives, rotary derivatives, and combination derivatives of the Army-Navy Spinner Rocket are numerically identified by solving the unsteady N-S equations for different SHM patterns. Comparison with the experimental results of the Army Ballistic Research Laboratories confirms the correctness of the SHM model and the dynamic derivative identification. The results from forced SHM are better than those from the slender-body theory of engineering approximation. The SDF vibration model and the SHM model for dynamic stability parameters provide a solution to the dynamic stability problems encountered in aircraft design.

  17. On (in)stabilities of perturbations in mimetic models with higher derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yunlong; Shen, Liuyuan [Department of Physics, Nanjing University, Nanjing 210093 (China); Mou, Yicen; Li, Mingzhe, E-mail: zylakx@163.com, E-mail: sly12271103@163.com, E-mail: moinch@mail.ustc.edu.cn, E-mail: limz@ustc.edu.cn [Interdisciplinary Center for Theoretical Study, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-08-01

    Usually, when applying the mimetic model to the early universe, higher-derivative terms are needed to make the mimetic field dynamical. However, such models suffer from ghost and/or gradient instabilities, and simple extensions cannot cure this pathology. We point out in this paper that it is possible to overcome this difficulty by considering direct couplings of the higher derivatives of the mimetic field to the curvature of the spacetime.

  18. Chitosan derivatives targeting lipid bilayers: Synthesis, biological activity and interaction with model membranes.

    Science.gov (United States)

    Martins, Danubia Batista; Nasário, Fábio Domingues; Silva-Gonçalves, Laiz Costa; de Oliveira Tiera, Vera Aparecida; Arcisio-Miranda, Manoel; Tiera, Marcio José; Dos Santos Cabrera, Marcia Perez

    2018-02-01

    The antimicrobial activity of chitosan and its derivatives against human and plant pathogens represents a high-value prospective market. Here, two low-molecular-weight derivatives, endowed with hydrophobic and cationic character at different ratios, were synthesized and characterized. They exhibit antimicrobial activity and increased performance relative to the intermediate and starting compounds. However, only the derivative with the higher cationic character showed cytotoxicity towards human cervical carcinoma cells. Considering cell membranes as targets, the mode of action was investigated through interactions with model lipid vesicles mimicking bacterial, tumoral and erythrocyte membranes. Intense lytic activity and binding are demonstrated for both derivatives in anionic bilayers. The less charged compound exhibits slightly improved selectivity towards bacterial model membranes, suggesting that balancing its hydrophobic/hydrophilic character may improve efficiency. Observing the aggregation of vesicles, we hypothesize that the "charge cluster mechanism", ascribed to some antimicrobial peptides, could apply to these chitosan derivatives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A critical view on temperature modelling for application in weather derivatives markets

    International Nuclear Information System (INIS)

    Šaltytė Benth, Jūratė; Benth, Fred Espen

    2012-01-01

    In this paper we present a stochastic model for daily average temperature. The model contains seasonality, a low-order autoregressive component and a variance describing the heteroskedastic residuals. The model is estimated on daily average temperature records from Stockholm (Sweden). By comparing the proposed model with the popular model of Campbell and Diebold (2005), we point out some important issues to be addressed when modelling the temperature for application in the weather derivatives market.
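    A minimal version of such a temperature model, deterministic seasonality plus an AR(1) residual, can be fitted by least squares; the seasonal heteroskedastic variance of the full model is omitted here. The data are synthetic, not the Stockholm record.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for ten years of daily average temperatures (deg C).
t = np.arange(3650)
temps = (8.0 + 10.0 * np.sin(2.0 * np.pi * (t - 110) / 365.25)
         + rng.normal(0.0, 2.5, t.size))

# 1) Fit the deterministic seasonality by least squares on sin/cos terms.
X = np.column_stack([np.ones(t.size),
                     np.sin(2.0 * np.pi * t / 365.25),
                     np.cos(2.0 * np.pi * t / 365.25)])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ beta

# 2) Fit an AR(1) to the deseasonalized residuals.
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
eps = resid[1:] - phi * resid[:-1]
print(f"AR(1) coefficient: {phi:.3f}, residual std: {eps.std():.3f}")
```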

  20. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  1. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    Directory of Open Access Journals (Sweden)

    P. Seibert

    2004-01-01

    The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated and presented with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is smaller than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for the release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
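    The practical content of a linear source-receptor relationship is a matrix: c = M s, where s collects the source strengths and each entry of M is a transport coefficient. A backward run fills a whole row of M (one run per receptor) while a forward run fills a column (one run per source), which is why the backward mode wins when receptors are few. A toy sketch with a made-up kernel standing in for the LPDM:

```python
import numpy as np

# Toy 1-D "dispersion kernel": contribution of a unit source at x_s to a
# receptor at x_r (a stand-in for the LPDM transport calculation).
def kernel(x_r, x_s, sigma=2.0):
    return np.exp(-0.5 * ((x_r - x_s) / sigma) ** 2)

sources = np.linspace(0.0, 20.0, 11)   # 11 candidate source locations
receptors = np.array([4.0, 13.0])      # 2 measurement sites

# Backward mode: one run per *receptor* fills a whole row of M, which is
# cheaper here because receptors (2) are fewer than sources (11).
M = np.array([[kernel(r, s) for s in sources] for r in receptors])

s = np.zeros(11); s[3] = 5.0           # a single active source of strength 5
print("receptor concentrations:", M @ s)
```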

  2. Modelling and optimisation of fs laser-produced Kα sources

    International Nuclear Information System (INIS)

    Gibbon, P.; Masek, M.; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; Linde, D. von der

    2009-01-01

    Recent theoretical and numerical studies of laser-driven femtosecond Kα sources are presented, aimed at understanding a recent experimental campaign to optimize emission from thin coating targets. Particular attention is given to control over the laser-plasma interaction conditions defined by the interplay between a controlled prepulse and the angle of incidence. It is found that the X-ray efficiency for poor-contrast laser systems, in which a large preplasma is suspected, can be enhanced by using a near-normal incidence geometry even at high laser intensities. With high laser contrast, similar efficiencies can be achieved by going to larger incidence angles, but only at the expense of a larger X-ray spot size. New developments in three-dimensional modelling are also reported, with the goal of handling interactions with geometrically complex targets and finite resistivity. (orig.)

  3. Modeling in control of the Advanced Light Source

    International Nuclear Information System (INIS)

    Bengtsson, J.; Forest, E.; Nishimura, H.; Schachinger, L.

    1991-05-01

    A software system for control of accelerator physics parameters of the Advanced Light Source (ALS) is being designed and implemented at LBL. Some of the parameters we wish to control are tunes, chromaticities, and closed orbit distortions as well as linear lattice distortions and, possibly, amplitude- and momentum-dependent tune shifts. In all our applications, the goal is to allow the user to adjust physics parameters of the machine, instead of turning knobs that control magnets directly. This control will take place via a highly graphical user interface, with both a model appropriate to the application and any correction algorithm running alongside as separate processes. Many of these applications will run on a Unix workstation, separate from the controls system, but communicating with the hardware database via Remote Procedure Calls (RPCs)

  4. A behavioral choice model of the use of car-sharing and ride-sourcing services

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Felipe F.; Lavieri, Patrícia S.; Garikapati, Venu M.; Astroza, Sebastian; Pendyala, Ram M.; Bhat, Chandra R.

    2017-07-26

    There are a number of disruptive mobility services that are increasingly finding their way into the marketplace. Two key examples of such services are car-sharing services and ride-sourcing services. In an effort to better understand the influence of various exogenous socio-economic and demographic variables on the frequency of use of ride-sourcing and car-sharing services, this paper presents a bivariate ordered probit model estimated on a survey data set derived from the 2014-2015 Puget Sound Regional Travel Study. Model estimation results show that users of these services tend to be young, well-educated, higher-income, working individuals residing in higher-density areas. There are significant interaction effects reflecting the influence of children and the built environment on disruptive mobility service usage. The model developed in this paper provides key insights into factors affecting market penetration of these services, and can be integrated in larger travel forecasting model systems to better predict the adoption and use of mobility-on-demand services.
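    For flavour, the sketch below estimates a univariate ordered probit by maximum likelihood on synthetic data; the paper's model is bivariate (jointly ordered outcomes for car-sharing and ride-sourcing frequency), so treat this as the single-equation building block only. All data and coefficients are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Synthetic data: frequency-of-use category (0 = never ... 3 = weekly) driven
# by two standardized covariates (say, age and residential density).
n = 2000
X = rng.standard_normal((n, 2))
y_star = X @ np.array([-0.8, 0.6]) + rng.standard_normal(n)   # latent propensity
cuts_true = np.array([-0.5, 0.5, 1.5])
y = np.searchsorted(cuts_true, y_star)

def negloglik(params):
    beta, cuts = params[:2], np.sort(params[2:])
    xb = X @ beta
    edges = np.concatenate([[-np.inf], cuts, [np.inf]])
    p = norm.cdf(edges[y + 1] - xb) - norm.cdf(edges[y] - xb)
    return -np.log(np.clip(p, 1e-12, None)).sum()

res = minimize(negloglik, x0=np.array([0.0, 0.0, -1.0, 0.0, 1.0]), method="BFGS")
print("estimated beta:", np.round(res.x[:2], 2),
      "cutpoints:", np.round(np.sort(res.x[2:]), 2))
```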

  5. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine whether these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform, with NASA challenges from each Center posted for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  6. CHAOS-2-a geomagnetic field model derived from one decade of continuous satellite data

    DEFF Research Database (Denmark)

    Olsen, Nils; Mandea, M.; Sabaka, T.J.

    2009-01-01

    We have derived a model of the near-Earth's magnetic field using more than 10 yr of high-precision geomagnetic measurements from the three satellites Orsted, CHAMP and SAC-C. This model is an update of the two previous models, CHAOS (Olsen et al. 2006) and xCHAOS (Olsen & Mandea 2008). Data...... by minimizing the second time derivative of the squared magnetic field intensity at the core-mantle boundary. The CHAOS-2 model describes rapid time changes, as monitored by the ground magnetic observatories, much better than its predecessors....

  7. Analyzing Korean consumers’ latent preferences for electricity generation sources with a hierarchical Bayesian logit model in a discrete choice experiment

    International Nuclear Information System (INIS)

    Byun, Hyunsuk; Lee, Chul-Yong

    2017-01-01

    Generally, consumers use electricity without considering the source from which it was generated. Since different energy sources exert varying effects on society, it is necessary to analyze consumers’ latent preferences for electricity generation sources. The present study estimates Korean consumers’ marginal utility, and an appropriate generation mix is derived, using a hierarchical Bayesian logit model in a discrete choice experiment. The results show that consumers consider the danger posed by the source of electricity to be the most important factor among the effects of electricity generation sources. Additionally, Korean consumers wish to reduce the contribution of nuclear power from the existing 32% to 11%, and to increase that of renewable energy from the existing 4% to 32%. - Highlights: • We derive an electricity mix reflecting Korean consumers’ latent preferences. • We use the discrete choice experiment and hierarchical Bayesian logit model. • The danger posed by the generation source is the most important attribute. • The consumers wish to increase the renewable energy proportion from 4.3% to 32.8%. • Korea's cost-oriented energy supply policy and consumers’ preference differ markedly.
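    The underlying choice machinery is easy to sketch: linear utilities in the attributes of each electricity mix, and logit choice probabilities. The attributes and coefficients below are invented; the hierarchical Bayesian part of the paper additionally places a population distribution over the coefficients, which is only indicated in a comment.

```python
import numpy as np

# Discrete choice sketch: each alternative (electricity mix) is described by
# attributes, e.g. [monthly cost, danger index, CO2 index]; utility is linear.
def choice_probs(attrs, beta):
    v = attrs @ beta
    e = np.exp(v - v.max())          # subtract max for numerical stability
    return e / e.sum()

attrs = np.array([[50.0, 0.8, 0.3],   # nuclear-heavy mix (illustrative)
                  [65.0, 0.2, 0.2],   # renewable-heavy mix
                  [55.0, 0.4, 0.7]])  # fossil-heavy mix
beta = np.array([-0.05, -3.0, -1.0])  # marginal (dis)utilities, assumed

print(np.round(choice_probs(attrs, beta), 3))
# A hierarchical Bayesian treatment would draw individual-level coefficients
# from a population distribution, e.g. beta_i ~ N(mu, Sigma), and infer mu, Sigma.
```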

  8. OSGM02: A new model for converting GPS-derived heights to local height datums in Great Britain and Ireland

    DEFF Research Database (Denmark)

    Iliffe, J.C.; Ziebart, M.; Cross, P.A.

    2003-01-01

    The background to the recent computation of a new vertical datum model for the British Isles (OSGM02) is described. After giving a brief description of the computational techniques and the data sets used for the derivation of the gravimetric geoid, the paper focuses on the fitting of this surface... to the GPS and levelling networks in the various regions of the British Isles in such a way that it can be used in conjunction with GPS to form a replacement for the existing system of bench marks. The error sources induced in this procedure are discussed, and the theoretical basis given for the fitting

  9. On a business cycle model with fractional derivative under narrow-band random excitation

    International Nuclear Information System (INIS)

    Lin, Zifei; Li, Jiaorui; Li, Shuang

    2016-01-01

    This paper analyzes the dynamics of a business cycle model with a fractional derivative of order α (0 < α < 1) subject to narrow-band random excitation, in which the fractional derivative describes the memory property of the economic variables. Stochastic dynamical system concepts are integrated into the business cycle model to understand economic fluctuations. First, the method of multiple scales is applied to derive an approximate analytical solution of the model. Second, the effect of economic policy with a fractional derivative on the amplitude of the economic fluctuation, and its effect on the stationary probability density, are studied. The results show that macroeconomic regulation and control can lower the stable amplitude of the economic fluctuation, although during the approach to the equilibrium state the amplitude is magnified; macroeconomic regulation and control also improves the stability of the equilibrium state. Third, how external stochastic perturbations affect the dynamics of the economic system is investigated.

  10. Development of an emissions inventory model for mobile sources

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A W; Broderick, B M [Trinity College, Dublin (Ireland). Dept. of Civil, Structural and Environmental Engineering

    2000-07-01

    Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of a wide range of pollutants. A common characteristic of most of these strategies is a requirement for accurate data on both the quantity and the spatial distribution of emissions to air, in the form of an atmospheric emissions inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for a wide range of vehicle types. The majority of inventories are compiled using 'passive' data from either surveys or transportation models, and by their very nature they tend to be out of date by the time they are compiled. Current trends are towards integrating urban traffic control systems and assessments of the environmental effects of motor vehicles. In this paper, a methodology for estimating emissions from mobile sources using real-time data is described. This methodology is used to calculate emissions of sulphur dioxide (SO2), oxides of nitrogen (NOx), carbon monoxide (CO), volatile organic compounds (VOC), particulate matter less than 10 μm aerodynamic diameter (PM10), 1,3-butadiene (C4H6) and benzene (C6H6) at a test junction in Dublin. Traffic data, which are required on a street-by-street basis, are obtained from induction loops and closed-circuit television (CCTV) as well as from statistical data. The observed traffic data are compared to simulated data from a travel demand model. As a test case, an emissions inventory is compiled for a heavily trafficked signalized junction in an urban environment using the measured data. In order that the model may be validated, the predicted emissions are employed in a dispersion model along with local meteorological conditions and site geometry. The resultant pollutant concentrations are compared to average ambient kerbside conditions.
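    The core inventory arithmetic is a sum of vehicle counts times distance times emission factors per street link. A minimal sketch with placeholder factors (not measured Dublin values):

```python
# Minimal emissions-inventory arithmetic: E = sum over vehicle classes of
# (vehicle count) x (link length) x (emission factor). Factors are placeholders.

EMISSION_FACTORS_G_PER_KM = {          # pollutant -> {vehicle class -> g/km}
    "NOx": {"car": 0.4, "bus": 6.0, "hgv": 5.0},
    "CO":  {"car": 1.5, "bus": 2.5, "hgv": 2.0},
}

def link_emissions(counts, link_km):
    """Counts from induction loops/CCTV for one street link, e.g. {'car': 1200}."""
    return {
        pollutant: sum(counts.get(v, 0) * link_km * ef for v, ef in factors.items())
        for pollutant, factors in EMISSION_FACTORS_G_PER_KM.items()
    }

# Grams emitted on a 0.5 km link during one counting interval:
print(link_emissions({"car": 1200, "bus": 40, "hgv": 60}, link_km=0.5))
```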

  11. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    Science.gov (United States)

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the charge simulation method in electromagnetic theory, as well as by established discrete-source-based modeling, we report here an improved explicit model for a semi-infinite geometry, referred to as the "virtual source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light of the standard DA is approximated by multiple isotropic point sources (the VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established methods is demonstrated by comparison with Monte Carlo simulations over wide ranges of source-detector separation and medium optical properties.
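    A minimal sketch of the virtual-source construction: replace the collimated beam by a few isotropic point sources at depths z_k with weights w_k, each paired with a negative image source above the extrapolated boundary. The optical properties, depths and weights below are hand-picked assumptions, not the optimized closed-form parameters of the paper.

```python
import numpy as np

mu_a, mu_s_prime = 0.5, 10.0          # absorption / reduced scattering (1/cm), assumed
D = 1.0 / (3.0 * (mu_a + mu_s_prime)) # diffusion coefficient
mu_eff = np.sqrt(mu_a / D)
z_b = 2.0 * D                          # extrapolated-boundary distance (matched index)

def fluence(rho, z, z_src, weight):
    """Fluence at (rho, z) from one virtual source plus its negative image."""
    r1 = np.hypot(rho, z - z_src)
    r2 = np.hypot(rho, z + z_src + 2.0 * z_b)
    return weight * (np.exp(-mu_eff * r1) / r1
                     - np.exp(-mu_eff * r2) / r2) / (4.0 * np.pi * D)

# Two virtual sources along the incident direction (depths and weights assumed).
vs = [(1.0 / mu_s_prime, 0.7), (3.0 / mu_s_prime, 0.3)]
rho = np.linspace(0.05, 1.0, 5)        # source-detector separations (cm)
phi = sum(fluence(rho, 0.0, z, w) for z, w in vs)
print(np.round(phi, 4))
```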

  12. Source term identification in atmospheric modelling via sparse optimization

    Science.gov (United States)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, the discrepancy is regularized by adding additional terms; such terms may include Tikhonov regularization, a distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling; one such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the
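    A compact way to impose both sparsity and nonnegativity is a projected ISTA iteration for min 0.5*||Ax - b||^2 + lam*||x||_1 subject to x >= 0. The sketch below uses a synthetic Gaussian sensing matrix in place of a real dispersion model.

```python
import numpy as np

def nonneg_ista(A, b, lam=0.1, n_iter=500):
    """Projected ISTA for the nonnegative LASSO."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        # For x >= 0 the soft-threshold and the projection combine into one step.
        x = np.maximum(x - (grad + lam) / L, 0.0)
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((30, 100))          # observations x (source, time) cells
x_true = np.zeros(100)
x_true[[10, 47]] = [3.0, 1.5]               # sparse true release profile
b = A @ x_true + rng.normal(0.0, 0.01, 30)

x_hat = nonneg_ista(A, b)
print("recovered support:", np.nonzero(x_hat > 0.1)[0])
```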

  13. Autologous adipocyte derived stem cells favour healing in a minipig model of cutaneous radiation syndrome.

    Directory of Open Access Journals (Sweden)

    Fabien Forcheron

    Cutaneous radiation syndrome (CRS) is the delayed consequence of localized skin exposure to high doses of ionizing radiation. Here we examined for the first time, in a large animal model, the therapeutic potential of autologous adipose tissue-derived stroma cells (ASCs). For the experiments, Göttingen minipigs were locally gamma-irradiated using a 60Co source at a dose of 50 Gy and grafted (n = 5) or not (n = 8). ASCs were cultured in MEM-alpha with 10% fetal calf serum and basic fibroblast growth factor (2 ng.mL^-1) and, post irradiation, were intradermally injected into the exposed area on days 25, 46 and 67, and finally between days 95 and 115 (50 × 10^6 ASCs each time). All controls exhibited a clinical evolution with final necrosis (day 91). In grafted pigs, ultimate wound healing was observed in four out of five grafted animals (day 130 ± 28). Immunohistological analysis of cytokeratin expression showed a complete epidermis recovery. Grafted ASCs accumulated at the dermis/subcutis barrier, where they attracted numerous immune cells, and in one pig even an increased vasculature. Globally, this study suggests that local injection of ASCs may represent a useful strategy to mitigate CRS.

  14. The Protein Content of Extracellular Vesicles Derived from Expanded Human Umbilical Cord Blood-Derived CD133+ and Human Bone Marrow-Derived Mesenchymal Stem Cells Partially Explains Why both Sources are Advantageous for Regenerative Medicine.

    Science.gov (United States)

    Angulski, Addeli B B; Capriglione, Luiz G; Batista, Michel; Marcon, Bruna H; Senegaglia, Alexandra C; Stimamiglio, Marco A; Correa, Alejandro

    2017-04-01

    Adult stem cells have beneficial effects when exposed to damaged tissue due, at least in part, to their paracrine activity, which includes soluble factors and extracellular vesicles (EVs). Given the multiplicity of signals carried by these vesicles through the horizontal transfer of functional molecules, human mesenchymal stem cell (hMSC) and CD133+ cell-derived EVs have been tested in various disease models and shown to recover damaged tissues. In this study, we profiled the protein content of EVs derived from expanded human CD133+ cells and bone marrow-derived hMSCs with the intention of better understanding the functions performed by these vesicles/cells and delineating the most appropriate use of each EV in future therapeutic procedures. Using LC-MS/MS analysis, we identified 623 proteins for expanded CD133+-EVs and 797 proteins for hMSC-EVs. Although the EVs from both origins were qualitatively similar, when protein abundance was considered, hMSC-EVs and CD133+-EVs were different. Gene Ontology (GO) enrichment analysis of CD133+-EVs revealed proteins involved in a variety of angiogenesis-related functions, as well as proteins related to the cytoskeleton and highly implicated in cell motility and cellular activation. In contrast, when overrepresented proteins in hMSC-EVs were analyzed, a GO cluster of immune-response-related genes involved with immune-response-regulating factors acting on phagocytosis and innate immunity was identified. Together our data demonstrate that, from the point of view of protein content, expanded CD133+-EVs and hMSC-EVs are in part similar but also sufficiently different to reflect the main beneficial paracrine effects widely reported in pre-clinical studies using expanded CD133+ cells and/or hBM-MSCs.

  15. Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali; Fomel, Sergey; Wu, Zedong

    2014-01-01

    or backward in time. This approach has the potential for generating accurate images free of the artifacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth

  16. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2013-01-01

    Power electronics based MicroGrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel connected three-phase VSIs are derived. The proposed voltage and current inner control loops and the mat...... control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MicroGrid to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control...
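    The primary control referred to above is conventionally the P-f / Q-V droop, f = f0 - m_p*P and V = V0 - n_q*Q, with secondary control trimming the resulting offsets back to nominal. The gains and setpoints in the sketch are illustrative, not the paper's tuned values.

```python
# Conventional droop control for a VSI in an islanded microgrid (illustrative).
F0, V0 = 50.0, 230.0          # nominal frequency (Hz) and voltage (V)
m_p, n_q = 1e-4, 5e-4         # droop gains (Hz/W, V/var), assumed

def droop(p_w, q_var, df_secondary=0.0, dv_secondary=0.0):
    """Primary P-f / Q-V droop plus additive secondary-control corrections."""
    f = F0 - m_p * p_w + df_secondary
    v = V0 - n_q * q_var + dv_secondary
    return f, v

# Two parallel VSIs sharing a 10 kW / 2 kvar load equally:
print(droop(5000.0, 1000.0))                              # -> (49.5, 229.5)
# Secondary control restores nominal values by feeding back the deviation:
print(droop(5000.0, 1000.0, df_secondary=0.5, dv_secondary=0.5))
```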

  17. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 p. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  18. PHOTOREACTIVITY OF CHROMOPHORIC DISSOLVED ORGANIC MATTER (CDOM) DERIVED FROM DECOMPOSITION OF VARIOUS VASCULAR PLANT AND ALGAL SOURCES

    Science.gov (United States)

    Chromophoric dissolved organic matter (CDOM) in aquatic environments is derived from the microbial decomposition of terrestrial and microbial organic matter. Here we present results of studies of the spectral properties and photoreactivity of the CDOM derived from several organi...

  19. Model of electron contamination sources for radiotherapy with photon beams

    International Nuclear Information System (INIS)

    Gonzalez Infantes, W.; Lallena Rojo, A. M.; Anguiano Millan, M.

    2013-01-01

    A model of virtual electron sources is proposed that makes it possible to reproduce the contamination sources from the input parameters of the patient representation. Comparing depth-dose values and profiles calculated from the full simulation of the accelerator heads with the values calculated using the source model, it is found that the model is capable of reproducing depth-dose distributions and profiles. (Author)

  20. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  1. Modelling vertical error in LiDAR-derived digital elevation models

    Science.gov (United States)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points, plus the error arising from the interpolation process. The SDE must be calculated beforehand from a suitable number of check points located in open terrain, and it is assumed that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almeria province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located in the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856). The Bristol dataset showed a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings
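    The gridding step described above, IDW with the local support of the five closest neighbours, is short enough to write out; the terrain data here are synthetic stand-ins for LiDAR ground points.

```python
import numpy as np

def idw(x_query, y_query, pts, vals, k=5, power=2.0):
    """Inverse distance weighting over the k closest points (illustrative)."""
    d = np.hypot(pts[:, 0] - x_query, pts[:, 1] - y_query)
    nearest = np.argsort(d)[:k]
    d_k = np.maximum(d[nearest], 1e-12)   # avoid division by zero at a data point
    w = 1.0 / d_k ** power
    return np.sum(w * vals[nearest]) / np.sum(w)

rng = np.random.default_rng(8)
pts = rng.random((200, 2)) * 100.0                          # scattered points (m)
vals = 50.0 + 0.2 * pts[:, 0] + rng.normal(0, 0.1, 200)     # synthetic heights (m)
print(f"interpolated z at (50, 50): {idw(50.0, 50.0, pts, vals):.2f}")
```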

  2. Comparison of different statistical modelling approaches for deriving spatial air temperature patterns in an urban environment

    Science.gov (United States)

    Straub, Annette; Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Geruschkat, Uta; Jacobeit, Jucundus; Kühlbach, Benjamin; Kusch, Thomas; Richter, Katja; Schneider, Alexandra; Umminger, Robin; Wolf, Kathrin

    2017-04-01

    Frequently spatial variations of air temperature of considerable magnitude occur within urban areas. They correspond to varying land use/land cover characteristics and vary with season, time of day and synoptic conditions. These temperature differences have an impact on human health and comfort directly by inducing thermal stress as well as indirectly by means of affecting air quality. Therefore, knowledge of the spatial patterns of air temperature in cities and the factors causing them is of great importance, e.g. for urban planners. A multitude of studies have shown statistical modelling to be a suitable tool for generating spatial air temperature patterns. This contribution presents a comparison of different statistical modelling approaches for deriving spatial air temperature patterns in the urban environment of Augsburg, Southern Germany. In Augsburg there exists a measurement network for air temperature and humidity currently comprising 48 stations in the city and its rural surroundings (jointly operated by the Institute of Epidemiology II, Helmholtz Zentrum München, German Research Center for Environmental Health and the Institute of Geography, University of Augsburg). Using different datasets for land surface characteristics (Open Street Map, Urban Atlas), area percentages of different types of land cover were calculated for quadratic buffer zones of different size (25, 50, 100, 250, 500 m) around the stations as well as for source regions of advective air flow, and used as predictors together with additional variables such as sky view factor, ground level and distance from the city centre. Multiple Linear Regression and Random Forest models for different situations taking into account season, time of day and weather condition were applied utilizing selected subsets of these predictors in order to model spatial distributions of mean hourly and daily air temperature deviations from a rural reference station. Furthermore, the different model setups were
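
    As a rough sketch of the kind of model comparison described here (not the authors' implementation; the predictors and response are synthetic stand-ins for the station data), a linear model and a Random Forest can be compared on station-level temperature deviations as follows:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_stations = 48                      # size of the Augsburg network
        # Hypothetical predictors, e.g. % sealed surface in three buffer sizes + sky view factor
        X = rng.random((n_stations, 4))
        # Synthetic deviations from the rural reference station
        y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.standard_normal(n_stations)

        for model in (LinearRegression(),
                      RandomForestRegressor(n_estimators=200, random_state=0)):
            score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(type(model).__name__, round(score, 2))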

  3. Deriving the expected utility of a predictive model when the utilities are uncertain.

    Science.gov (United States)

    Cooper, Gregory F; Visweswaran, Shyam

    2005-01-01

    Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
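
    One direct way to realize "expected utility under utility uncertainty" is to average the decision-theoretic value of the model's predictions over a distribution placed on the utilities. The sketch below is a generic Monte Carlo illustration under assumed Beta priors, not the authors' procedure; every number in it is invented:

        import numpy as np

        rng = np.random.default_rng(1)

        # Confusion probabilities of a hypothetical diagnostic model
        p_tp, p_fp, p_fn, p_tn = 0.30, 0.05, 0.10, 0.55

        # Utilities on a 0-1 scale; U(TP) and U(TN) fixed, U(FP) and U(FN) uncertain
        u_tp, u_tn = 0.8, 1.0
        u_fp = rng.beta(8, 2, 10000)   # assumed prior: a false positive is mildly harmful
        u_fn = rng.beta(2, 8, 10000)   # assumed prior: a false negative is severely harmful

        eu = p_tp * u_tp + p_tn * u_tn + p_fp * u_fp + p_fn * u_fn
        print("expected utility:", eu.mean(), "+/-", eu.std())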

  4. Derivation of a well-posed and multidimensional drift-flux model for boiling flows

    International Nuclear Information System (INIS)

    Gregoire, O.; Martin, M.

    2005-01-01

    In this note, we derive a multidimensional drift-flux model for boiling flows. Within this framework, the distribution parameter is no longer a scalar but a tensor that might account for the medium anisotropy and the flow regime. A new model for the drift-velocity vector is also derived. It intrinsically takes into account the effect of the friction pressure loss on the buoyancy force. On the other hand, we show that most drift-flux models might exhibit a singularity for large void fraction. In order to avoid this singularity, a remedy based on a simplified three field approach is proposed. (authors)
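
    For orientation, the scalar one-dimensional closure that such a model generalizes is usually written as follows (standard textbook form, not quoted from the paper):

        \[
        u_g = C_0\, j + \bar v_{gj},
        \qquad
        u_g - u_l = \frac{(C_0 - 1)\, j + \bar v_{gj}}{1 - \alpha},
        \]

    where u_g and u_l are the gas and liquid velocities, j the mixture volumetric flux, C_0 the distribution parameter and \bar v_{gj} the drift velocity. The slip velocity on the right diverges as the void fraction α → 1 unless the numerator vanishes, which illustrates the kind of large-void-fraction singularity the note addresses.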

  5. An extended car-following model considering the acceleration derivative in some typical traffic environments

    Science.gov (United States)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

    Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the derivative of the vehicle's acceleration. The stability condition is given by applying control theory. For some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models in the starting process, the stopping process and under sudden braking. Meanwhile, traffic jams occur more easily when the coefficient of the vehicle's acceleration derivative increases, as shown by the space-time evolution of the flow. The results confirm that the vehicle's acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
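
    A hedged sketch of this type of extension (the paper's exact symbols and coefficients may differ): starting from a full velocity difference and acceleration form, a term in the time derivative of the acceleration (the jerk) of the preceding vehicle is added,

        \[
        \frac{\mathrm{d}v_n(t)}{\mathrm{d}t}
        = \kappa \big[ V(\Delta x_n(t)) - v_n(t) \big]
        + \lambda\, \Delta v_n(t)
        + k\, a_p(t)
        + \gamma\, \dot a_p(t),
        \]

    where Δx_n and Δv_n are the headway and velocity difference to the preceding vehicle, a_p its acceleration, V(·) the optimal velocity function, and γ the coefficient whose growth is reported above to promote jamming.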

  6. PDX-MI: Minimal Information for Patient-Derived Tumor Xenograft Models

    NARCIS (Netherlands)

    Meehan, Terrence F.; Conte, Nathalie; Goldstein, Theodore; Inghirami, Giorgio; Murakami, Mark A.; Brabetz, Sebastian; Gu, Zhiping; Wiser, Jeffrey A.; Dunn, Patrick; Begley, Dale A.; Krupke, Debra M.; Bertotti, Andrea; Bruna, Alejandra; Brush, Matthew H.; Byrne, Annette T.; Caldas, Carlos; Christie, Amanda L.; Clark, Dominic A.; Dowst, Heidi; Dry, Jonathan R.; Doroshow, James H.; Duchamp, Olivier; Evrard, Yvonne A.; Ferretti, Stephane; Frese, Kristopher K.; Goodwin, Neal C.; Greenawalt, Danielle; Haendel, Melissa A.; Hermans, Els; Houghton, Peter J.; Jonkers, Jos; Kemper, Kristel; Khor, Tin O.; Lewis, Michael T.; Lloyd, K. C. Kent; Mason, Jeremy; Medico, Enzo; Neuhauser, Steven B.; Olson, James M.; Peeper, Daniel S.; Rueda, Oscar M.; Seong, Je Kyung; Trusolino, Livio; Vinolo, Emilie; Wechsler-Reya, Robert J.; Weinstock, David M.; Welm, Alana; Weroha, S. John; Amant, Frédéric; Pfister, Stefan M.; Kool, Marcel; Parkinson, Helen; Butte, Atul J.; Bult, Carol J.

    2017-01-01

    Patient-derived tumor xenograft (PDX) mouse models have emerged as an important oncology research platform to study tumor evolution, mechanisms of drug response and resistance, and tailoring chemotherapeutic approaches for individual patients. The lack of robust standards for reporting on PDX models

  7. Patient-Derived Xenograft Models : An Emerging Platform for Translational Cancer Research

    NARCIS (Netherlands)

    Hidalgo, Manuel; Amant, Frederic; Biankin, Andrew V.; Budinska, Eva; Byrne, Annette T.; Caldas, Carlos; Clarke, Robert B.; de Jong, Steven; Jonkers, Jos; Maelandsmo, Gunhild Mari; Roman-Roman, Sergio; Seoane, Joan; Trusolino, Livio; Villanueva, Alberto

    Recently, there has been an increasing interest in the development and characterization of patient-derived tumor xenograft (PDX) models for cancer research. PDX models mostly retain the principal histologic and genetic characteristics of their donor tumor and remain stable across passages. These

  8. Lagrangian derivation of the two coupled field equations in the Janus cosmological model

    Science.gov (United States)

    Petit, Jean-Pierre; D'Agostini, G.

    2015-05-01

    After a review citing the results obtained in previous articles introducing the Janus Cosmological Model, which consists of a set of two coupled field equations in which one metric refers to the positive masses and the other to the negative masses, and which explains the observed cosmic acceleration and the nature of dark energy, we present the Lagrangian derivation of the model.

  9. A direct derivation of the exact Fisher information matrix of Gaussian vector state space models

    NARCIS (Netherlands)

    Klein, A.A.B.; Neudecker, H.

    2000-01-01

    This paper deals with a direct derivation of Fisher's information matrix of vector state space models for the general case, by which is meant the establishment of the matrix as a whole and not element by element. The method to be used is matrix differentiation, see [4]. We assume the model to be

  10. Comparison of direct and indirect radiation effects on osteoclast formation from progenitor cells derived from different hemopoietic sources.

    Science.gov (United States)

    Scheven, B A; Wassenaar, A M; Kawilarang-de Haas, E W; Nijweide, P J

    1987-07-01

    Hemopoietic stem and progenitor cells from different sources differ in radiosensitivity. Recently, we have demonstrated that the multinucleated cell responsible for bone resorption and marrow cavity formation, the osteoclast, is in fact of hemopoietic lineage. In this investigation we have studied the radiosensitivity of osteoclast formation from two different hemopoietic tissues: fetal liver and adult bone marrow. Development of osteoclasts from hemopoietic progenitors was induced by coculture of hemopoietic cell populations with fetal mouse long bones depleted of their own osteoclast precursor pool. During culture, osteoclasts developed from the exogenous cell population and invaded the calcified hypertrophic cartilage of the long bone model, thereby giving rise to the formation of a primitive marrow cavity. To analyze the radiosensitivity of osteoclast formation, either the hemopoietic cells or the bone rudiments were irradiated before coculture. Fetal liver cells were found to be less radiosensitive than bone marrow cells. The D0, Dq values and extrapolation numbers were 1.69 Gy, 5.30 Gy, and 24.40 for fetal liver cells and 1.01 Gy, 1.85 Gy, and 6.02 for bone marrow cells. Irradiation of the (pre)osteoclast-free long bone rudiments instead of the hemopoietic sources resulted in a significant inhibition of osteoclast formation at doses of 4 Gy or more. This indirect effect appeared to be more prominent in the cocultures with fetal than with adult hemopoietic cells. Furthermore, radiation doses of 8.0-10.0 Gy indirectly affected the appearance of other cell types (e.g., granulocytes) in the newly formed but underdeveloped marrow cavity. The results indicate that osteoclast progenitors from different hemopoietic sources exhibit a distinct sensitivity to ionizing irradiation. Radiation injury to long bone rudiments disturbs the osteoclast-forming capacity as well as the hemopoietic microenvironment.
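
    The quoted D0, Dq and extrapolation numbers are mutually consistent with the classical multitarget survival model of radiobiology (a standard relation, not restated in the record):

        \[
        S(D) = 1 - \left( 1 - e^{-D/D_0} \right)^{n},
        \qquad
        D_q = D_0 \ln n .
        \]

    For the fetal liver cells, for instance, D_0 ln n = 1.69 × ln 24.40 ≈ 5.4 Gy, which matches the reported Dq of 5.30 Gy to within rounding.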

  11. New Fokker-Planck derivation of heavy gas models for neutron thermalization

    International Nuclear Information System (INIS)

    Larsen, E.W.; Williams, M.M.R.

    1990-01-01

    This paper is concerned with the derivation of new generalized heavy gas models for the infinite medium neutron energy spectrum equation. Our approach is general and can be used to derive improved Fokker-Planck approximations for other types of kinetic equations. In this paper we obtain two distinct heavy gas models, together with estimates for the corresponding errors. The models are shown in a special case to reduce to modified heavy gas models proposed earlier by Corngold (1962). The error estimates show that both of the new models should be more accurate than Corngold's modified heavy gas model, and that the first of the two new models should generally be more accurate than the second. (author)

  12. Modelling surface energy fluxes over a Dehesa ecosystem using a two-source energy balance model.

    Science.gov (United States)

    Andreu, Ana; Kustas, William. P.; Anderson, Martha C.; Carrara, Arnaud; Patrocinio Gonzalez-Dugo, Maria

    2013-04-01

    The Dehesa is the most widespread agroforestry land-use system in Europe, covering more than 3 million hectares in the Iberian Peninsula and Greece (Grove and Rackham, 2001; Papanastasis, 2004). It is an agro-silvo-pastoral ecosystem consisting of widely-spaced oak trees (mostly Quercus ilex L.), combined with crops, pasture and Mediterranean shrubs, and it is recognized as an example of sustainable land use and for its importance in the rural economy (Diaz et al., 1997; Plieninger and Wilbrand, 2001). The ecosystem is influenced by a Mediterranean climate, with recurrent and severe droughts. Over the last decades the Dehesa has faced multiple environmental threats, derived from intensive agricultural use and socio-economic changes, which have caused environmental degradation of the area, namely reduction in tree density and stocking rates, changes in soil properties and hydrological processes and an increase of soil erosion (Coelho et al. 2004; Schnabel and Ferreira, 2004; Montoya 1998; Pulido and Díaz, 2005). Understanding the hydrological, atmospheric and physiological processes that affect the functioning of the ecosystem will improve the management and conservation of the Dehesa. One of the key metrics in assessing ecosystem health, particularly in this water-limited environment, is the capability of monitoring evapotranspiration (ET). Making large-area assessments requires the use of remote sensing. Thermal-based energy balance techniques that distinguish soil/substrate and vegetation contributions to the radiative temperature and radiation/turbulent fluxes have proven to be reliable in such semi-arid sparse canopy-cover landscapes. In particular, the two-source energy balance (TSEB) model of Norman et al. (1995) and Kustas and Norman (1999) has been shown to be robust for a wide range of partially-vegetated landscapes. The TSEB formulation is evaluated at a flux tower site located in central Spain (Majadas del Tietar, Caceres). Its application in this environment is
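
    The core of the TSEB approach referenced here is the partitioning of the directional radiometric temperature between canopy and soil, commonly written as (paraphrased from the Norman et al., 1995 family of formulations, not quoted from this abstract):

        \[
        T_{rad}(\theta) \approx \Big[ f_c(\theta)\, T_c^{4} + \big( 1 - f_c(\theta) \big)\, T_s^{4} \Big]^{1/4},
        \]

    where f_c(θ) is the vegetation fraction seen at view angle θ and T_c and T_s are the canopy and soil temperatures, each associated with its own energy balance that is solved separately.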

  13. Interaction of hematoporphyrin derivative, light, and ionizing radiation in a rat glioma model

    International Nuclear Information System (INIS)

    Kostron, H.; Swartz, M.R.; Miller, D.C.; Martuza, R.L.

    1986-01-01

    The effects of hematoporphyrin derivative, light, and cobalt-60 (60Co) irradiation were studied in a rat glioma model using an in vivo and an in vitro clonogenic assay. There was no effect on tumor growth by visible light or by a single dose of 60Co irradiation at 4 Gy or 8 Gy, whereas 16 Gy inhibited tumor growth to 40% versus the control. Hematoporphyrin derivative alone slightly stimulated growth (P less than 0.1). Light in the presence of 10 mg hematoporphyrin derivative/kg inhibited tumor growth to 32%. 60Co irradiation in the presence of hematoporphyrin derivative produced a significant tumor growth inhibition (P less than 0.02). This growth inhibition was directly related to the concentration of hematoporphyrin derivative. The addition of 60Co to light in the presence of hematoporphyrin derivative produced a greater growth inhibition than light or 60Co irradiation alone. This effect was most pronounced when light was applied 30 minutes before 60Co irradiation. Our experiments in a subcutaneous rat glioma model suggest a radiosensitizing effect of hematoporphyrin derivative. Furthermore, the photodynamic inactivation is enhanced by the addition of 60Co irradiation. These findings may be of importance in planning new treatment modalities in malignant brain tumors.

  14. Variables influencing the use of derivatives in South Africa – the development of a conceptual model

    Directory of Open Access Journals (Sweden)

    Stefan Schwegler

    2011-03-01

    Full Text Available This paper, which is the first in a two-part series, sets out the development of a conceptual model on the variables influencing investors’ decisions to use derivatives in their portfolios. Investor-specific variables include: the investor’s needs, goals and return expectations, the investor’s knowledge of financial markets, familiarity with different asset classes including derivative instruments, and the investor’s level of wealth and level of risk tolerance. Market-specific variables include: the level of volatility, standardisation, regulation and liquidity in a market, the level of information available on derivatives, the transparency of price determination, taxes, brokerage costs and product availability.

  15. Fractional derivatives of constant and variable orders applied to anomalous relaxation models in heat transfer problems

    Directory of Open Access Journals (Sweden)

    Yang Xiao-Jun

    2017-01-01

    Full Text Available In this paper, we address a class of fractional derivatives of constant and variable orders for the first time. Fractional-order relaxation equations of constant and variable orders, in the sense of the Caputo type, are modeled from a mathematical point of view. Comparative results of the anomalous relaxation among the various fractional derivatives are also given. They are very efficient in describing the complex phenomena arising in heat transfer.
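
    For reference, the constant-order Caputo derivative underlying such relaxation equations is conventionally defined, for 0 < α < 1, by

        \[
        {}^{C}D_t^{\alpha} f(t)
        = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-\tau)^{-\alpha} f'(\tau)\, \mathrm{d}\tau,
        \]

    so the fractional relaxation problem {}^{C}D_t^{\alpha} u(t) = -\lambda u(t), u(0) = u_0, has the Mittag-Leffler solution u(t) = u_0 E_\alpha(-\lambda t^\alpha); the variable-order case replaces α by a function α(t). The precise variable-order definition adopted in the paper may differ from this sketch.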

  16. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging

    Science.gov (United States)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.

    2016-10-01

    Non-Gaussian (anomalous) diffusion is widespread in biological tissues where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the

  17. Impacts of supersymmetric higher derivative terms on inflation models in supergravity

    International Nuclear Information System (INIS)

    Aoki, Shuntaro; Yamada, Yusuke

    2015-01-01

    We show the effects of supersymmetric higher derivative terms on inflation models in supergravity. The results show that such terms generically modify the effective kinetic coefficient of the inflaton during inflation if the cut off scale of the higher derivative operators is sufficiently small. In such a case, the η-problem in supergravity does not occur, and we find that the effective potential of the inflaton generically becomes a power type potential with a power smaller than two

  18. Temporal variability in terrestrially-derived sources of particulate organic carbon in the lower Mississippi River and its upper tributaries

    Science.gov (United States)

    Bianchi, Thomas S.; Wysocki, Laura A.; Stewart, Mike; Filley, Timothy R.; McKee, Brent A.

    2007-09-01

    In this study, we examined the temporal changes of terrestrially-derived particulate organic carbon (POC) in the lower Mississippi River (MR) and, to a very limited extent, the upper tributaries (Upper MR, Ohio River, and Missouri River). We used for the first time a combination of lignin-phenols, bulk stable carbon isotopes, and compound-specific isotope analyses (CSIA) to examine POC in the lower MR and upper tributaries. A lack of correlation between POC and lignin phenol abundances (Λ8) was likely due to dilution effects from autochthonous production in the river, which has been shown to be considerably higher than previously expected. The range of δ13C values for p-hydroxycinnamic and ferulic acids in POC in the lower river does support that POM in the lower river has a significant component of C4 in addition to C3 source materials. A strong correlation between δ13C values of p-hydroxycinnamic, ferulic, and vanillyl phenols suggests a consistent input of C3 and C4 carbon to POC lignin, while a lack of correlation between these same phenols and POC bulk δ13C further indicates the considerable role of autochthonous carbon in the lower MR POC budget. Our estimates indicate an annual flux of POC of 9.3 × 10⁸ kg y⁻¹ to the Gulf of Mexico. Total lignin fluxes, based on Λ8 values of POC, were estimated to be 1.2 × 10⁵ kg y⁻¹. If we include the total dissolved organic carbon (DOC) flux (3.1 × 10⁹ kg y⁻¹) reported by [Bianchi T. S., Filley T., Dria K. and Hatcher P. (2004) Temporal variability in sources of dissolved organic carbon in the lower Mississippi River. Geochim. Cosmochim. Acta 68, 959-967.], we get a total organic carbon flux of 4.0 × 10⁹ kg y⁻¹. This represents 0.82% of the annual total organic carbon supplied to the oceans by rivers (4.9 × 10¹¹ kg).
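
    The carbon budget quoted above can be verified in a few lines (all values are copied from the record itself):

        poc = 9.3e8              # particulate organic carbon flux to the Gulf, kg/yr
        doc = 3.1e9              # dissolved organic carbon flux (Bianchi et al., 2004), kg/yr
        total = poc + doc        # total organic carbon flux
        print(total)                     # ~4.0e9 kg/yr, as stated
        print(100 * total / 4.9e11)      # ~0.82% of the global riverine TOC supply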

  19. An equilibrium pricing model for weather derivatives in a multi-commodity setting

    International Nuclear Information System (INIS)

    Lee, Yongheon; Oren, Shmuel S.

    2009-01-01

    Many industries are exposed to weather risk. Weather derivatives can play a key role in hedging and diversifying such risk because the uncertainty in a company's profit function can be correlated to weather conditions, which affect diverse industry sectors differently. Unfortunately, the weather derivatives market is a classical example of an incomplete market that is not amenable to the standard methodologies used for derivative pricing in complete markets. In this paper, we develop an equilibrium pricing model for weather derivatives in a multi-commodity setting. The model is constructed in the context of a stylized economy where agents optimize their hedging portfolios, which include weather derivatives that are issued in a fixed quantity by a financial underwriter. The supply and demand resulting from hedging activities and the supply by the underwriter are combined in an equilibrium pricing model under the assumption that all agents maximize some risk-averse utility function. We analyze the gains due to the inclusion of weather derivatives in hedging portfolios and examine the components of that gain attributable to hedging and to risk sharing. (author)

  20. Benzothiazole, benzotriazole, and their derivates in clothing textiles--a potential source of environmental pollutants and human exposure.

    Science.gov (United States)

    Avagyan, Rozanna; Luongo, Giovanna; Thorsén, Gunnar; Östman, Conny

    2015-04-01

    Textiles play an important role in our daily life, and textile production is one of the oldest industries. In the manufacturing chain from natural and/or synthetic fibers to the final clothing products, the use of many different chemicals is ubiquitous. A lot of research has focused on chemicals in textile wastewater, but the knowledge of the actual content of harmful chemicals in clothes sold on the retail market is limited. In this paper, we have focused on eight benzothiazole and benzotriazole derivatives, compounds rated as high production volume chemicals. Twenty-six clothing samples of various textile materials and colors manufactured in 14 different countries were analyzed using liquid chromatography tandem mass spectrometry. Among the investigated textile products, 11 clothes were for babies, toddlers, and children. Eight of the 11 compounds included in the investigation were detected in the textiles. Benzothiazole was present in 23 of the 26 investigated garments in concentrations ranging from 0.45 to 51 μg/g textile. The garment with the highest concentration of benzothiazole contained a total amount of 8.3 mg of the chemical. The third highest concentration of benzothiazole (22 μg/g) was detected in a baby bodysuit made from "organic cotton" equipped with the "Nordic Ecolabel" ("Svanenmärkt"). It was also found that concentrations of benzothiazoles in general were much higher than those of benzotriazoles. This study indicates that clothing textiles can be a possible route for human exposure to harmful chemicals by skin contact, as well as being a potential source of environmental pollutants via laundering and release to household wastewater.

  1. Use of rat mature adipocyte-derived dedifferentiated fat cells as a cell source for periodontal tissue regeneration

    Directory of Open Access Journals (Sweden)

    Daisuke Akita

    2016-02-01

    Full Text Available Lipid-free fibroblast-like cells, known as dedifferentiated fat (DFAT) cells, can be generated from mature adipocytes with a large single lipid droplet. DFAT cells can re-establish their active proliferation ability and can transdifferentiate into various cell types under appropriate culture conditions. The first objective of this study was to compare the multilineage differentiation potential of DFAT cells with that of adipose-derived stem cells (ASCs), a population of mesenchymal stem cells. We obtained DFAT cells and ASCs from inbred rats and found that rat DFAT cells possess higher osteogenic differentiation potential than rat ASCs. On the other hand, DFAT cells show adipogenic and chondrogenic differentiation potential similar to that of ASCs. The second objective of this study was to assess the regenerative potential of DFAT cells combined with novel solid scaffolds composed of PLGA (poly(d,l-lactic-co-glycolic acid)) on periodontal tissue, and to compare this with the regenerative potential of ASCs combined with PLGA scaffolds. Cultured DFAT cells and ASCs were seeded onto PLGA scaffolds (DFAT/PLGA and ASCs/PLGA) and transplanted into periodontal fenestration defects in rat mandibles. Micro-computed tomography analysis revealed a significantly higher amount of bone regeneration in the DFAT/PLGA group compared with that of the ASCs/PLGA and PLGA-alone groups at 2, 3 and 5 weeks after transplantation. Similarly, histomorphometric analysis showed that the DFAT/PLGA group had a significantly greater width of cementum, periodontal ligament and alveolar bone than the ASCs/PLGA and PLGA-alone groups. In addition, transplanted fluorescent-labeled DFAT cells were observed in the periodontal ligament beside the newly formed bone and cementum. These findings suggest that DFAT cells have a greater potential for enhancing periodontal tissue regeneration than ASCs. Therefore, DFAT cells are a promising cell source for periodontium regeneration.

  2. Evaluation of ES-derived neural progenitors as a potential source for cell replacement therapy in the gut

    Directory of Open Access Journals (Sweden)

    Sasselli Valentina

    2012-06-01

    Full Text Available Abstract Background Stem cell-based therapy has recently been explored for the treatment of disorders of the enteric nervous system (ENS). Pluripotent embryonic stem (ES) cells represent an attractive cell source; however, little or no information is currently available on how ES cells will respond to the gut environment. In this study, we investigated the ability of ES cells to respond to environmental cues derived from the ENS and related tissues, both in vitro and in vivo. Methods Neurospheres were generated from mouse ES cells (ES-NS) and co-cultured with organotypic preparations of gut tissue consisting of the longitudinal muscle layers with the adherent myenteric plexus (LM-MP). Results LM-MP co-culture led to a significant increase in the expression of pan-neuronal markers (βIII-tubulin, PGP 9.5) as well as more specialized markers (peripherin, nNOS) in ES-NS, both at the transcriptional and protein level. The increased expression was not associated with increased proliferation, thus confirming a true neurogenic effect. LM-MP preparations also exerted a myogenic effect on ES-NS, although to a lesser extent. After transplantation in vivo into the mouse pylorus, grafted ES-NS failed to acquire a distinct phenotype at least 1 week following transplantation. Conclusions This is the first study reporting that gut explants can induce neuronal differentiation of ES cells in vitro and induce the expression of nNOS, a key molecule in gastrointestinal motility regulation. The inability of ES-NS to adopt a neuronal phenotype after transplantation in the gastrointestinal tract is suggestive of the presence of local inhibitory influences that prevent ES-NS differentiation in vivo.

  3. Skeletal myogenic differentiation of human urine-derived cells as a potential source for skeletal muscle regeneration.

    Science.gov (United States)

    Chen, Wei; Xie, Minkai; Yang, Bin; Bharadwaj, Shantaram; Song, Lujie; Liu, Guihua; Yi, Shanhong; Ye, Gang; Atala, Anthony; Zhang, Yuanyuan

    2017-02-01

    Stem cells are regarded as possible cell therapy candidates for skeletal muscle regeneration. However, invasive harvesting of those cells can cause potential harvest-site morbidity. The goal of this study was to assess whether human urine-derived stem cells (USCs), obtained through non-invasive procedures, can differentiate into skeletal muscle lineage cells (Sk-MCs) and potentially be used for skeletal muscle regeneration. In this study, USCs were harvested from six healthy individuals aged 25-55. Expression profiles of cell-surface markers were assessed by flow cytometry. To optimize the myogenic differentiation medium, we selected two from four different types of myogenic differentiation media to induce the USCs. Differentiated USCs were identified with myogenic markers by gene and protein expression. USCs were implanted into the tibialis anterior muscles of nude mice for 1 month. The results showed that USCs displayed surface markers with positive staining for CD24, CD29, CD44, CD73, CD90, CD105, CD117, CD133, CD146, SSEA-4 and STRO-1, and negative staining for CD14, CD31, CD34 and CD45. After myogenic differentiation, a change in morphology was observed from 'rice-grain'-like cells to spindle-shaped cells. The USCs expressed specific Sk-MC transcripts and protein markers (myf5, myoD, myosin, and desmin) after being induced with different myogenic culture media. Implanted cells expressed Sk-MC markers stably in vivo. Our findings suggest that USCs are able to differentiate into the Sk-MC lineage in vitro and after being implanted in vivo. Thus, they might be a potential source for cell injection therapy in the use of skeletal muscle regeneration. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Impact of an extended source in laser ablation using pulsed digital holographic interferometry and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Amer, E., E-mail: eynas.amer@ltu.se [Lulea University of Technology, Department of Applied Physics and Mechanical Engineering, SE-971 87 Lulea (Sweden); Gren, P.; Kaplan, A.F.H.; Sjoedahl, M. [Lulea University of Technology, Department of Applied Physics and Mechanical Engineering, SE-971 87 Lulea (Sweden)

    2009-08-15

    Pulsed digital holographic interferometry has been used to study the effect of the laser spot diameter on the shock wave generated in the ablation process of an Nd:YAG laser pulse on a Zn target under atmospheric pressure. For different laser spot diameters and time delays, the propagation of the expanding vapour and of the shock wave was recorded by intensity maps calculated using the recorded digital holograms. From the latter, the phase maps, the refractive index and the density field can be derived. A model was developed that approximates the density distribution, in particular the ellipsoidal expansion characteristics. The induced shock wave has an ellipsoid shape that approaches a sphere for decreasing spot diameter. The ellipsoidal shock waves have almost the same centre offset towards the laser beam and the same aspect ratio for different time steps. The model facilitates the derivation of the particle velocity field. The method provides valuable quantitative results that are discussed, in particular in comparison with the simpler point source explosion theory.
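
    The "point source explosion theory" invoked for comparison is presumably the classical Sedov-Taylor similarity solution, in which a blast from an idealized point release of energy E into a gas of ambient density ρ0 expands spherically as

        \[
        R(t) = \xi_0 \left( \frac{E\, t^{2}}{\rho_0} \right)^{1/5},
        \]

    with ξ0 a dimensionless constant of order unity. The ellipsoidal, offset shock reconstructed from the holograms quantifies how a finite laser spot departs from this spherical idealization.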

  5. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of SNS LINAC (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using the Risk Spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with the SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS LINAC parts/systems are: 1) SCL (superconducting linac), front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is the redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Sufficient diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.
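
    As a generic illustration of why the recommended redundancy helps (a textbook reliability calculation under assumed constant failure rates, not the study's actual model or numbers):

        import math

        def r_exp(lmbda, t):
            """Reliability of one component with constant failure rate lmbda at time t."""
            return math.exp(-lmbda * t)

        lmbda = 1e-4   # assumed failure rate per hour, e.g. for an RF power supply controller
        t = 5000.0     # mission time, hours

        single = r_exp(lmbda, t)
        # 1-out-of-2 active redundancy: the system fails only if both units fail
        redundant = 1.0 - (1.0 - single) ** 2
        print(f"single unit: {single:.3f}, redundant pair: {redundant:.3f}")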

  6. Modeling the explosion-source region: An overview

    International Nuclear Information System (INIS)

    Glenn, L.A.

    1993-01-01

    The explosion-source region is defined as the region surrounding an underground explosion that cannot be described by elastic or anelastic theory. This region extends typically to ranges up to 1 km/(kt) 1/3 but for some purposes, such as yield estimation via hydrodynamic means (CORRTEX and HYDRO PLUS), the maximum range of interest is less by an order of magnitude. For the simulation or analysis of seismic signals, however, what is required is the time resolved motion and stress state at the inelastic boundary. Various analytic approximations have been made for these boundary conditions, but since they rely on near-field empirical data they cannot be expected to reliably extrapolate to different explosion sites. More important, without some knowledge of the initial energy density and the characteristics of the medium immediately surrounding the explosion, these simplified models are unable to distinguish chemical from nuclear explosions, identify cavity decoupling, or account for such phenomena as anomalous dissipation via pore collapse

  7. Modeling the Interest Rate Term Structure: Derivatives Contracts Dynamics and Evaluation

    Directory of Open Access Journals (Sweden)

    Pedro L. Valls Pereira

    2005-06-01

    Full Text Available This article deals with a model for the term structure of interest rates and the valuation of derivative contracts directly dependent on it. The work is of a theoretical nature and deals exclusively with continuous-time models, making ample use of results from stochastic calculus, and presents original contributions that we consider relevant to the development of fixed income market modeling. We develop a new multifactorial model of the term structure of interest rates. The model is based on the decomposition of the yield curve into the factors level, slope and curvature, and on the treatment of their collective dynamics. We show that this model may be applied to serve various objectives: analysis of bond price dynamics, valuation of derivative contracts, and also market risk management and the formulation of operational strategies, which is presented in another article.
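
    A widely used static parameterization of the level-slope-curvature decomposition that such models build on is the Nelson-Siegel yield curve, given here only for orientation (the article's own multifactor dynamics are richer):

        \[
        y(\tau) = \beta_0
        + \beta_1 \frac{1 - e^{-\lambda\tau}}{\lambda\tau}
        + \beta_2 \left( \frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau} \right),
        \]

    where β0, β1 and β2 act as level, slope and curvature factors and λ controls the maturity τ at which the curvature loading peaks.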

  8. Soil-landscape modelling using fuzzy c-means clustering of attribute data derived from a Digital Elevation Model (DEM).

    NARCIS (Netherlands)

    Bruin, de S.; Stein, A.

    1998-01-01

    This study explores the use of fuzzy c-means clustering of attribute data derived from a digital elevation model to represent transition zones in the soil-landscape. The conventional geographic model used for soil-landscape description is not able to properly deal with these. Fuzzy c-means
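
    The fuzzy c-means step itself can be sketched in a few lines. This is a generic implementation of the standard algorithm, not the paper's code; the input attributes are invented stand-ins for DEM derivatives:

        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
            """Minimal fuzzy c-means: returns cluster centres and membership matrix U.

            X is (n_samples, n_features), e.g. slope/curvature/wetness per cell;
            m > 1 is the fuzziness exponent."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
            for _ in range(n_iter):
                W = U ** m
                centres = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2.0 / (m - 1.0)))   # u_ij proportional to d_ij^(-2/(m-1))
                U /= U.sum(axis=1, keepdims=True)
            return centres, U

        X = np.random.rand(500, 3)            # synthetic terrain attributes
        centres, U = fuzzy_c_means(X)
        print(centres.shape, U[:2].round(2))  # soft memberships express transition zones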

  9. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  10. Neonatal Transplantation Confers Maturation of PSC-Derived Cardiomyocytes Conducive to Modeling Cardiomyopathy

    Directory of Open Access Journals (Sweden)

    Gun-Sik Cho

    2017-01-01

    Full Text Available Summary: Pluripotent stem cells (PSCs) offer unprecedented opportunities for disease modeling and personalized medicine. However, PSC-derived cells exhibit fetal-like characteristics and remain immature in a dish. This has emerged as a major obstacle for their application to late-onset diseases. We previously showed that there is a neonatal arrest of long-term cultured PSC-derived cardiomyocytes (PSC-CMs). Here, we demonstrate that PSC-CMs mature into adult CMs when transplanted into neonatal hearts. PSC-CMs became similar to adult CMs in morphology, structure, and function within a month of transplantation into rats. The similarity was further supported by single-cell RNA-sequencing analysis. Moreover, this in vivo maturation allowed patient-derived PSC-CMs to reveal the disease phenotype of arrhythmogenic right ventricular cardiomyopathy, which manifests predominantly in adults. This study lays a foundation for understanding human CM maturation and pathogenesis and can be instrumental in PSC-based modeling of adult heart diseases. : Pluripotent stem cell (PSC)-derived cells remain fetal-like, and this has become a major impediment to modeling adult diseases. Cho et al. find that PSC-derived cardiomyocytes mature into adult cardiomyocytes when transplanted into neonatal rat hearts. This method can serve as a tool to understand maturation and pathogenesis in human cardiomyocytes. Keywords: cardiomyocyte, maturation, iPS, cardiac progenitor, neonatal, disease modeling, cardiomyopathy, ARVC, T-tubule, calcium transient, sarcomere shortening

  11. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette Rohr

    2004-12-02

    This report documents progress made on the subject project during the period of March 1, 2004 through August 31, 2004. The TERESA Study is designed to investigate the role played by specific emissions sources and components in the induction of adverse health effects by examining the relative toxicity of coal combustion and mobile source (gasoline and/or diesel engine) emissions and their oxidative products. The study involves on-site sampling, dilution, and aging of coal combustion emissions at three coal-fired power plants, as well as mobile source emissions, followed by animal exposures incorporating a number of toxicological endpoints. The DOE-EPRI Cooperative Agreement (henceforth referred to as "the Agreement") for which this technical progress report has been prepared covers the analysis and interpretation of the field data collected at the first power plant (henceforth referred to as Plant 0, and located in the Upper Midwest), followed by the performance and analysis of similar field experiments at two additional coal-fired power plants (Plants 1 and 2) utilizing different coal types and with different plant configurations. Significant progress was made on the project during this reporting period, with field work being initiated at Plant 0. Initial testing of the stack sampling system and reaction apparatus revealed that primary particle concentrations were lower than expected in the emissions entering the mobile chemical laboratory. Initial animal exposures to primary emissions were carried out (Scenario 1) to ensure successful implementation of all study methodologies and toxicological assessments. Results indicated no significant toxicological effects in response to primary emissions exposures. Exposures were then carried out to diluted, oxidized, neutralized emissions with the addition of secondary organic aerosol (Scenario 5), both during the day and also at night when primary particle concentrations in the sampled stack emissions

  12. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    Full Text Available The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with

  13. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    Science.gov (United States)

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  14. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  15. Modeling and reliability analysis of three phase z-source AC-AC converter

    Directory of Open Access Journals (Sweden)

    Prasad Hanuman

    2017-12-01

    Full Text Available This paper presents the small-signal modeling, using the state-space averaging technique, and reliability analysis of a three-phase z-source ac-ac converter. By controlling the shoot-through duty ratio, it can operate in buck-boost mode and maintain the desired output voltage during voltage sag and surge conditions. It has a faster dynamic response and higher efficiency than the traditional voltage regulator. Small-signal analysis derives the different control transfer functions, which leads to the design of a suitable controller for a closed-loop system under supply voltage variation. The closed-loop system of the converter with a PID controller eliminates the transients in the output voltage and provides a steady-state regulated output. The proposed model was designed in RT-LAB and executed in a field-programmable gate array (FPGA)-based real-time digital simulator at a fixed time step of 10 μs and a constant switching frequency of 10 kHz. The simulator was developed using the very high speed integrated circuit hardware description language (VHDL), making it versatile and portable. Hardware-in-the-loop (HIL) simulation results are presented to corroborate the MATLAB simulation results under supply voltage variation of the three-phase z-source ac-ac converter. Reliability analysis has been applied to the converter to find the failure rate of its different components.
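
    In state-space averaging generally (sketched here in its generic two-switch-state form; the paper's matrices are specific to the three-phase z-source topology), the converter dynamics are averaged over the switching states weighted by the duty ratio d:

        \[
        \dot{\bar x} = \big[ d A_1 + (1-d) A_2 \big] \bar x
                     + \big[ d B_1 + (1-d) B_2 \big] \bar u ,
        \]

    and perturbing d = D + \hat d and \bar x = X + \hat x yields small-signal transfer functions such as \hat v_o(s)/\hat d(s) = C (sI - A)^{-1} B_d, with A = D A_1 + (1-D) A_2 and B_d = (A_1 - A_2) X + (B_1 - B_2) U, from which a PID controller can be designed.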

  16. A Semianalytical Solution of the Fractional Derivative Model and Its Application in Financial Market

    Directory of Open Access Journals (Sweden)

    Lina Song

    2018-01-01

    Full Text Available Fractional differential equations have been introduced into financial theory, presenting new ideas and tools for theoretical research and practical applications. In this work, an approximate semianalytical solution of the time-fractional European option pricing model is derived by combining an enhanced Adomian decomposition method with the finite difference method. The result is then applied to China's financial market. The work tests the feasibility of the fractional derivative model in an actual financial market.
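
    The time-fractional European option model referred to here is usually a Black-Scholes equation whose time derivative of order α ∈ (0, 1] is taken in the Caputo sense (a generic form; the paper's exact formulation may differ):

        \[
        \frac{\partial^{\alpha} V}{\partial t^{\alpha}}
        + \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
        + r S \frac{\partial V}{\partial S} - r V = 0,
        \]

    which reduces to the classical Black-Scholes equation when α = 1.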

  17. Versatile Markovian models for networks with asymmetric TCP sources

    NARCIS (Netherlands)

    van Foreest, N.D.; Haverkort, Boudewijn R.H.M.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2004-01-01

    In this paper we use Stochastic Petri Nets (SPNs) to study the interaction of multiple TCP sources that share one or two buffers, thereby considerably extending earlier work. We first consider two sources sharing a buffer and investigate the consequences of two popular assumptions for the loss

  18. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  19. Construction of integrable model Kohn-Sham potentials by analysis of the structure of functional derivatives

    International Nuclear Information System (INIS)

    Gaiduk, Alex P.; Staroverov, Viktor N.

    2011-01-01

    A directly approximated exchange-correlation potential should, by construction, be a functional derivative of some density functional in order to avoid unphysical results. Using generalized gradient approximations (GGAs) as an example, we show that functional derivatives of explicit density functionals have a very rigid inner structure, the knowledge of which allows one to build the entire functional derivative from a small part. Based on this analysis, we develop a method for direct construction of integrable Kohn-Sham potentials. As an illustration, we transform the model potential of van Leeuwen and Baerends (which is not a functional derivative) into a semilocal exchange potential that has a parent GGA, yields accurate energies, and is free from the artifacts inherent in existing semilocal potential approximations.
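
    The "rigid inner structure" at issue is that of the GGA functional derivative: for E_xc[ρ] = ∫ f(ρ, ∇ρ) dr, the exchange-correlation potential must take the Euler-Lagrange form (a standard result, stated here for context):

        \[
        v_{xc}(\mathbf r) = \frac{\delta E_{xc}}{\delta \rho(\mathbf r)}
        = \frac{\partial f}{\partial \rho}
        - \nabla \cdot \frac{\partial f}{\partial \nabla \rho},
        \]

    so a directly modeled potential that cannot be written in this form, such as the original van Leeuwen-Baerends potential, is not the functional derivative of any such functional.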

  20. A model for the derivation of new transport limits for non-fixed contamination

    International Nuclear Information System (INIS)

    Thierfeldt, S.; Lorenz, B.; Hesse, J.

    2004-01-01

    The IAEA Regulations for the Safe Transport of Radioactive Material contain requirements for contamination limits on packages and conveyances used for the transport of radioactive material. Current contamination limits for packages and conveyances under routine transport conditions have been derived from a model proposed by Fairbairn more than 40 years ago. This model has proven effective if used with pragmatism, but it is based on very conservative as well as extremely simple assumptions which are no longer appropriate and which are not compatible with ICRP recommendations regarding radiation protection standards. Therefore, a new model has now been developed which reflects all steps of the transport process. The derivation of this model has been fostered by the IAEA by initiating a Co-ordinated Research Project. The results of the calculations using this model could be directly applied as new nuclide-specific transport limits for non-fixed contamination.

  1. Discovery of Antibiotics-derived Polymers for Gene Delivery using Combinatorial Synthesis and Cheminformatics Modeling

    Science.gov (United States)

    Potta, Thrimoorthy; Zhen, Zhuo; Grandhi, Taraka Sai Pavan; Christensen, Matthew D.; Ramos, James; Breneman, Curt M.; Rege, Kaushal

    2014-01-01

    We describe the combinatorial synthesis and cheminformatics modeling of aminoglycoside antibiotics-derived polymers for transgene delivery and expression. Fifty-six polymers were synthesized by polymerizing aminoglycosides with diglycidyl ether cross-linkers. Parallel screening resulted in identification of several lead polymers that resulted in high transgene expression levels in cells. The role of polymer physicochemical properties in determining efficacy of transgene expression was investigated using Quantitative Structure-Activity Relationship (QSAR) cheminformatics models based on Support Vector Regression (SVR) and ‘building block’ polymer structures. The QSAR model exhibited high predictive ability, and investigation of descriptors in the model, using molecular visualization and correlation plots, indicated that physicochemical attributes related to both, aminoglycosides and diglycidyl ethers facilitated transgene expression. This work synergistically combines combinatorial synthesis and parallel screening with cheminformatics-based QSAR models for discovery and physicochemical elucidation of effective antibiotics-derived polymers for transgene delivery in medicine and biotechnology. PMID:24331709
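
    A hedged sketch of the QSAR workflow named above (generic scikit-learn Support Vector Regression on made-up descriptor data; the authors' descriptors, kernel settings, and readouts are not reproduced here):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(42)
        X = rng.random((56, 10))     # 56 polymers x 10 physicochemical descriptors (synthetic)
        y = X @ rng.random(10) + 0.05 * rng.standard_normal(56)  # synthetic expression readout

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
        model.fit(X_tr, y_tr)
        print("test R^2:", round(model.score(X_te, y_te), 2))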

  2. A model for the derivation of new transport limits for non-fixed contamination

    Energy Technology Data Exchange (ETDEWEB)

    Thierfeldt, S. [Brenk Systemplanung GmbH, Aachen (Germany); Lorenz, B. [GNS Gesellschaft fuer Nuklearservice, Essen (Germany); Hesse, J. [RWE Power AG, Essen (Germany)

    2004-07-01

    The IAEA Regulations for the Safe Transport of Radioactive Material contain requirements for contamination limits on packages and conveyances used for the transport of radioactive material. Current contamination limits for packages and conveyances under routine transport conditions have been derived from a model proposed by Fairbairn more than 40 years ago. This model has proven effective if used with pragmatism, but it is based on very conservative as well as extremely simple assumptions which are no longer appropriate and which are not compatible with ICRP recommendations regarding radiation protection standards. Therefore, a new model has now been developed which reflects all steps of the transport process. The derivation of this model has been fostered by the IAEA by initiating a Co-ordinated Research Project. The results of the calculations using this model could be directly applied as new nuclide-specific transport limits for non-fixed contamination.

  3. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  4. Multicriteria decision making model for choosing between open source and non-open source software

    Directory of Open Access Journals (Sweden)

    Edmilson Alves de Moraes

    2008-09-01

    This article proposes the use of a multicriteria method for supporting the decision on a problem where the intent is to choose between open source and non-open source software. The study shows how a decision-making method can be used to structure the problem and simplify the decision maker's job. The Analytic Hierarchy Process (AHP) method is described step-by-step and its benefits and flaws are discussed. Following the theoretical discussion, a multiple case study is presented, in which two companies use the decision-making method. The analysis was supported by Expert Choice, a software package developed based on the AHP framework.
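
    The core AHP calculation the article steps through can be sketched as follows: derive the priority vector from the principal eigenvector of a pairwise comparison matrix, then check Saaty's consistency ratio. The matrix entries below are illustrative, not taken from the case study.

        # AHP priority vector and consistency check (illustrative matrix).
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])       # e.g. cost vs. support vs. flexibility

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)            # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                           # priority vector (criterion weights)

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)   # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
        print("weights:", w.round(3), "CR = %.3f" % (ci / ri))  # CR < 0.1 is acceptable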

  5. Laboratory Plasma Source as an MHD Model for Astrophysical Jets

    Science.gov (United States)

    Mayo, Robert M.

    1997-01-01

    The significance of the work described herein lies in the demonstration of Magnetized Coaxial Plasma Gun (MCG) devices like CPS-1 to produce energetic laboratory magneto-flows with embedded magnetic fields that can be used as a simulation tool to study the flow interaction dynamics of jet flows, to demonstrate the magnetic acceleration and collimation of flows with primarily toroidal fields, and to study cross-field transport in turbulent accreting flows. Since plasmas produced in MCG devices have magnetic topology and MHD flow regime similarity to stellar and extragalactic jets, we expect that careful investigation of these flows in the laboratory will reveal fundamental physical mechanisms influencing astrophysical flows. Discussion in the next section (sec. 2) focuses on recent results describing collimation, leading flow surface interaction layers, and turbulent accretion. The primary objectives for a new three-year effort would involve the development and deployment of novel electrostatic, magnetic, and visible plasma diagnostic techniques to measure plasma and flow parameters of the CPS-1 device in the flow chamber downstream of the plasma source to study (1) mass ejection, morphology, collimation and stability of energetic outflows, (2) the effects of external magnetization on collimation and stability, (3) the interaction of such flows with background neutral gas, the generation of visible emission in such interactions, and the effect of neutral clouds on jet flow dynamics, and (4) the cross-magnetic-field transport of turbulent accreting flows. The applicability of existing laboratory plasma facilities to the study of stellar and extragalactic plasma should be exploited to elucidate underlying physical mechanisms that cannot be ascertained through astrophysical observation, and to provide a baseline for a wide variety of proposed models, MHD and otherwise. The work proposed herein represents a continued effort on a novel approach in relating laboratory experiments to

  6. Near Source 2007 Peru Tsunami Runup Observations and Modeling

    Science.gov (United States)

    Borrero, J. C.; Fritz, H. M.; Kalligeris, N.; Broncano, P.; Ortega, E.

    2008-12-01

    On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights and massive inundation distances of up to 2 km were measured. Numerical modeling of the earthquake source and tsunami suggests that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. As with all near-field tsunamis, the waves struck within minutes of the massive ground shaking. Spontaneous evacuations coordinated by the Peruvian Coast Guard minimized the fatalities and illustrate the importance of community-based education and awareness programs. The residents of the fishing village Lagunilla were unaware of the tsunami hazard after an earthquake and did not evacuate, which resulted in three fatalities. Despite the relatively benign tsunami effects at Pisco from this event, the tsunami hazard for this city (and its liquefied natural gas terminal) should not be underestimated. Between 1687 and 1868, the city of Pisco was destroyed four times by tsunami waves. Since then, two events (1974 and 2007) have resulted in partial inundation and moderate damage. The fact that potentially devastating tsunami runup heights were observed immediately south of the peninsula only serves to underscore this point.

  7. Modeling neurodegenerative diseases with patient-derived induced pluripotent cells: Possibilities and challenges.

    Science.gov (United States)

    Poon, Anna; Zhang, Yu; Chandrasekaran, Abinaya; Phanthong, Phetcharat; Schmid, Benjamin; Nielsen, Troels T; Freude, Kristine K

    2017-10-25

    The rising prevalence of progressive neurodegenerative diseases coupled with increasing longevity poses an economic burden at individual and societal levels. There is currently no effective cure for the majority of neurodegenerative diseases and disease-affected tissues from patients have been difficult to obtain for research and drug discovery in pre-clinical settings. While the use of animal models has contributed invaluable mechanistic insights and potential therapeutic targets, the translational value of animal models could be further enhanced when combined with in vitro models derived from patient-specific induced pluripotent stem cells (iPSCs) and isogenic controls generated using CRISPR-Cas9 mediated genome editing. The iPSCs are self-renewable and capable of being differentiated into the cell types affected by the diseases. These in vitro models based on patient-derived iPSCs provide the opportunity to model disease development, uncover novel mechanisms and test potential therapeutics. Here we review findings from iPSC-based modeling of selected neurodegenerative diseases, including Alzheimer's disease, frontotemporal dementia and spinocerebellar ataxia. Furthermore, we discuss the possibilities of generating three-dimensional (3D) models using the iPSCs-derived cells and compare their advantages and disadvantages to conventional two-dimensional (2D) models. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette Rohr

    2006-03-31

    This report documents progress made on the subject project during the period of September 1, 2005 through February 28, 2006. The TERESA Study is designed to investigate the role played by specific emissions sources and components in the induction of adverse health effects by examining the relative toxicity of coal combustion and mobile source (gasoline and/or diesel engine) emissions and their oxidative products. The study involves on-site sampling, dilution, and aging of coal combustion emissions at three coal-fired power plants, as well as mobile source emissions, followed by animal exposures incorporating a number of toxicological endpoints. The DOE-EPRI Cooperative Agreement (henceforth referred to as "the Agreement") for which this technical progress report has been prepared covers the performance and analysis of field experiments at the first TERESA plant, located in the Upper Midwest and henceforth referred to as Plant 0, and at two additional coal-fired power plants (Plants 1 and 2) utilizing different coal types and with different plant configurations. During this reporting period, data processing and analyses were completed for exposure and toxicological data collected during the field campaign at Plant 1, located in the Southeast. To recap from the previous progress report, Stage I toxicological assessments were carried out in normal Sprague-Dawley rats, and Stage II assessments were carried out in a compromised model (myocardial infarction (MI) model). Normal rats were exposed to the following atmospheric scenarios: (1) primary particles; (2) oxidized emissions; (3) oxidized emissions + SOA (this scenario was repeated); and (4) oxidized emissions + ammonia + SOA. Compromised animals were exposed to oxidized emissions + SOA (this scenario was also conducted in replicate). Mass concentrations in exposure atmospheres ranged from 13.9 µg/m³ for the primary particle scenario (P) to 385 µg/m³ for one of the oxidized

  9. Hydrologic Derivatives for Modeling and Analysis—A new global high-resolution database

    Science.gov (United States)

    Verdin, Kristine L.

    2017-07-17

    The U.S. Geological Survey has developed a new global high-resolution hydrologic derivative database. Loosely modeled on the HYDRO1k database, this new database, entitled Hydrologic Derivatives for Modeling and Analysis, provides comprehensive and consistent global coverage of topographically derived raster layers (digital elevation model data, flow direction, flow accumulation, slope, and compound topographic index) and vector layers (streams and catchment boundaries). The coverage of the data is global, and the underlying digital elevation model is a hybrid of three datasets: HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales), GMTED2010 (Global Multi-resolution Terrain Elevation Data 2010), and the SRTM (Shuttle Radar Topography Mission). For most of the globe south of 60°N., the raster resolution of the data is 3 arc-seconds, corresponding to the resolution of the SRTM. For the areas north of 60°N., the resolution is 7.5 arc-seconds (the highest resolution of the GMTED2010 dataset) except for Greenland, where the resolution is 30 arc-seconds. The streams and catchments are attributed with Pfafstetter codes, based on a hierarchical numbering system, that carry important topological information. This database is appropriate for use in continental-scale modeling efforts. The work described in this report was conducted by the U.S. Geological Survey in cooperation with the National Aeronautics and Space Administration Goddard Space Flight Center.
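
    One of the raster layers listed, the compound topographic index, can be computed from flow accumulation and slope as CTI = ln(a / tan β); the sketch below uses toy arrays rather than the actual database grids, and the cell size is only a nominal stand-in for 3 arc-seconds.

        # Compound topographic index (wetness index) from toy grids.
        import numpy as np

        cell = 90.0     # nominal cell size in metres (~3 arc-seconds at the equator)
        flow_acc = np.array([[0, 1, 3], [2, 6, 14], [4, 12, 30]], dtype=float)
        slope_deg = np.array([[8, 6, 4], [7, 5, 3], [6, 4, 2]], dtype=float)

        a = (flow_acc + 1.0) * cell                           # specific catchment area per unit width
        tan_b = np.tan(np.radians(slope_deg)).clip(min=1e-6)  # avoid division by zero on flats
        cti = np.log(a / tan_b)
        print(cti.round(2))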

  10. A Systems Thinking Model for Open Source Software Development in Social Media

    OpenAIRE

    Mustaquim, Moyen

    2010-01-01

    In this paper a social media model, based on systems thinking methodology, is proposed to understand the behavior of the open source software development community working in social media. The proposed model is focused on relational influences of two different systems - social media and the open source community. This model can be useful for taking decisions which are complicated and where solutions are not apparent. Based on the proposed model, an efficient way of working in open source developm...

  11. Intimal smooth muscle cells are a source but not a sensor of anti-inflammatory CYP450 derived oxylipins

    Energy Technology Data Exchange (ETDEWEB)

    Thomson, Scott [Comparative Biomedical Sciences, Royal Veterinary College, Royal College Street, London NW1 0TU (United Kingdom); Edin, Matthew L.; Lih, Fred B. [Division of Intramural Research, NIEHS/NIH, Research Triangle Park, NC 27709 (United States); Davies, Michael [Comparative Biomedical Sciences, Royal Veterinary College, Royal College Street, London NW1 0TU (United Kingdom); Yaqoob, Muhammad M. [Barts and the London, Queen Mary University, Charterhouse Square, London EC1M 6BQ (United Kingdom); Hammock, Bruce D. [Department of Entomology and Comprehensive Cancer Center, University of California, Davis, CA 95616-8584 (United States); Gilroy, Derek [University College London, University Street, London (United Kingdom); Zeldin, Darryl C. [Division of Intramural Research, NIEHS/NIH, Research Triangle Park, NC 27709 (United States); Bishop-Bailey, David, E-mail: dbishopbailey@rvc.ac.uk [Comparative Biomedical Sciences, Royal Veterinary College, Royal College Street, London NW1 0TU (United Kingdom)

    2015-08-07

    Vascular pathologies are associated with changes in the presence and expression of morphologically distinct vascular smooth muscle cells. In particular, in complex human vascular lesions and models of disease in pigs and rodents, an intimal smooth muscle cell (iSMC) which exhibits a stable epithelioid or rhomboid phenotype in culture is often found to be present in high numbers, and may represent the reemergence of a distinct developmental vascular smooth muscle cell phenotype. The CYP450-oxylipin - soluble epoxide hydrolase (sEH) pathway is currently of great interest in targeting for cardiovascular disease. sEH inhibitors limit the development of hypertension, diabetes, atherosclerosis and aneurysm formation in animal models. We have investigated the expression of CYP450-oxylipin-sEH pathway enzymes and their metabolites in paired intimal (iSMC) and medial (mSMC) cells isolated from rat aorta. iSMC basally released significantly larger amounts of epoxy-oxylipin CYP450 products from eicosapentaenoic acid > docosahexaenoic acid > arachidonic acid > linoleic acid, and expressed higher levels of CYP2C12, CYP2B1, but not CYP2J mRNA compared to mSMC. When stimulated with the pro-inflammatory TLR4 ligand LPS, epoxy-oxylipin production did not change greatly in iSMC. In contrast, LPS induced epoxy-oxylipin products in mSMC and induced CYP2J4. iSMC and mSMC express sEH which metabolizes primary epoxy-oxylipins to their dihydroxy-counterparts. The sEH inhibitors TPPU or AUDA inhibited LPS-induced NFκB activation and iNOS induction in mSMC, but had no effect on NFκB nuclear localization or inducible nitric oxide synthase in iSMC; effects which were recapitulated in part by addition of authentic epoxy-oxylipins. iSMCs are a rich source but not a sensor of anti-inflammatory epoxy-oxylipins. Complex lesions that contain high levels of iSMCs may be more resistant to the protective effects of sEH inhibitors. - Highlights: • We examined oxylipin production in different

  12. Source apportionment of airborne particulates through receptor modeling: Indian scenario

    Science.gov (United States)

    Banerjee, Tirthankar; Murari, Vishnu; Kumar, Manish; Raju, M. P.

    2015-10-01

    Airborne particulate chemistry is mostly governed by the associated sources, and apportionment of specific sources is essential to delineate explicit control strategies. The present submission initially deals with publications (1980s-2010s) of Indian origin which report regional heterogeneities of particulate concentrations with reference to associated species. Such meta-analyses clearly indicate the presence of reservoirs of both primary and secondary aerosols in different geographical regions. Further, the identification of specific signatory molecules for each individual source category was evaluated in terms of scientific merit and repeatability. Source signatures mostly resemble international profiles, while in selected cases they lack appropriateness. In India, source apportionment (SA) of airborne particulates was initiated as far back as 1985 through factor analysis; however, principal component analysis (PCA) shares the major proportion of applications (34%), followed by enrichment factor (EF, 27%), chemical mass balance (CMB, 15%) and positive matrix factorization (PMF, 9%). Mainstream SA analyses identify earth crust and road dust resuspension (traced by Al, Ca, Fe, Na and Mg) as a principal source (6-73%), followed by vehicular emissions (traced by Fe, Cu, Pb, Cr, Ni, Mn, Ba and Zn; 5-65%), industrial emissions (traced by Co, Cr, Zn, V, Ni, Mn, Cd; 0-60%), fuel combustion (traced by K, NH4+, SO4-, As, Te, S, Mn; 4-42%), marine aerosols (traced by Na, Mg, K; 0-15%) and biomass/refuse burning (traced by Cd, V, K, Cr, As, TC, Na, K, NH4+, NO3-, OC; 1-42%). In most cases, temporal variations of individual source contributions for a specific geographic region exhibit radical heterogeneity, possibly due to non-specific assignment of individual tracers to sources, exacerbated by methodological weaknesses, inappropriate sample sizes, the influence of secondary aerosols and inadequate emission inventories. Conclusively, a number of challenging
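
    Receptor-model factorizations such as PMF are normally run with dedicated tools (e.g. EPA PMF); as a rough stand-in, the sketch below factorizes a synthetic samples-by-species matrix with scikit-learn's non-negative matrix factorization, which shares the non-negativity constraint but omits PMF's weighting of residuals by measurement uncertainty.

        # NMF as a simplified stand-in for PMF-style source apportionment.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(1)
        G_true = rng.gamma(2.0, size=(100, 4))    # synthetic source contributions
        F_true = rng.gamma(2.0, size=(4, 12))     # synthetic source profiles (12 species)
        X = G_true @ F_true + rng.normal(scale=0.05, size=(100, 12)).clip(min=0)

        model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
        G = model.fit_transform(X)                # estimated contributions
        F = model.components_                     # estimated profiles
        print("mean contribution share per factor:", (G.sum(axis=0) / G.sum()).round(2))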

  13. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the object of imaging, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions. Any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
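
    The Klein-Nishina differential cross-section at the heart of such a kernel can be evaluated directly; the sketch below uses the standard formula for unpolarized photons, with 140 keV chosen only as a representative SPECT photon energy.

        # Klein-Nishina differential cross-section dSigma/dOmega (m^2/sr).
        import numpy as np

        R_E = 2.8179403262e-15      # classical electron radius, m
        MEC2 = 0.51099895           # electron rest energy, MeV

        def klein_nishina(e_mev, theta):
            """Differential cross-section for photon energy e_mev scattered by angle theta."""
            k = e_mev / MEC2
            p = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # scattered/incident energy ratio
            return 0.5 * R_E**2 * p**2 * (p + 1.0 / p - np.sin(theta)**2)

        theta = np.linspace(0.0, np.pi, 5)
        print(klein_nishina(0.140, theta))   # 140 keV, a typical SPECT energy (Tc-99m)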

  14. Amniotic Fluid Stem Cells: A Novel Source for Modeling of Human Genetic Diseases

    Directory of Open Access Journals (Sweden)

    Ivana Antonucci

    2016-04-01

    In recent years, great interest has been devoted to the use of induced pluripotent stem (iPS) cells for the modeling of human genetic diseases, due to the possibility of reprogramming somatic cells of affected patients into pluripotent cells, enabling differentiation into several cell types and allowing investigations into the molecular mechanisms of the disease. However, the protocol of iPS generation still suffers from technical limitations, showing low efficiency and being expensive and time consuming. Amniotic fluid stem (AFS) cells represent a potential alternative novel source of stem cells for the modeling of human genetic diseases. In fact, by means of prenatal diagnosis, a number of fetuses affected by chromosomal or Mendelian diseases can be identified, and the amniotic fluid collected for genetic testing can be used, after diagnosis, for the isolation, culture and differentiation of AFS cells. This can provide a useful stem cell model for the investigation of the molecular basis of the diagnosed disease without the necessity of producing iPS cells, since AFS cells show some features of pluripotency and are able to differentiate into cells derived from all three germ layers in vitro. In this article, we describe the potential benefits provided by using AFS cells in the modeling of human genetic diseases.

  15. Statistical modelling approach to derive quantitative nanowastes classification index; estimation of nanomaterials exposure

    CSIR Research Space (South Africa)

    Ntaka, L

    2013-08-01

    In this work, statistical inference approaches, specifically non-parametric bootstrapping and a linear model, were applied. Data used to develop the model were sourced from the literature. 104 data points with information on aggregation, natural organic matter...
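
    A minimal sketch of the non-parametric bootstrap around a linear model of the kind described above is given below, with synthetic data standing in for the 104 literature data points.

        # Case-resampling bootstrap of a linear-model slope (synthetic data).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 104
        x = rng.uniform(0.0, 1.0, size=n)     # e.g. a normalized exposure predictor
        y = 1.5 * x + 0.3 + rng.normal(scale=0.2, size=n)

        def fit_slope(xs, ys):
            return np.polyfit(xs, ys, 1)[0]

        boot = np.empty(2000)
        for b in range(boot.size):            # resample cases with replacement
            idx = rng.integers(0, n, size=n)
            boot[b] = fit_slope(x[idx], y[idx])

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print("slope 95%% bootstrap CI: [%.3f, %.3f]" % (lo, hi))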

  16. Studies and modeling of cold neutron sources; Etude et modelisation des sources froides de neutron

    Energy Technology Data Exchange (ETDEWEB)

    Campioni, G

    2004-11-15

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis has been organized along the following three axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were designed, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte-Carlo parametric studies with CPU-time savings of up to a factor of 50. A coupling module, simulating neutron guides, has also been developed and implemented in the Monte-Carlo code McStas. Thirdly, a complete study for the validation of the installed calculation chain. These studies focus on three cold sources currently in operation: SP1 of the Orphee reactor and two other sources (SFH and SFV) of the HFR at the Laue Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  17. Evaluation of model-simulated source contributions to tropospheric ozone with aircraft observations in the factor-projected space

    Directory of Open Access Journals (Sweden)

    Y. Yoshida

    2008-03-01

    Trace gas measurements from the TOPSE and TRACE-P experiments and corresponding global GEOS-Chem model simulations are analyzed with the Positive Matrix Factorization (PMF) method for model evaluation purposes. Specifically, we evaluate the model-simulated contributions to O3 variability from stratospheric transport, intercontinental transport, and production from urban/industry and biomass burning/biogenic sources. We select a suite of relatively long-lived tracers, including 7 chemicals (O3, NOy, PAN, CO, C3H8, CH3Cl, and 7Be) and 1 dynamic tracer (potential temperature). The largest discrepancy is found in the stratospheric contribution to 7Be. The model underestimates this contribution by a factor of 2-3, corresponding well to a reduction of the 7Be source by the same magnitude in the default setup of the standard GEOS-Chem model. In contrast, we find that the simulated O3 contributions from stratospheric transport are in reasonable agreement with those derived from the measurements. However, the springtime increasing trend over North America derived from the measurements is largely underestimated in the model, indicating that the magnitude of the simulated stratospheric O3 source is reasonable but the temporal distribution needs improvement. The simulated O3 contributions from long-range transport and production from urban/industry and biomass burning/biogenic emissions are also in reasonable agreement with those derived from the measurements, although significant discrepancies are found for some regions.

  18. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    Science.gov (United States)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of finite elements with the high accuracy of spectral methods. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As a kind of high-order complete orthogonal polynomials, the GLC have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results; with increasing SEM order, the modeling accuracy improves markedly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
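
    The Gauss-Lobatto-Chebyshev collocation points underlying the basis mentioned above have the closed form x_j = cos(pi j / N); the sketch below generates them and shows their clustering toward the element edges, a property associated with the spectral accuracy of the method.

        # Gauss-Lobatto-Chebyshev (GLC) collocation points on [-1, 1].
        import numpy as np

        def glc_nodes(order):
            """GLC points x_j = cos(pi * j / N), j = 0..N."""
            j = np.arange(order + 1)
            return np.cos(np.pi * j / order)

        for n in (4, 8):
            # Nodes cluster near the endpoints as the order grows.
            print(f"order {n}:", np.sort(glc_nodes(n)).round(3))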

  19. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    Science.gov (United States)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Re_t = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar flux models towards predicting both the mean concentration and the plume structure. Since algebraic flux models do not increase the computational effort substantially, the results indicate that the use of tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.

  20. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette Rohr

    2005-09-30

    This report documents progress made on the subject project during the period of March 1, 2005 through August 31, 2005. The TERESA Study is designed to investigate the role played by specific emissions sources and components in the induction of adverse health effects by examining the relative toxicity of coal combustion and mobile source (gasoline and/or diesel engine) emissions and their oxidative products. The study involves on-site sampling, dilution, and aging of coal combustion emissions at three coal-fired power plants, as well as mobile source emissions, followed by animal exposures incorporating a number of toxicological endpoints. The DOE-EPRI Cooperative Agreement (henceforth referred to as "the Agreement") for which this technical progress report has been prepared covers the performance and analysis of field experiments at the first TERESA plant, located in the Upper Midwest and henceforth referred to as Plant 0, and at two additional coal-fired power plants (Plants 1 and 2) utilizing different coal types and with different plant configurations. During this reporting period, fieldwork was completed at Plant 1, located in the Southeast. Stage I toxicological assessments were carried out in normal Sprague-Dawley rats, and Stage II assessments were carried out in a compromised model (myocardial infarction (MI) model). Normal rats were exposed to the following atmospheric scenarios: (1) primary particles; (2) oxidized emissions; (3) oxidized emissions + secondary organic aerosol (SOA) (this scenario was repeated); and (4) oxidized emissions + ammonia + SOA. Compromised animals were exposed to oxidized emissions + SOA (this scenario was also conducted in replicate). Stage I assessment endpoints included breathing pattern/pulmonary function; in vivo chemiluminescence (an indicator of oxidative stress); blood cytology; bronchoalveolar lavage (BAL) fluid analysis; and histopathology. Stage II assessments included continuous ECG monitoring via

  1. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    Energy Technology Data Exchange (ETDEWEB)

    Annette Rohr

    2005-03-31

    This report documents progress made on the subject project during the period of September 1, 2004 through February 28, 2005. The TERESA Study is designed to investigate the role played by specific emissions sources and components in the induction of adverse health effects by examining the relative toxicity of coal combustion and mobile source (gasoline and/or diesel engine) emissions and their oxidative products. The study involves on-site sampling, dilution, and aging of coal combustion emissions at three coal-fired power plants, as well as mobile source emissions, followed by animal exposures incorporating a number of toxicological endpoints. The DOE-EPRI Cooperative Agreement (henceforth referred to as "the Agreement") for which this technical progress report has been prepared covers the performance and analysis of field experiments at the first TERESA plant, located in the Upper Midwest and henceforth referred to as Plant 0, and at two additional coal-fired power plants (Plants 1 and 2) utilizing different coal types and with different plant configurations. During this reporting period, all fieldwork at Plant 0 was completed. Stack sampling was conducted in October to determine if there were significant differences between the in-stack PM concentrations and the diluted concentrations used for the animal exposures. Results indicated no significant differences and therefore confidence that the revised stack sampling methodology described in the previous semiannual report is appropriate for use in the Project. Animal exposures to three atmospheric scenarios were carried out. From October 4-7, we conducted exposures to oxidized emissions with the addition of secondary organic aerosol (SOA). Later in October, exposures to the most complex scenario (oxidized, neutralized emissions plus SOA) were repeated to ensure comparability with the results of the June/July exposures, where a different stack sampling setup was employed. In November, exposures

  2. Modelling of novel light sources based on asymmetric heterostructures

    International Nuclear Information System (INIS)

    Afonenko, A.A.; Kononenko, V.K.; Manak, I.S.

    1995-01-01

    For asymmetric quantum-well heterojunction laser sources, processes of carrier injection into the quantum wells are considered. In contrast to ordinary quantum-well light sources, the active layers in the novel nanocrystalline systems have different thicknesses and/or compositions. In addition, the wide-band-gap barrier layers separating the quantum wells may have a linear or parabolic energy potential profile. For various kinds of the structures, mathematical simulation of the dynamic response has been carried out. (author). 8 refs, 5 figs

  3. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2011-01-01

    Power electronics based microgrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models are presented and discussed. Experimental results are provided to validate the performance and robustness of the VSIs' functionality during islanded and grid-connected operations, allowing a seamless transition between these modes through control hierarchies by regulating frequency and voltage, and main-grid interactivity.
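
    Primary control of parallel VSIs of this kind typically rests on P-f and Q-V droop laws; a minimal sketch follows, with gains and setpoints that are illustrative rather than the paper's design values.

        # P-f / Q-V droop sketch for the primary control of a parallel VSI.
        def droop(p_w, q_var, f0=50.0, v0=230.0, m=1e-5, n=1e-4):
            """Return (frequency, voltage) references for measured P (W) and Q (var)."""
            f_ref = f0 - m * p_w       # active power sags the frequency reference
            v_ref = v0 - n * q_var     # reactive power sags the voltage amplitude
            return f_ref, v_ref

        print(droop(10e3, 2e3))        # 10 kW / 2 kvar share -> (49.9, 229.8)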

  4. Modeling Aerobic Carbon Source Degradation Processes using Titrimetric Data and Combined Respirometric-Titrimetric Data: Structural and Practical Identifiability

    DEFF Research Database (Denmark)

    Gernaey, Krist; Petersen, B.; Dochain, D.

    2002-01-01

    The structural and practical identifiability of a model for description of respirometric-titrimetric data derived from aerobic batch substrate degradation experiments of a CxHyOz carbon source with activated sludge was evaluated. The model processes needed to describe titrimetric data included su... the initial substrate concentration S_S(0) is known. The values found correspond to values reported in literature, but, interestingly, also seem able to reflect the occurrence of storage processes when pulses of acetate and dextrose are added. (C) 2002 Wiley Periodicals, Inc.

  5. Modeling of a three-phase reactor for bitumen-derived gas oil hydrotreating

    International Nuclear Information System (INIS)

    Chacon, R.; Canale, A.; Bouza, A.; Sanchez, Y.

    2012-01-01

    A three-phase reactor model for describing the hydrotreating reactions of bitumen-derived gas oil was developed. The model incorporates the mass-transfer resistance at the gas-liquid and liquid-solid interfaces and a kinetic rate expression based on a Langmuir-Hinshelwood-type model. We derived three correlations for determining the solubility of hydrogen (H2), hydrogen sulfide (H2S) and ammonia (NH3) in hydrocarbon mixtures, and the calculation of the catalyst effectiveness factor was included. Experimental data taken from the literature were used to determine the kinetic parameters (stoichiometric coefficients, reaction orders, reaction rate and adsorption constants for hydrodesulfurization (HDS) and hydrodenitrogenation (HDN)) and to validate the model under various operating conditions. Finally, we studied the effect of operating conditions such as pressure, temperature, LHSV, H2/feed ratio and the inhibiting effect of H2S on HDS and of NH3 on HDN. (author)
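
    A Langmuir-Hinshelwood-type rate expression of the general kind used in such reactor models can be sketched as below, with H2S competing for active sites; the functional form is a common choice in the HDS literature and the parameter values are illustrative, not the fitted kinetics.

        # Langmuir-Hinshelwood-type HDS rate with H2S inhibition (illustrative).
        def hds_rate(c_s, c_h2, c_h2s, k=2.5e-3, m=1.0, n=0.5, k_h2s=5.0):
            """Sulfur removal rate; adsorbed H2S blocks sites via the denominator."""
            return k * c_s**m * c_h2**n / (1.0 + k_h2s * c_h2s)**2

        print(hds_rate(c_s=0.10, c_h2=0.05, c_h2s=0.00))   # uninhibited rate
        print(hds_rate(c_s=0.10, c_h2=0.05, c_h2s=0.02))   # H2S slows HDS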

  6. Algebraic Bethe ansatz for a quantum integrable derivative nonlinear Schroedinger model

    International Nuclear Information System (INIS)

    Basu-Mallick, B.; Bhattacharyya, Tanaya

    2002-01-01

    We find that the quantum monodromy matrix associated with a derivative nonlinear Schroedinger (DNLS) model exhibits U(2) or U(1,1) symmetry depending on the sign of the related coupling constant. By using a variant of quantum inverse scattering method which is directly applicable to field theoretical models, we derive all possible commutation relations among the operator valued elements of such monodromy matrix. Thus, we obtain the commutation relation between creation and annihilation operators of quasi-particles associated with DNLS model and find out the S-matrix for two-body scattering. We also observe that, for some special values of the coupling constant, there exists an upper bound on the number of quasi-particles which can form a soliton state for the quantum DNLS model

  7. Source apportionment of fine particulate matter in China in 2013 using a source-oriented chemical transport model.

    Science.gov (United States)

    Shi, Zhihao; Li, Jingyi; Huang, Lin; Wang, Peng; Wu, Li; Ying, Qi; Zhang, Hongliang; Lu, Li; Liu, Xuejun; Liao, Hong; Hu, Jianlin

    2017-12-01

    China has been suffering high levels of fine particulate matter (PM2.5). Designing effective PM2.5 control strategies requires information about the contributions of different sources. In this study, a source-oriented Community Multiscale Air Quality (CMAQ) model was applied to quantitatively estimate the contributions of different source sectors to PM2.5 in China. Emissions of primary PM2.5 and of the gas pollutants SO2, NOx, and NH3, which are precursors of particulate sulfate, nitrate, and ammonium (SNA, major PM2.5 components in China), from eight source categories (power plants, residential sources, industries, transportation, open burning, sea salt, windblown dust and agriculture) were separately tracked to determine their contributions to PM2.5 in 2013. The industrial sector is the largest source of SNA in Beijing, Xi'an and Chongqing, followed by agriculture and power plants. Residential emissions are also important sources of SNA, especially in winter when severe pollution events often occur. Nationally, the contributions of different source sectors to annual total PM2.5 from high to low are industries, residential sources, agriculture, power plants, transportation, windblown dust, open burning and sea salt. Provincially, residential sources and industries are the major anthropogenic sources of primary PM2.5, while industries, agriculture, power plants and transportation are important for SNA in most provinces. For total PM2.5, residential and industrial emissions are the top two sources, with a combined contribution of 40-50% in most provinces. The contributions of power plants and agriculture to total PM2.5 are each about 10%. Secondary organic aerosol accounts for about 10% of annual PM2.5 in most provinces, with higher contributions in southern provinces such as Yunnan (26%), Hainan (25%) and Taiwan (21%). Windblown dust is an important source in western provinces such as Xizang (55% of total PM2.5), Qinghai (74%), Xinjiang (59

  8. Derivation and Numerical Approximation of the Quantum Drift Diffusion Model for Semiconductors

    International Nuclear Information System (INIS)

    Ohnmar Nwe

    2004-06-01

    This paper is concerned with the study of the quantum drift diffusion equation for semiconductors. The derivation of the mathematical model, which describes the electron flow through a semiconductor device due to the application of a voltage, is considered, and the model is studied from a numerical point of view using several methods.

  9. The use of quantum chemically derived descriptors for QSAR modelling of reductive dehalogenation of aromatic compounds

    NARCIS (Netherlands)

    Rorije E; Richter J; Peijnenburg WJGM; ECO; IHE Delft

    1994-01-01

    In this study, quantum-chemically derived parameters are developed for a limited number of halogenated aromatic compounds to model the anaerobic reductive dehalogenation reaction rate constants of these compounds. It is shown that due to the heterogeneity of the set of compounds used, no single

  10. A generalized one-factor term structure model and pricing of interest rate derivative securities

    NARCIS (Netherlands)

    Jiang, George J.

    1997-01-01

    The purpose of this paper is to propose a nonparametric interest rate term structure model and investigate its implications on term structure dynamics and prices of interest rate derivative securities. The nonparametric spot interest rate process is estimated from the observed short-term interest

  11. A new fractional derivative without singular kernel: Application to the modelling of the steady heat flow

    Directory of Open Access Journals (Sweden)

    Yang Xiao-Jun

    2016-01-01

    In this article we propose a new fractional derivative without singular kernel. We consider the potential application for modeling the steady heat-conduction problem. The analytical solution of the fractional-order heat flow is also obtained by means of the Laplace transform.
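
    Exponential-kernel fractional derivatives of the kind referred to above are commonly written in the following form (shown here as a hedged reconstruction; the normalization M(α) used in the article may differ):

        D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha} \int_0^{t} f'(\tau)\, \exp\!\left[-\frac{\alpha\,(t-\tau)}{1-\alpha}\right] \mathrm{d}\tau, \qquad 0 < \alpha < 1,

    where M(α) is a normalization function with M(0) = M(1) = 1. The exponential kernel is non-singular at τ = t, in contrast to the power-law kernel of the classical Caputo derivative.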

  12. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    The Marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite
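
    A minimal sketch of segment derivation and profiling with a finite mixture model follows, using scikit-learn's Gaussian mixture on synthetic respondent data and a demographic descriptor that is held out of the clustering.

        # Derive segments with a Gaussian mixture, then profile on a held-out descriptor.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        seg_a = rng.normal([2.0, 5.0], 0.5, size=(120, 2))   # e.g. price sensitivity, usage
        seg_b = rng.normal([5.0, 2.0], 0.5, size=(80, 2))
        X = np.vstack([seg_a, seg_b])
        age = np.r_[rng.normal(30, 5, 120), rng.normal(45, 5, 80)]  # demographic descriptor

        gm = GaussianMixture(n_components=2, random_state=0).fit(X)
        labels = gm.predict(X)
        for s in range(2):   # profile each derived segment on the descriptor
            print(f"segment {s}: n={np.sum(labels == s)}, mean age={age[labels == s].mean():.1f}")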

  13. Bone marrow-derived mesenchymal stem cells influence early tendon-healing in a rabbit achilles tendon model.

    Science.gov (United States)

    Chong, Alphonsus K S; Ang, Abel D; Goh, James C H; Hui, James H P; Lim, Aymeric Y T; Lee, Eng Hin; Lim, Beng Hai

    2007-01-01

    A repaired tendon needs to be protected for weeks until it has accrued enough strength to handle physiological loads. Tissue-engineering techniques have shown promise in the treatment of tendon and ligament defects. The present study tested the hypothesis that bone marrow-derived mesenchymal stem cells can accelerate tendon-healing after primary repair of a tendon injury in a rabbit model. Fifty-seven New Zealand White rabbits were used as the experimental animals, and seven others were used as the source of bone marrow-derived mesenchymal stem cells. The injury model was a sharp complete transection through the midsubstance of the Achilles tendon. The transected tendon was immediately repaired with use of a modified Kessler suture and a running epitendinous suture. Both limbs were used, and each side was randomized to receive either bone marrow-derived mesenchymal stem cells in a fibrin carrier or fibrin carrier alone (control). Postoperatively, the rabbits were not immobilized. Specimens were harvested at one, three, six, and twelve weeks for analysis, which included evaluation of gross morphology (sixty-two specimens), cell tracing (twelve specimens), histological assessment (forty specimens), immunohistochemistry studies (thirty specimens), morphometric analysis (forty specimens), and mechanical testing (sixty-two specimens). There were no differences between the two groups with regard to the gross morphology of the tendons. The fibrin had degraded by three weeks. Cell tracing showed that labeled bone marrow-derived mesenchymal stem cells remained viable and present in the intratendinous region for at least six weeks, becoming more diffuse at later time-periods. At three weeks, collagen fibers appeared more organized and there were better morphometric nuclear parameters in the treatment group (p < 0.05). These findings indicate that the addition of bone marrow-derived mesenchymal stem cells to tendon repair can improve histological and biomechanical parameters in the early stages of tendon-healing.

  14. Evolution of air pollution source contributions over one decade, derived by PM10 and PM2.5 source apportionment in two metropolitan urban areas in Greece

    Science.gov (United States)

    Diapouli, E.; Manousakas, M.; Vratolis, S.; Vasilatou, V.; Maggos, Th; Saraga, D.; Grigoratos, Th; Argyropoulos, G.; Voutsa, D.; Samara, C.; Eleftheriadis, K.

    2017-09-01

    Metropolitan urban areas in Greece have been known to suffer from poor air quality, due to a variety of emission sources and to topography and climatic conditions favouring the accumulation of pollution. While a number of control measures have been implemented since the 1990s, resulting in reductions of atmospheric pollution and changes in emission source contributions, the financial crisis which started in 2009 has significantly altered this picture. The present study is the first effort to assess the contribution of emission sources to PM10 and PM2.5 concentration levels and their long-term variability (over 5-10 years) in the two largest metropolitan urban areas in Greece (Athens and Thessaloniki). Intensive measurement campaigns were conducted during 2011-2012 at suburban, urban background and urban traffic sites in these two cities. In addition, available datasets from previous measurements in Athens and Thessaloniki were used in order to assess the long-term variability of concentrations and sources. Chemical composition analysis of the 2011-2012 samples showed that carbonaceous matter was the most abundant component in both PM size fractions. A significant increase of carbonaceous particle concentrations and of the OC/EC ratio during the cold period, especially at the residential urban background sites, pointed towards domestic heating, and more particularly wood (biomass) burning, as a significant source. PMF analysis further supported this finding. Biomass burning was the largest contributing source at the two urban background sites (with mean contributions for the two size fractions in the range of 24-46%). Secondary aerosol formation (sulphate, nitrate & organics) was also a major contributing source for both size fractions at the suburban and urban background sites. At the urban traffic site, vehicular traffic (exhaust and non-exhaust emissions) was the source with the highest contributions, accounting for 44% of PM10 and 37% of PM2.5, respectively. The long

  15. Modeling of the influence of transparency of the derivatives market on financial depth

    Directory of Open Access Journals (Sweden)

    Irina Burdenko

    2016-07-01

    The derivatives market has become an integral part of the financial market, performing functions peculiar to it: hedging, distribution of risks, ensuring liquidity of underlying assets, informational support of future price movements, and reduction of information asymmetry in financial markets. However, insufficient or absent transparency can lead to the emergence of crisis phenomena, shocks in the financial market and growth of systemic risk. The need to strengthen the informational function of the derivatives market by changing transparency requirements was brought about by the financial crisis of 2008-2009. In this article, autoregressive models are used to assess the influence of changes in transparency requirements, a qualitative characteristic of the derivatives market, on quantitative indicators of the financial market, in particular financial depth. The results demonstrate that legislative reforms strengthening transparency in the derivatives market positively influence the growth of financial depth. Research on this question promotes a better understanding of the importance of reforming derivatives market regulation, in particular strengthening transparency requirements. Recommendations for further research concern the need to introduce financial regulation reforms in the derivatives market in Ukraine and thus provide suitable conditions for its development.
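
    The kind of autoregressive check described can be sketched with statsmodels: regress a financial-depth series on its own lag plus a post-reform transparency dummy. The series, the reform date and the coefficients below are synthetic, chosen only to make the mechanics visible.

        # AR(1) with an exogenous post-reform dummy (synthetic data).
        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(4)
        T = 120
        reform = (np.arange(T) >= 60).astype(float)   # transparency reform at t = 60
        depth = np.empty(T)
        depth[0] = 1.0
        for t in range(1, T):                         # AR(1) with a level shift
            depth[t] = 0.1 + 0.8 * depth[t-1] + 0.3 * reform[t] + rng.normal(scale=0.1)

        res = AutoReg(depth, lags=1, exog=reform.reshape(-1, 1)).fit()
        print(res.params)   # a positive exog coefficient ~ reform raises financial depth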

  16. Interaction of Coxiella burnetii Strains of Different Sources and Genotypes with Bovine and Human Monocyte-Derived Macrophages

    Directory of Open Access Journals (Sweden)

    Katharina Sobotta

    2018-01-01

    Most human Q fever infections originate from small ruminants. By contrast, the highly prevalent shedding of Coxiella (C.) burnetii in bovine milk rarely results in human disease. We hypothesized that primary bovine and human monocyte-derived macrophages (MDM) represent a suitable in vitro model for the identification of strain-specific virulence properties at the cellular level. Twelve different C. burnetii strains were selected to represent different host species and multiple loci variable number of tandem repeat analysis (MLVA) genotypes. Infection efficiency and replication of C. burnetii were monitored by cell culture re-titration and qPCR. Expression of immunoregulatory factors after MDM infection was measured by qRT-PCR and flow cytometry. Invasion, replication and MDM response differed between C. burnetii strains but not between MDMs of the two hosts. Strains isolated from ruminants were less well internalized than isolates from humans and rodents. Internalization of MLVA group I strains was lower compared to other genogroups. Replication efficacy of C. burnetii in MDM ranged from low (MLVA group III) to high (MLVA group IV). Infected human and bovine MDM responded with a principal up-regulation of pro-inflammatory cytokines such as IL-1β, IL-12, and TNF-α. However, MLVA group IV strains induced a pronounced host response, whereas infection with group I strains resulted in a milder response. C. burnetii infection marginally affected polarization of MDM. Only one C. burnetii strain of MLVA group IV caused a substantial up-regulation of activation markers (CD40, CD80) on the surface of bovine and human MDM. The study showed that replication of C. burnetii in MDM and the subsequent host cell response are genotype-specific rather than being determined by the host species, pointing to a clear distinction in C. burnetii virulence between the genetic groups.

  17. Modeling the diurnal tide with dissipation derived from UARS/HRDI measurements

    Directory of Open Access Journals (Sweden)

    M. A. Geller

    1997-09-01

    This paper uses dissipation values derived from UARS/HRDI observations in a recently published diurnal-tide model. These model structures compare quite well with the UARS/HRDI observations with respect to the annual variation of the diurnal tidal amplitudes and the size of the amplitudes themselves. It is suggested that the annual variation of atmospheric dissipation in the mesosphere-lower thermosphere is a major controlling factor in determining the annual variation of the diurnal tide.

  18. Source apportionment of PM2.5 in North India using source-oriented air quality models

    International Nuclear Information System (INIS)

    Guo, Hao; Kota, Sri Harsha; Sahu, Shovan Kumar; Hu, Jianlin; Ying, Qi; Gao, Aifang; Zhang, Hongliang

    2017-01-01

    In recent years, severe pollution events were observed frequently in India, especially in its capital, New Delhi. However, limited studies have been conducted to understand the sources of high pollutant concentrations for designing effective control strategies. In this work, source-oriented versions of the Community Multi-scale Air Quality (CMAQ) model with the Emissions Database for Global Atmospheric Research (EDGAR) were applied to quantify the contributions of eight source types (energy, industry, residential, on-road, off-road, agriculture, open burning and dust) to fine particulate matter (PM2.5) and its components, including primary PM (PPM) and secondary inorganic aerosol (SIA), i.e. sulfate, nitrate and ammonium ions, in Delhi and three surrounding cities, Chandigarh, Lucknow and Jaipur, in 2015. PPM mass is dominated by industry and residential activities (>60%). Energy (∼39%) and industry (∼45%) sectors contribute significantly to PPM at south o