WorldWideScience

Sample records for ar-mog source model

  1. Modeling Frequency Comb Sources

    Directory of Open Access Journals (Sweden)

    Li Feng

    2016-06-01

    Full Text Available Frequency comb sources have revolutionized metrology and spectroscopy and found applications in many fields. Stable, low-cost, high-quality frequency comb sources are important to these applications. Modeling of frequency comb sources helps in understanding their operation mechanisms and in optimizing their design. In this paper, we review the theoretical models used and recent progress in the modeling of frequency comb sources.

  2. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.

  3. Acoustic emission source modeling

    Directory of Open Access Journals (Sweden)

    Hora P.

    2010-07-01

    Full Text Available The paper deals with acoustic emission (AE) source modeling by means of the FEM system COMSOL Multiphysics. The following types of sources are used: the spatially concentrated force and the double forces (dipole). The pulse excitation is studied in both cases. Steel is used as the material. The computed displacements are compared with the exact analytical solution for the point sources under consideration.

  4. Mesoscale, Sources and Models: Sources for Nitrogen in the Atmosphere

    DEFF Research Database (Denmark)

    Hertel, O.

    1994-01-01

    The project Mesoscale, Sources and Models: Sources for Nitrogen in the Atmosphere is divided into three subprojects: Sources - farmland, Sources - sea, and Sources - biogenic nitrogen.

  5. Cluster banding heat source model

    Institute of Scientific and Technical Information of China (English)

    Zhang Liguo; Ji Shude; Yang Jianguo; Fang Hongyuan; Li Yafan

    2006-01-01

    The concept of a cluster banding heat source model is put forward to address the problem of excessively many increment steps in the numerical simulation of large welding structures, and an expression for the cluster banding heat source model is deduced based on the energy conservation law. Because the deduced expression is suitable for arbitrary weld width, quantitative analysis of the welding stress field of large welding structures with regular welds can be performed quickly.

  6. Photovoltaic sources modeling

    CERN Document Server

    Petrone, Giovanni; Spagnuolo, Giovanni

    2016-01-01

    This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. By providing a thorough comparison among the models, it gives engineers all the elements needed to choose the right PV array model for specific applications or environmental conditions, matched with the model of the electronic circuit used to maximize the PV power production.

  7. Animal models of source memory.

    Science.gov (United States)

    Crystal, Jonathon D

    2016-01-01

    Source memory is the aspect of episodic memory that encodes the origin (i.e., source) of information acquired in the past. Episodic memory (i.e., our memories for unique personal past events) typically involves source memory because those memories focus on the origin of previous events. Source memory is at work when, for example, someone tells a favorite joke to a person while avoiding retelling the joke to the friend who originally shared the joke. Importantly, source memory permits differentiation of one episodic memory from another because source memory includes features that were present when the different memories were formed. This article reviews recent efforts to develop an animal model of source memory using rats. Experiments are reviewed which suggest that source memory is dissociated from other forms of memory. The review highlights strengths and weaknesses of a number of animal models of episodic memory. Animal models of source memory may be used to probe the biological bases of memory. Moreover, these models can be combined with genetic models of Alzheimer's disease to evaluate pharmacotherapies that ultimately have the potential to improve memory.

  8. Source model for blasting vibration

    Institute of Scientific and Technical Information of China (English)

    DING Hua (丁桦); ZHENG Zhemin (郑哲敏)

    2002-01-01

    By analyzing and comparing the experimental data, the point source moment theory and the cavity theory, it is concluded that the vibration signals away from the blasting explosive come mainly from the natural vibrations of the geological structures near the broken blasting area. The source impulses are spread not mainly by the inelastic properties of the medium in the propagation path (such as media damping, as believed to be the case by many researchers), but by this structure. An equivalent source model for the blasting vibrations of a fragmentation blast is then proposed, which shows the important role of the impulse of the source's time function under certain conditions. For the purpose of numerical simulation, the model is realized in FEM. The finite element results are in good agreement with the experimental data.

  9. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations that yield a single solution satisfying the data under a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models, presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters, due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
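
    As a toy illustration of the tempering idea (not the authors' TMCMC implementation, which runs many parallel chains with resampling), a single Metropolis chain sampling a tempered one-dimensional posterior might look like the following sketch; the target density, step size and all values are invented:

      import numpy as np

      # Toy posterior: Gaussian likelihood around theta = 2 with a flat prior.
      def log_post(theta):
          return -0.5 * ((theta - 2.0) / 0.5) ** 2

      def tempered_metropolis(beta, n_steps=5000, step=0.5, seed=0):
          """Sample from post^beta; beta ramps from ~0 to 1 across stages."""
          rng = np.random.default_rng(seed)
          theta, samples = 0.0, []
          for _ in range(n_steps):
              prop = theta + step * rng.normal()
              if np.log(rng.uniform()) < beta * (log_post(prop) - log_post(theta)):
                  theta = prop
              samples.append(theta)
          return np.array(samples)

      # Low beta explores broadly; beta = 1 concentrates on well-fitting models.
      for beta in (0.1, 0.5, 1.0):
          s = tempered_metropolis(beta)
          print(f"beta={beta:3.1f}  mean={s.mean():+.2f}  std={s.std():.2f}")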

  10. [Review of urban nonpoint source pollution models].

    Science.gov (United States)

    Wang, Long; Huang, Yue-Fei; Wang, Guang-Qian

    2010-10-01

    The development history of urban nonpoint source pollution models is reviewed. Features, applicability and limitations of seven popular urban nonpoint source pollution models (SWMM, STORM, SLAMM, HSPF, DR3M-QUAL, MOUSE, and HydroWorks) are discussed. The methodology and research findings on uncertainty in urban nonpoint source pollution modeling are presented. Analytical probabilistic models for estimation of urban nonpoint sources are also presented. The research achievements of urban nonpoint source pollution models in China are summarized. Shortcomings and gaps in current approaches to urban nonpoint source pollution modeling are pointed out. Improvements in the modeling of pollutant buildup and washoff, sediment and pollutant transport, and pollutant biochemical reactions are desired for those seven popular models. Most of the models developed by researchers in China are empirical models, so they can only be applied to specific small areas and have inadequate accuracy. Future approaches include improving capability in fate and transport simulation of sediments and pollutants, exploring methodologies for modeling urban nonpoint source pollution in regions with little data or incomplete information, developing stochastic models for urban nonpoint source pollution simulation, and applying GIS to facilitate urban nonpoint source pollution simulation.
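
    Several of the models named above (SWMM among them) represent pollutant buildup and washoff with exponential forms; a minimal sketch of that pair of equations, with all parameter values invented:

      import numpy as np

      def buildup(t_days, b_max=50.0, k_b=0.4):
          """Exponential pollutant buildup on the surface (kg/ha), SWMM-style."""
          return b_max * (1.0 - np.exp(-k_b * t_days))

      def washoff(b0, runoff_mm_per_h, duration_h, k_w=0.18):
          """Exponential washoff of the accumulated load during a storm."""
          return b0 * (1.0 - np.exp(-k_w * runoff_mm_per_h * duration_h))

      b = buildup(t_days=7.0)                  # load accumulated over a dry week
      w = washoff(b, runoff_mm_per_h=10, duration_h=2)
      print(f"buildup: {b:.1f} kg/ha, washed off: {w:.1f} kg/ha")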

  11. Photovoltaic sources modeling and emulation

    CERN Document Server

    Piazza, Maria Carmela Di

    2012-01-01

    This book offers an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters, which will aid in understanding and improving the design and setup of new PV plants.

  12. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than is possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  13. New Source Model for Chemical Explosions

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaoning [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-03-03

    With a sophisticated inversion scheme, we recover characteristics of SPE explosions, such as the corner frequency fc and seismic moment M0, which are used to develop a new source model for chemical explosions.

  14. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.

  15. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    ForSyDe is available under the open source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  16. Probabilistic forward model for electroencephalography source analysis

    Energy Technology Data Exchange (ETDEWEB)

    Plis, Sergey M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); George, John S [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Jun, Sung C [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Ranken, Doug M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Volegov, Petr L [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schmidt, David M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2007-09-07

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.
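
    The Cramér-Rao argument can be sketched generically: for a Gaussian-noise model y = f(theta) + noise, the covariance of any unbiased estimator is bounded below by sigma^2 (J^T J)^-1, with J the Jacobian of f. A minimal sketch; the actual EEG forward Jacobian over dipole and conductivity parameters is replaced here by a random stand-in:

      import numpy as np

      def cramer_rao_bound(jacobian, noise_std):
          """Lower bound on the covariance of unbiased parameter estimates
          for a Gaussian-noise model: cov >= sigma^2 * (J^T J)^-1."""
          return noise_std**2 * np.linalg.inv(jacobian.T @ jacobian)

      # Hypothetical Jacobian: 32 sensors, 4 parameters (e.g. dipole + conductivity).
      rng = np.random.default_rng(0)
      J = rng.standard_normal((32, 4))
      bound = cramer_rao_bound(J, noise_std=0.1)
      print("per-parameter std lower bounds:", np.round(np.sqrt(np.diag(bound)), 4))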

  17. Modeling huge sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modeling point sources, line sources, and surface sources is presented. Line and surface sources are modeled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces of the room. Point sources are modeled using a hybrid calculation method combining this ray-tracing method with image source modeling. With these three source types it is possible to model huge and complex sound sources in industrial environments. Compared to a calculation with only point sources, the use of extended sound sources is shown to improve the agreement...
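
    The image-source part of such a hybrid method can be sketched for a rectangular (shoebox) room by mirroring the source in each wall and summing delayed, distance-attenuated arrivals; the geometry below is invented and only first-order reflections are generated:

      import numpy as np

      c = 343.0                                    # speed of sound (m/s)
      room = np.array([6.0, 4.0, 3.0])             # room dimensions (m)
      src = np.array([2.0, 1.5, 1.2])
      rcv = np.array([4.0, 2.5, 1.6])

      images = [src]
      for axis in range(3):
          for wall in (0.0, room[axis]):
              img = src.copy()
              img[axis] = 2 * wall - src[axis]     # mirror across the wall plane
              images.append(img)

      for img in images:                           # direct sound + 6 reflections
          d = np.linalg.norm(img - rcv)
          print(f"arrival {d / c * 1e3:6.2f} ms, amplitude {1.0 / d:.3f}")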

  18. Analytical models of volcanic ellipsoidal expansion sources

    Directory of Open Access Journals (Sweden)

    Antonella Amoruso

    2013-11-01

    Full Text Available Modeling non-double-couple earthquakes and surficial deformation in volcanic and geothermal areas usually involves expansion sources. Given an ensemble of ellipsoidal or tensile expansion sources and double-couple ones, it is straightforward to obtain the equivalent single moment tensor under the far-field approximation. On the contrary, the moment tensor interpretation is by no means unique or unambiguous. If the far-field approximation is unsatisfied, the single moment tensor representation is inappropriate. Here we focus on the volume change estimate in the case of single sources, in particular finite pressurized ellipsoidal sources, presenting the expressions for the computation of the volume change and surficial displacement in a closed analytical form. We discuss the implications of different domains of the moment-tensor eigenvalue ratios in terms of volume change computation. We also discuss how the volume change of each source can be obtained from the isotropic component of the total moment tensor, in a few cases of coupled sources where the total volume change is null. The new expressions for the computation of the volume change and surficial displacement in case of finite pressurized ellipsoidal sources should make their use easier with respect to the already published formulations.

  19. Modeling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  1. Source coding model for repeated snapshot imaging

    CERN Document Server

    Li, Junhui; Yang, Dongyue; Wu, Guohua; Yin, Longfei; Guo, Hong

    2016-01-01

    Imaging based on successive repeated snapshot measurements is modeled as a source coding process in information theory. The number of measurements necessary to maintain a certain level of error rate is described by the rate-distortion function of the source coding. A quantitative formula for the relation between error rate and measurement number is derived, based on the information capacity of the imaging system. A second-order fluctuation correlation imaging (SFCI) experiment with pseudo-thermal light verifies this formula, which paves the way for introducing information theory into the study of ghost imaging (GI), both conventional and computational.
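
    As a reference point for the rate-distortion view taken above, the classical rate-distortion function of a Bernoulli source under Hamming distortion (not the paper's imaging-specific formula) can be evaluated directly:

      import numpy as np

      def h2(p):
          """Binary entropy in bits."""
          if p in (0.0, 1.0):
              return 0.0
          return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

      def rate_distortion_bernoulli(p, d):
          """R(D) = H(p) - H(D) for a Bernoulli(p) source, Hamming distortion."""
          if d >= min(p, 1 - p):
              return 0.0
          return h2(p) - h2(d)

      # Rate (bits/sample) needed to keep the error rate at D for a
      # Bernoulli(0.5) binary image -- the flavor of trade-off the paper derives.
      for d in (0.01, 0.05, 0.1):
          print(f"D={d:.2f}  R={rate_distortion_bernoulli(0.5, d):.3f} bits")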

  2. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Full Text Available Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  3. Cosmogenic photons strongly constrain UHECR source models

    CERN Document Server

    van Vliet, Arjen

    2016-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  4. Cosmogenic photons strongly constrain UHECR source models

    Science.gov (United States)

    van Vliet, Arjen

    2017-03-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  5. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  6. Modelling of skin exposure from distributed sources

    DEFF Research Database (Denmark)

    Fogh, C.L.; Andersson, Kasper Grann

    2000-01-01

    A simple model of indoor air pollution concentrations was used together with experimental results on deposition velocities to skin to calculate the skin dose from an outdoor plume of contaminants. The primary pathway was considered to be direct deposition to the skin from a homogeneously distributed air source. The model has been used to show that skin deposition was a significant dose contributor, for example when compared to inhalation dose. (C) 2000 British Occupational Hygiene Society. Published by Elsevier Science Ltd. All rights reserved.

  7. Light source modeling for automotive lighting devices

    Science.gov (United States)

    Zerhau-Dreihoefer, Harald; Haack, Uwe; Weber, Thomas; Wendt, Dierk

    2002-08-01

    Automotive lighting devices generally have to meet high standards. For example, to avoid discomfort glare for the oncoming traffic, luminous intensities of a low beam headlight must decrease by more than one order of magnitude within a fraction of a degree along the horizontal cutoff-line. At the same time, a comfortable homogeneous illumination of the road requires slowly varying luminous intensities below the cutoff line. All this has to be realized taking into account both the legal requirements and the customer's stylistic specifications. In order to be able to simulate and optimize devices with a good optical performance, different light source models are required. In the early stage of e.g. reflector development, simple unstructured models allow a very fast development of the reflector shape. On the other hand, the final simulation of a complex headlamp or signal light requires a sophisticated model of the spectral luminance. In addition to theoretical models based on the light source's geometry, measured luminance data can also be used in the simulation and optimization process.

  8. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  9. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Full Text Available Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energy attract more and more attention. The present paper intends to present different mathematical models related to different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in geographical and meteorological conditions specific to the central part of the Transylvania region is also presented. The conclusions based on the validation of such models are also shown.
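
    As a flavor of the model types such a study surveys, a textbook wind-turbine power curve and a simplified single-diode PV characteristic are sketched below; every parameter value is invented:

      import numpy as np

      def wind_power(v, rho=1.225, radius=40.0, cp=0.45,
                     v_cut_in=3.0, v_cut_out=25.0, p_rated=2.0e6):
          """Simple wind turbine power curve: P = 0.5*rho*A*Cp*v^3, capped at rated."""
          a = np.pi * radius ** 2
          if v < v_cut_in or v > v_cut_out:
              return 0.0
          return min(0.5 * rho * a * cp * v ** 3, p_rated)

      def pv_current(v, i_ph=8.0, i_0=1e-9, n=1.3, t_k=298.0, n_cells=60):
          """Single-diode PV model (series/shunt resistances neglected)."""
          k, q = 1.380649e-23, 1.602176634e-19
          v_t = n_cells * n * k * t_k / q          # thermal voltage of the string
          return i_ph - i_0 * (np.exp(v / v_t) - 1.0)

      print(f"wind @ 9 m/s : {wind_power(9.0)/1e6:.2f} MW")
      print(f"PV   @ 30 V  : {pv_current(30.0):.2f} A")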

  10. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (author)

  11. The Open Source Snowpack modelling ecosystem

    Science.gov (United States)

    Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael

    2016-04-01

    As a large number of numerical snow models are available, a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model that is developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to both offer seamless integration and high modularity so each tool can easily be used outside the ecosystem. Many of these Open Source tools experience their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, that forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure, based on the assumption that in steep terrain the most relevant information is

  12. Asteroid Models from Multiple Data Sources

    CERN Document Server

    Durech, J; Delbo, M; Kaasalainen, M; Viikinkoski, M

    2015-01-01

    In the past decade, hundreds of asteroid shape models have been derived using the lightcurve inversion method. At the same time, a new framework of 3-D shape modeling based on the combined analysis of widely different data sources such as optical lightcurves, disk-resolved images, stellar occultation timings, mid-infrared thermal radiometry, optical interferometry, and radar delay-Doppler data has been developed. This multi-data approach allows the determination of most of the physical and surface properties of asteroids in a single, coherent inversion, with spectacular results. We review the main results of asteroid lightcurve inversion and also recent advances in multi-data modeling. We show that models based on remote sensing data were confirmed by spacecraft encounters with asteroids, and we discuss how the multiplication of highly detailed 3-D models will help to refine our general knowledge of the asteroid population. The physical and surface properties of asteroids, i.e., their spin, 3-D shape, density...

  13. Modelling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  14. SOFOMORE: Combined EEG source and forward model reconstruction

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    We propose a new EEG source localization method that simultaneously performs source and forward model reconstruction (SOFOMORE) in a hierarchical Bayesian framework. Reconstruction of the forward model is motivated by the many uncertainties involved in the forward model, including the representation of the cortical surface, conductivity distribution, and electrode positions. We demonstrate in both simulated and real EEG data that reconstruction of the forward model improves localization of the underlying sources.

  15. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under different output service schemes.

  16. An open source business model for malaria.

    Science.gov (United States)

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  17. An open source business model for malaria.

    Directory of Open Access Journals (Sweden)

    Christine Årdal

    Full Text Available Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related

  18. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    Full Text Available This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real network testing. The proposed model is derived from recorded traffic sources that are analysed and statistically processed. As the results show, when used in a simulated network the proposed model produces network traffic parameters very similar to those of the recorded traffic source.

  19. Constraining Emission Models of Luminous Blazar Sources

    Energy Technology Data Exchange (ETDEWEB)

    Sikora, Marek; /Warsaw, Copernicus Astron. Ctr.; Stawarz, Lukasz; /Kipac, Menlo Park /Jagiellonian U., Astron. Observ. /SLAC; Moderski, Rafal; Nalewajko, Krzysztof; /Warsaw, Copernicus Astron. Ctr.; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy (γ-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, and those in turn seem to challenge hadronic scenarios for the high energy blazar emission. As shown in this paper, no such problems are faced by models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of γ-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars to be days/weeks, consistent with the variability patterns observed in such systems at infrared, optical and γ-ray frequencies. We also propose that the parsec-scale blazar activity can be occasionally accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  1. Cosine-Gaussian Schell-model sources.

    Science.gov (United States)

    Mei, Zhangrong; Korotkova, Olga

    2013-07-15

    We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.
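
    A sketch of a cosine-modulated Gaussian Schell-model degree of coherence as a function of the transverse separation; the functional form below is illustrative, and the paper's exact normalization of the cosine argument may differ:

      import numpy as np

      def cgsm_degree_of_coherence(rho_d, delta=1.0, n=1):
          """Cosine-Gaussian Schell-model degree of coherence (illustrative form):
          a cosine modulation on top of the usual Gaussian Schell-model term."""
          return (np.cos(2.0 * np.pi * n * rho_d / delta)
                  * np.exp(-rho_d**2 / (2.0 * delta**2)))

      rho_d = np.linspace(0.0, 3.0, 7)   # separation in units of the coherence width
      print(np.round(cgsm_degree_of_coherence(rho_d), 3))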

  2. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling, with its computational efficiency, but also tries to emulate the physics of the source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of the earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
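
    The core mechanism, drawing random source-parameter fields that honor target 1-point statistics (mean, standard deviation) and 2-point statistics (auto-correlation), can be sketched with a Cholesky factor of the target covariance; the exponential correlation model and all values below are assumptions:

      import numpy as np

      def correlated_slip(n, mean=1.0, std=0.5, corr_len=5.0, seed=1):
          """Draw a 1-D slip distribution whose 2-point statistics follow an
          exponential auto-correlation with correlation length corr_len."""
          rng = np.random.default_rng(seed)
          x = np.arange(n, dtype=float)
          # Covariance built from the target auto-correlation (2-point statistics).
          cov = std**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
          chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
          return mean + chol @ rng.standard_normal(n)   # 1-point stats: mean/std

      slip = correlated_slip(100)
      print(f"mean={slip.mean():.2f}, std={slip.std():.2f}")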

  3. The IMAGINE source model for railway noise prediction

    NARCIS (Netherlands)

    Dittrich, M.G.

    2007-01-01

    The IMAGINE railway traffic noise source model is described, which is a further elaboration and completion of the Harmonoise model. Within the EU project Harmonoise, a model was proposed including most of the main railway noise sources. In the IMAGINE project, a complete formulation was put forward

  4. Computational model of Amersham I-125 source model 6711 and Prospera Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in LDR prostate brachytherapy treatment planning. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The results obtained were 0.941 and 0.65 for the dose rate constants of the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)
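
    The TG-43 formalism invoked above expresses the dose rate around the source through, among other factors, a line-source geometry function G_L(r, theta); a minimal sketch, with the active length L an assumed parameter:

      import numpy as np

      def geometry_function_line(r, theta, L=0.3):
          """TG-43 line-source geometry function G_L(r, theta) in cm^-2;
          r in cm, theta in radians from the source long axis."""
          z, y = r * np.cos(theta), r * np.sin(theta)
          if np.isclose(y, 0.0):                        # point on the long axis
              return 1.0 / (r**2 - L**2 / 4.0)
          beta = np.arctan2(z + L / 2.0, y) - np.arctan2(z - L / 2.0, y)
          return beta / (L * y)                         # beta: angle subtended by the source

      # At the TG-43 reference point (r0 = 1 cm, theta0 = 90 degrees):
      print(f"G_L(1 cm, 90 deg) = {geometry_function_line(1.0, np.pi / 2.0):.4f} cm^-2")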

  5. A Simple Double-Source Model for Interference of Capillaries

    Science.gov (United States)

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…
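
    At its core, a double-source description predicts two-beam fringes on the screen; a minimal two-source interference sketch with an invented geometry (the capillary model's virtual-source locations, which shift with observation position, are not reproduced here):

      import numpy as np

      lam, d, D = 633e-9, 50e-6, 0.5           # wavelength, source spacing, screen distance
      x = np.linspace(-5e-3, 5e-3, 5)          # positions on the screen (m)
      delta = d * x / D                        # path difference, small-angle approximation
      intensity = 4.0 * np.cos(np.pi * delta / lam) ** 2   # normalized two-beam fringes
      print(np.round(intensity, 2))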

  6. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many sources...
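
    In the fully determined two-source, one-isotope case, the mixing model inverts in closed form; a sketch with invented delta-values:

      def two_source_mixing(delta_mix, delta_1, delta_2):
          """Fraction of source 1 in a two-source, one-isotope mixing model:
          delta_mix = f1*delta_1 + (1 - f1)*delta_2."""
          f1 = (delta_mix - delta_2) / (delta_1 - delta_2)
          return f1, 1.0 - f1

      # Example: d15N of a mixture between two hypothetical end-members.
      f1, f2 = two_source_mixing(delta_mix=6.0, delta_1=9.0, delta_2=4.0)
      print(f"source 1: {f1:.0%}, source 2: {f2:.0%}")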

  7. Validation of a rodent model of source memory.

    Science.gov (United States)

    Crystal, Jonathon D; Alford, Wesley T

    2014-03-01

    Source memory represents the origin (source) of information. Recently, we proposed that rats (Rattus norvegicus) remember the source of information. However, an alternative to source memory is the possibility that rats selectively encoded some, but not all, information rather than retrieving an episodic memory. We directly tested this 'encoding failure' hypothesis. Here, we show that rats remember the source of information, under conditions that cannot be attributed to encoding failure. Moreover, source memory lasted at least seven days but was no longer present 14 days after studying. Our findings suggest that long-lasting source memory may be modelled in non-humans. Our model should facilitate attempts to elucidate the biological underpinnings of source memory impairments in human memory disorders such as Alzheimer's disease.

  8. Receptor modeling application framework for particle source apportionment.

    Science.gov (United States)

    Watson, John G; Zhu, Tan; Chow, Judith C; Engelbrecht, Johann; Fujita, Eric M; Wilson, William E

    2002-12-01

    Receptor models infer contributions from particulate matter (PM) source types using multivariate measurements of particle chemical and physical properties. Receptor models complement source models that estimate concentrations from emissions inventories and transport meteorology. Enrichment factor, chemical mass balance, multiple linear regression, eigenvector, edge detection, neural network, aerosol evolution, and aerosol equilibrium models have all been used to solve particulate air quality problems, and more than 500 citations of their theory and application document these uses. While elements, ions, and carbons were often used to apportion TSP, PM10, and PM2.5 among many source types, many of these components have been so reduced in source emissions that more complex measurements of carbon fractions, specific organic compounds, single particle characteristics, and isotopic abundances now need to be made in source and receptor samples. Compliance monitoring networks are not usually designed to obtain data for the observables, locations, and time periods that allow receptor models to be applied. Measurements from existing networks can be used to form conceptual models that allow the needed monitoring network to be optimized. The framework for using receptor models to solve air quality problems consists of: (1) formulating a conceptual model; (2) identifying potential sources; (3) characterizing source emissions; (4) obtaining and analyzing ambient PM samples for major components and source markers; (5) confirming source types with multivariate receptor models; (6) quantifying source contributions with the chemical mass balance; (7) estimating profile changes and the limiting precursor gases for secondary aerosols; and (8) reconciling receptor modeling results with source models, emissions inventories, and receptor data analyses.
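
    Step 6, quantifying source contributions with the chemical mass balance, reduces to solving C = F·s for the contributions s; the sketch below uses ordinary least squares in place of the effective-variance weighted solution used in practice, and all numbers are invented:

      import numpy as np

      # Columns: source profiles (species fraction per unit mass contribution),
      # e.g. vehicle exhaust, soil dust, wood smoke.  Values are made up.
      F = np.array([[0.20, 0.02, 0.05],   # elemental carbon
                    [0.05, 0.30, 0.02],   # silicon
                    [0.10, 0.01, 0.25]])  # potassium

      c = np.array([4.0, 6.5, 5.0])       # ambient concentrations (ug/m^3)

      # Chemical mass balance: solve C = F @ s for the source contributions s.
      s = np.linalg.lstsq(F, c, rcond=None)[0]
      print("source contributions (ug/m^3):", np.round(s, 2))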

  9. Modeling a Common-Source Amplifier Using a Ferroelectric Transistor

    Science.gov (United States)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2010-01-01

    This paper presents a mathematical model characterizing the behavior of a common-source amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the common-source amplifier is the most widely used amplifier in MOS technology, understanding and modeling the behavior of the FeFET-based common-source amplifier will help in the integration of FeFETs into many circuits.

  10. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  11. Blind source separation based on generalized Gaussian model

    Institute of Scientific and Technical Information of China (English)

    YANG Bin; KONG Wei; ZHOU Yue

    2007-01-01

    Since in most blind source separation (BSS) algorithms the estimated probability density functions (pdf) of the sources are fixed or can only switch between one super-Gaussian and one sub-Gaussian model, they may not be efficient at separating sources with different distributions. To solve the problem of pdf mismatch and the separation of hybrid mixtures in BSS, the generalized Gaussian model (GGM) is introduced to model the pdf of the sources, since it provides a general structure for univariate distributions. Its great advantage is that only one parameter needs to be determined in modeling the pdf of different sources, so it is less complex than a Gaussian mixture model. By using the maximum likelihood (ML) approach, the convergence of the proposed algorithm is improved. Computer simulations show that it is more efficient and valid than conventional methods with fixed pdf estimation.
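
    A sketch of the generalized Gaussian density, in which a single shape parameter spans Laplacian, Gaussian, and flatter-than-Gaussian sources; this one-parameter flexibility is the advantage the abstract appeals to:

      import numpy as np
      from scipy.special import gamma

      def ggm_pdf(x, alpha=1.0, beta=2.0):
          """Generalized Gaussian density: beta=1 Laplacian, beta=2 Gaussian,
          beta>2 flatter than Gaussian (sub-), beta<2 peakier (super-)."""
          c = beta / (2.0 * alpha * gamma(1.0 / beta))
          return c * np.exp(-(np.abs(x) / alpha) ** beta)

      x = np.linspace(-3, 3, 7)
      for beta in (1.0, 2.0, 4.0):
          print(f"beta={beta}:", np.round(ggm_pdf(x, beta=beta), 3))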

  12. Source Term Model for an Array of Vortex Generator Vanes

    Science.gov (United States)

    Buning, P. G. (Technical Monitor); Waithe, Kenrick A.

    2003-01-01

    A source term model was developed for numerical simulations of an array of vortex generators. The source term models the side force created by the vortex generator being modeled. It is obtained by introducing into the momentum and energy equations a side force that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane, which is only a fraction of the boundary layer thickness, over a flat plate. The source term model allowed a grid reduction of about seventy percent when compared with numerical simulations performed on a fully gridded vortex generator, without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the streamwise vorticity and velocity contours very well when compared with both numerical simulations and experimental data.

  13. Modeling and Mapping of Human Source Data

    Science.gov (United States)

    2011-03-08

    The approach to developing a functional framework (and identifying relevant models and algorithms) is to follow the Joint Directors of Laboratories (JDL) data fusion process model ([32], [18]) and explore an analog between traditional fusion processing (at the JDL level 0 and level 1 sub-processes) for physical sensors and the processing of human source data. A diagrammatic summary of the analysis is shown in Figures 5 and 6.

  14. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  16. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...... of variable switching frequency as well as robustness of transient response can be obtained at the same time with a formulated Z-source network model. Steady-state and transient-state operating simulations of MPC are presented, which show the good reference-tracking ability of this control method....

  17. MODEL OF LASER-TIG HYBRID WELDING HEAT SOURCE

    Institute of Scientific and Technical Information of China (English)

    Chen Yanbin; Li Liqun; Feng Xiaosong; Fang Junfei

    2004-01-01

    The welding mechanism of the laser-TIG hybrid welding process is analyzed. With the variation of arc current, the welding process is divided into two patterns: deep-penetration welding and heat conductive welding. The heat flow model of hybrid welding is presented. For deep-penetration welding, the heat source includes a surface heat flux and a volume heat flux. The heat source of heat conductive welding is composed of two Gaussian-distributed surface heat sources. With this heat source model, a temperature field is calculated. The finite element code MARC is employed for this purpose. The calculation results show good agreement with the experimental data.
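
    For reference, a Gaussian-distributed surface heat flux of the kind the model uses is commonly written as follows, with Q the absorbed power and r_0 the effective heating radius; the paper's exact parameter values are not reproduced here:

      % Axisymmetric Gaussian surface heat flux (standard form); the prefactor
      % normalizes the flux so that it integrates to the absorbed power Q.
      q(r) = \frac{3Q}{\pi r_0^{2}} \exp\!\left(-\frac{3 r^{2}}{r_0^{2}}\right)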

  18. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    Alarcón, M.; M. Àvila; Belmonte, J.; Stefanescu, C.; Izquierdo, R.

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories and, together with other techniques, the interpretation of transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  19. Complex source rate estimation for atmospheric transport and dispersion models

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, L.L.

    1993-09-13

    The accuracy associated with assessing the environmental consequences of an accidental atmospheric release of radioactivity is highly dependent on our knowledge of the source release rate which is generally poorly known. This paper reports on a technique that integrates the radiological measurements with atmospheric dispersion modeling for more accurate source term estimation. We construct a minimum least squares methodology for solving the inverse problem with no a priori information about the source rate.
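
    A minimal sketch of the inversion idea, assuming the dispersion model has been condensed into a transfer matrix A that maps release rates in each time interval to concentrations at each detector; all values below are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      n_detectors, n_intervals = 12, 4
      # A[i, j]: concentration at detector i per unit release in interval j,
      # as produced by the atmospheric transport and dispersion model.
      A = rng.uniform(0.0, 1.0, size=(n_detectors, n_intervals))
      q_true = np.array([5.0, 20.0, 10.0, 1.0])        # unknown in practice
      c = A @ q_true + rng.normal(0.0, 0.5, n_detectors)  # measurements

      # Minimum least squares solution with no a priori source information.
      q_est, *_ = np.linalg.lstsq(A, c, rcond=None)
      print("estimated release rates:", q_est)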

  20. On source models for (192)Ir HDR brachytherapy dosimetry using model based algorithms.

    Science.gov (United States)

    Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis

    2016-06-07

    A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic (192)Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over  ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the (192)Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model shows agreement better than 2% except for points lying close to the source longitudinal axis.
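
    A minimal sketch of the purely analytical source model proposed above: photons are generated homogeneously in the cylindrical core, emitted isotropically, and weighted by attenuation along the exit path. The core radius, the attenuation coefficient, the neglect of the end caps, and the omitted (192)Ir energy sampling are illustrative simplifications.

      import numpy as np

      rng = np.random.default_rng(1)
      L, R = 0.35, 0.03  # core length (from the abstract) and assumed radius, cm
      MU = 4.0           # assumed linear attenuation coefficient of the core, 1/cm

      def sample_photon():
          # Homogeneous position inside the cylindrical core.
          z = rng.uniform(-L / 2, L / 2)
          r = R * np.sqrt(rng.uniform())
          phi = rng.uniform(0.0, 2.0 * np.pi)
          pos = np.array([r * np.cos(phi), r * np.sin(phi), z])
          # Isotropic emission direction.
          cos_t = rng.uniform(-1.0, 1.0)
          sin_t = np.sqrt(1.0 - cos_t**2)
          psi = rng.uniform(0.0, 2.0 * np.pi)
          d = np.array([sin_t * np.cos(psi), sin_t * np.sin(psi), cos_t])
          # Chord length to the cylindrical surface (end caps ignored for
          # brevity) and the attenuation weight assigned to the photon.
          a = d[0]**2 + d[1]**2
          if a > 1e-12:
              b = pos[0] * d[0] + pos[1] * d[1]
              c = pos[0]**2 + pos[1]**2 - R**2
              path = (-b + np.sqrt(b * b - a * c)) / a
          else:
              path = L / 2 - z if d[2] > 0 else z + L / 2
          return pos, d, np.exp(-MU * path)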

  1. EnergyPlus Air Source Integrated Heat Pump Model

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Energy and Transportation Science Division; Adams, Mark B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Energy and Transportation Science Division; New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Energy and Transportation Science Division

    2016-03-30

    This report summarizes the development of the EnergyPlus air-source integrated heat pump model. It introduces its physics, sub-models, working modes, and control logic. In addition, inputs and outputs of the new model are described, and input data file (IDF) examples are given.

  2. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  3. Alternative modeling methods for plasma-based Rf ion sources

    Energy Technology Data Exchange (ETDEWEB)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com; Beckwith, Kristian R. C., E-mail: beckwith@txcorp.com [Tech-X Corporation, Boulder, Colorado 80303 (United States)

    2016-02-15

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two

  4. Alternative modeling methods for plasma-based Rf ion sources

    Science.gov (United States)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models
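
    As a toy illustration of the two-temperature behaviour referred to above, electron and ion temperatures in a two-temperature model relax toward each other at collisional energy-exchange rates; the rates, initial temperatures, and time step below are placeholders, not SNS source parameters.

      # Explicit relaxation of electron and ion temperatures toward equilibrium:
      # dTe/dt = -nu_e (Te - Ti),  dTi/dt = +nu_i (Te - Ti).
      nu_e, nu_i = 2.0e5, 1.0e5   # assumed energy-exchange rates, 1/s
      Te, Ti = 5.0, 0.5           # assumed initial temperatures, eV
      dt, steps = 1.0e-7, 200     # explicit time step (nu*dt << 1), s
      for _ in range(steps):
          dT = Te - Ti
          Te -= nu_e * dT * dt
          Ti += nu_i * dT * dt
      print(f"Te = {Te:.3f} eV, Ti = {Ti:.3f} eV after {steps * dt:.1e} s")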

  5. Source detection in astronomical images by Bayesian model comparison

    Science.gov (United States)

    Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher

    2014-12-01

    The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
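
    A hedged sketch of the underlying comparison: for each image region, the marginal likelihood under a zero-mean Gaussian background is compared with that under a deliberately unconstrained source model whose mean carries a very broad prior. With a conjugate normal prior the marginal is available in closed form (via Sherman-Morrison); the noise level and prior width are illustrative, not the paper's hierarchical prior.

      import numpy as np

      def log_bayes_factor(region, sigma=1.0, tau=100.0):
          """log p(region | source) - log p(region | background)."""
          x = np.ravel(region).astype(float)
          n = x.size
          # Background: x_i ~ N(0, sigma^2), independent.
          log_bg = (-0.5 * np.sum(x**2) / sigma**2
                    - 0.5 * n * np.log(2 * np.pi * sigma**2))
          # Source: x_i ~ N(m, sigma^2), broad prior m ~ N(0, tau^2); the
          # marginal covariance is sigma^2 I + tau^2 11^T, handled in closed
          # form with the Sherman-Morrison identity.
          s2, t2 = sigma**2, tau**2
          quad = (np.sum(x**2) / s2
                  - (t2 / s2) * np.sum(x)**2 / (s2 + n * t2))
          logdet = n * np.log(s2) + np.log1p(n * t2 / s2)
          log_src = -0.5 * (quad + logdet + n * np.log(2 * np.pi))
          return log_src - log_bg

      # Large positive values flag regions "suspiciously different" from
      # the background, regardless of whether they are simply bright.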

  6. A localization model to localize multiple sources using Bayesian inference

    Science.gov (United States)

    Dunham, Joshua Rolv

    Accurate localization of a sound source in a room setting is important in both psychoacoustics and architectural acoustics. Binaural models have been proposed to explain how the brain processes and utilizes the interaural time differences (ITDs) and interaural level differences (ILDs) of sound waves arriving at the ears of a listener in determining source location. Recent work shows that applying Bayesian methods to this problem is proving fruitful. In this thesis, pink noise samples are convolved with head-related transfer functions (HRTFs) and compared to combinations of one and two anechoic speech signals convolved with different HRTFs or binaural room impulse responses (BRIRs) to simulate room positions. Through exhaustive calculation of Bayesian posterior probabilities and using a maximum-likelihood approach, model selection determines the number of sources present, and parameter estimation yields the azimuthal direction of the source(s).

  7. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole;

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and electrode positions. We first present a hierarchical Bayesian framework for EEG source localization that jointly performs source and forward model reconstruction (SOFOMORE). Secondly, we evaluate the SOFOMORE model by comparison with source reconstruction methods that use fixed forward models. Simulated and real EEG data demonstrate that invoking a stochastic forward model leads to improved source estimates.

  8. PHARAO Laser Source Flight Model: Design and Performances

    CERN Document Server

    Lévèque, Thomas; Esnault, François-Xavier; Delaroche, Christophe; Massonnet, Didier; Grosjean, Olivier; Buffe, Fabrice; Torresi, Patrizia; Bomer, Thierry; Pichon, Alexandre; Béraud, Pascal; Lelay, Jean-Pierre; Thomin, Stéphane; Laurent, Philippe

    2015-01-01

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  9. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  10. Using cryptology models for protecting PHP source code

    Science.gov (United States)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying, and modification is a big issue today. Existing solutions at the source-code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source proprietary extension of the PHP interpreter. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from the use of standard cryptology models to analyze the strengths and weaknesses of the existing solutions, with script protection viewed as a secure communication channel in cryptology.

  11. Application of source-receptor models to determine source areas of biological components (pollen and butterflies

    Directory of Open Access Journals (Sweden)

    M. Alarcón

    2010-01-01

    Full Text Available The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories and, together with other techniques, the interpretation of transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured at a sampling point, and thus be able to target actions to reduce pollutants. However, until now, few studies have applied these types of models to describe the source areas of biological organisms. In Catalonia there are very complete records of pollen (data from the Xarxa Aerobiològica de Catalunya, Aerobiology Network of Catalonia) and butterflies (data from the Catalan Butterfly Monitoring Scheme), biological material that is also liable to be transported long distances and whose areas of origin could be interesting to know. This work presents the results of the use of the Seibert et al. model applied to the study of the source regions of: (1) certain pollen of an allergic nature, observed in Catalonia and the Canary Islands, and (2) the migratory butterfly Vanessa cardui, observed in Catalonia. Based on the results obtained, we can corroborate the suitability of these models to determine the area of origin of several species, both chemical and biological, thereby expanding the possibilities of applying the original model to the wider field of Aerobiology.
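
    A minimal sketch of the trajectory statistics behind such source-receptor models (in the spirit of Seibert et al.): each back-trajectory is tagged with the concentration observed at the receptor, and the concentration-weighted mean over all trajectory points falling in a grid cell flags probable source areas. The grid, trajectories, and concentrations are placeholders.

      import numpy as np

      def source_field(trajectories, concentrations, lon_edges, lat_edges):
          """trajectories: list of (n_i, 2) arrays of (lon, lat) points;
          concentrations: receptor value tagged to each trajectory."""
          weighted = np.zeros((len(lon_edges) - 1, len(lat_edges) - 1))
          counts = np.zeros_like(weighted)
          for traj, conc in zip(trajectories, concentrations):
              # Residence counts of this trajectory in each grid cell.
              h, _, _ = np.histogram2d(traj[:, 0], traj[:, 1],
                                       bins=[lon_edges, lat_edges])
              weighted += conc * h
              counts += h
          with np.errstate(invalid="ignore"):
              # High values mark cells associated with high receptor values,
              # i.e. probable source areas (NaN where no trajectory passed).
              return weighted / counts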

  12. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Full Text Available Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the ‘digital future’ and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ’s online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  13. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, and very few studies have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality are such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  14. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  15. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
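
    A minimal sketch of the kind of error-source study such a model and simulator support: the ideal product y = Ax is perturbed by multiplicative component gain errors and additive detector noise, and the resulting output error is estimated by Monte Carlo. The error magnitudes are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.uniform(0, 1, size=(8, 8))
      x = rng.uniform(0, 1, size=8)
      y_ideal = A @ x

      def noisy_olap(A, x, gain_err=0.02, det_noise=0.01, trials=10_000):
          """Mean relative output error under assumed component errors."""
          errs = []
          for _ in range(trials):
              # Multiplicative gain error on every analog matrix element,
              # additive noise on every detector channel.
              A_err = A * (1.0 + gain_err * rng.standard_normal(A.shape))
              y = A_err @ x + det_noise * rng.standard_normal(A.shape[0])
              errs.append(np.linalg.norm(y - y_ideal) / np.linalg.norm(y_ideal))
          return np.mean(errs)

      print("mean relative output error:", noisy_olap(A, x))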

  16. MCNP model for the many KE-Basin radiation sources

    Energy Technology Data Exchange (ETDEWEB)

    Rittmann, P.D.

    1997-05-21

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with.
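
    A minimal sketch of the use such a fitted source model enables, assuming a bare point-kernel (no build-up or attenuation factors, which the full MCNP model does include): the dose rate at a point is the sum of source contributions, and removing or cleaning up a component is simulated by zeroing or scaling its strength.

      import numpy as np

      def dose_rate(point, src_xyz, src_strength):
          """Point-kernel sum: strength / (4 pi r^2) over all sources."""
          r2 = np.sum((src_xyz - point)**2, axis=1)
          return np.sum(src_strength / (4.0 * np.pi * r2))

      sources = np.array([[0.0, 0.0, 1.0], [5.0, 2.0, 1.0]])  # placeholder layout
      strengths = np.array([1.0e3, 4.0e2])                    # fitted strengths
      p = np.array([2.0, 1.0, 1.5])
      before = dose_rate(p, sources, strengths)
      strengths[1] = 0.0                                      # component removed
      after = dose_rate(p, sources, strengths)
      print("dose rate before/after removal:", before, after)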

  17. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...... regulations, impedance-network inductor current, capacitor voltage, as well as switching-frequency fixation, transient reservation, and null-state penalization are all regulated subject to the constraints of this control method. The quality of the output waveform, stability of the impedance network, level constraint...... of variable switching frequency as well as robustness of transient response can be obtained at the same time with a formulated Z-source network model. Steady-state and transient-state operating simulations of MPC are presented, which show the good reference-tracking ability of this control method....
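
    The paper's exact formulation is not reproduced here, but a generic finite-control-set MPC loop of the kind commonly used for power converters illustrates the mechanics: each admissible switch state is simulated one step ahead through a discretized circuit model, and the state minimizing a tracking-plus-switching cost is applied. The first-order filter model and all values are stand-ins for the full Z-source network model.

      import numpy as np

      L_f, R_f, Ts, Vdc = 2e-3, 0.1, 50e-6, 400.0   # assumed circuit values
      states = np.array([-1.0, 0.0, 1.0])           # admissible levels (x Vdc)

      def mpc_step(i_now, i_ref, v_grid, prev_state, lam=0.05):
          """Pick the switch state minimizing a one-step-ahead cost."""
          best, best_cost = prev_state, np.inf
          for s in states:
              v_out = s * Vdc
              # Forward-Euler prediction of the filter inductor current.
              i_next = i_now + Ts / L_f * (v_out - v_grid - R_f * i_now)
              # Reference tracking plus a penalty on switching changes.
              cost = abs(i_ref - i_next) + lam * abs(s - prev_state)
              if cost < best_cost:
                  best, best_cost = s, cost
          return best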

  18. Open Source Software Reliability Growth Model by Considering Change- Point

    Directory of Open Access Journals (Sweden)

    Mashaallah Basirzadeh

    2012-01-01

    Full Text Available Software reliability growth modeling is by now a mature technique, and such models have been used extensively for closed source software. The design and development of open source software (OSS) differs from that of closed source software. We observed some basic characteristics of open source software: (i) more instruction executions and code coverage taking place with respect to time, (ii) release early, release often, (iii) frequent addition of patches, (iv) heterogeneity in fault density and effort expenditure, (v) frequent release activities that seem to have changed the bug dynamics significantly, and (vi) drastic increases and decreases in bug reporting on the bug tracking system. For this reason, the number of bugs reported on the bug tracking system is irregular and fluctuates. Therefore, the fault detection/removal process cannot be smooth and may change at some time point, called the change-point. In this paper, an instruction-execution-dependent software reliability growth model is developed that considers a change-point in order to cater to diverse and huge user profiles, the irregular state of the bug tracking system, and heterogeneity in fault distribution. We analyze actual software failure count data to show numerical examples of software reliability assessment for OSS. We also compare our model with conventional models in terms of goodness-of-fit to actual data. We show that the proposed model can assist in improving the quality of OSS systems developed under the open source project.
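
    A minimal sketch of a change-point mean value function of the kind this model family uses, assuming a Goel-Okumoto-type exponential curve whose fault detection rate switches from b1 to b2 at the change-point tau; parameter values are illustrative, and t is the usage measure (e.g., cumulative executed instructions).

      import numpy as np

      def mean_value(t, a=500.0, b1=2e-9, b2=6e-9, tau=1e9):
          """Expected cumulative faults m(t) with a change-point at tau.

          a  : total expected fault content
          b1 : fault detection rate before the change-point
          b2 : fault detection rate after the change-point
          """
          t = np.asarray(t, dtype=float)
          before = a * (1.0 - np.exp(-b1 * t))
          # Continuous at t = tau by construction.
          after = a * (1.0 - np.exp(-b1 * tau - b2 * (t - tau)))
          return np.where(t <= tau, before, after)

      print(mean_value([5e8, 1e9, 2e9]))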

  19. MODEL OF A PERSONWALKING AS A STRUCTURE BORNE SOUND SOURCE

    DEFF Research Database (Denmark)

    Lievens, Matthias; Brunskog, Jonas

    2007-01-01

    The behaviour of a person walking as a source of impact sound or walking sound is not yet fully understood. Especially for lightweight structures, the coupling between the human body and the floor will determine the power flow into the floor, and therefore the mobility of both source and receiver...... has to be considered and the contact history must be integrated in the model. This is complicated by the fact that nonlinearities occur at different stages in the system, either on the source or receiver side. Not only lightweight structures but also soft floor coverings would benefit from an accurate...

  20. Siberian Arctic black carbon sources constrained by model and observation

    Science.gov (United States)

    Winiger, Patrik; Andersson, August; Eckhardt, Sabine; Stohl, Andreas; Semiletov, Igor P.; Dudarev, Oleg V.; Charkin, Alexander; Shakhova, Natalia; Klimont, Zbigniew; Heyes, Chris; Gustafsson, Örjan

    2017-02-01

    Black carbon (BC) in haze and deposited on snow and ice can have strong effects on the radiative balance of the Arctic. There is a geographic bias in Arctic BC studies toward the Atlantic sector, with lack of observational constraints for the extensive Russian Siberian Arctic, spanning nearly half of the circum-Arctic. Here, 2 y of observations at Tiksi (East Siberian Arctic) establish a strong seasonality in both BC concentrations (8 ng m-3 to 302 ng m-3) and dual-isotope-constrained sources (19 to 73% contribution from biomass burning). Comparisons between observations and a dispersion model, coupled to an anthropogenic emissions inventory and a fire emissions inventory, give mixed results. In the European Arctic, this model has proven to simulate BC concentrations and source contributions well. However, the model is less successful in reproducing BC concentrations and sources for the Russian Arctic. Using a Bayesian approach, we show that, in contrast to earlier studies, contributions from gas flaring (6%), power plants (9%), and open fires (12%) are relatively small, with the major sources instead being domestic (35%) and transport (38%). The observation-based evaluation of reported emissions identifies errors in spatial allocation of BC sources in the inventory and highlights the importance of improving emission distribution and source attribution, to develop reliable mitigation strategies for efficient reduction of BC impact on the Russian Arctic, one of the fastest-warming regions on Earth.

  1. Wind-Wave Model with an Optimized Source Function

    CERN Document Server

    Polnikov, Vladislav

    2010-01-01

    On the basis of the author's earlier results, a new source function for a numerical wind-wave model, optimized by the criteria of accuracy and speed of calculation, is substantiated. The proposed source function includes (a) an optimized version of the discrete interaction approximation for parametrization of the nonlinear evolution mechanism, (b) a generalized empirical form of the input term modified by adding a special block for the dynamic boundary layer of the atmosphere, and (c) a dissipation term quadratic in the wave spectrum. Particular attention is given to a theoretical substantiation of the least investigated dissipation term. The advantages of the proposed source function are discussed through comparison with the analogues used in the widespread third-generation models WAM and WAVEWATCH. At the initial stage of assessing the merits of the proposed model, the results of its testing by a system of academic tests are presented. In the course of testing, some principles of this procedure are form...

  2. Network infection source identification under the SIRI model

    CERN Document Server

    Hu, Wuhua; Harilal, Athul; Xiao, Gaoxi

    2014-01-01

    We study the problem of identifying a single infection source in a network under the susceptible-infected-recovered-infected (SIRI) model. We describe the infection model via a state-space model, and utilizing a state propagation approach, we derive an algorithm based on dynamic message passing (DMP), which we call DMP+, to infer the infection source. The DMP+ algorithm uses the partial or complete observations of node states at a particular time, where the elapsed time from the start of the infection is unknown. It is able to incorporate side information (if any) of the observed states of a subset of nodes at different times, and of the prior probability of each infected or recovered node to be the infection source. Simulation results suggest that the DMP+ estimator outperforms the DMP and Jordan center estimators over a wide range of infection and reinfection rates.
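
    For context, a minimal sketch of the Jordan center estimator used above as a baseline: the estimated source is the node minimizing the maximum shortest-path distance (eccentricity) to the observed infected/recovered nodes; plain BFS suffices on an unweighted adjacency-list graph.

      from collections import deque

      def bfs_dist(adj, src):
          """Unweighted shortest-path distances from src via BFS."""
          dist = {src: 0}
          q = deque([src])
          while q:
              u = q.popleft()
              for v in adj[u]:
                  if v not in dist:
                      dist[v] = dist[u] + 1
                      q.append(v)
          return dist

      def jordan_center(adj, infected):
          """Node with minimum eccentricity to the observed infected set."""
          best, best_ecc = None, float("inf")
          for node in adj:
              d = bfs_dist(adj, node)
              ecc = max(d.get(i, float("inf")) for i in infected)
              if ecc < best_ecc:
                  best, best_ecc = node, ecc
          return best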

  3. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysi...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.......Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...

  4. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
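
    For context, a hedged sketch of the plain (non-shift-invariant) magnitude-spectrogram factorisation these tensor models extend, using scikit-learn's NMF; the spectrogram here is a random placeholder, and the grouping of components into a source would in practice be learned or constrained (e.g., by the harmonicity constraints discussed above).

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(3)
      V = np.abs(rng.standard_normal((513, 200)))   # magnitude spectrogram

      model = NMF(n_components=8, init="nndsvda", max_iter=400)
      W = model.fit_transform(V)                    # spectral basis functions
      H = model.components_                         # time activations

      k = [0, 1, 2]                                 # components of one source
      approx = W @ H
      # Wiener-style mask from this source's partial reconstruction.
      mask = (W[:, k] @ H[k, :]) / np.maximum(approx, 1e-12)
      source_mag = mask * V                         # masked source spectrogram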

  5. Atmospheric Model Effects on Infrasound Source Inversion from the Source Physics Experiments

    Science.gov (United States)

    Preston, L. A.; Aur, K. A.

    2016-12-01

    The Source Physics Experiments (SPE) consist of a series of underground explosive shots at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance in non-proliferation activities. To this end we perform full waveform source inversion of infrasound data collected from SPE shots at distances from 300 m to 1 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each shot, computing Green's functions through these atmospheric models, and subsequently inverting these signals in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the shot, we utilize two different datasets: North American Regional Reanalysis data, a comprehensive but lower resolution dataset, and locally obtained sonde and surface weather observations. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite. These models include 3-D variations in topography, temperature, pressure, and wind. We will compare and contrast the atmospheric models derived from the two weather datasets and discuss how these differences affect computed source waveforms and contribute to modeling uncertainty. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
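
    A minimal sketch of the frequency-domain inversion step described above, assuming a water-level regularized spectral division S(w) = conj(G) D / (|G|^2 + eps); the Green's function, data, and regularization level are synthetic placeholders rather than outputs of the propagation suite.

      import numpy as np

      rng = np.random.default_rng(4)
      n, dt = 2048, 0.01
      # Placeholder Green's function (in practice from the moving-media
      # acoustic propagation simulation through the atmospheric model).
      g = np.exp(-np.arange(n) * dt * 2.0) * rng.standard_normal(n) * 0.1
      s_true = np.zeros(n)
      s_true[100:120] = 1.0                      # "true" source pulse
      d = np.convolve(g, s_true)[:n]             # observed waveform

      G, D = np.fft.rfft(g), np.fft.rfft(d)
      eps = 1e-3 * np.max(np.abs(G))**2          # water-level regularization
      S = np.conj(G) * D / (np.abs(G)**2 + eps)  # source spectrum estimate
      s_est = np.fft.irfft(S, n)                 # source time function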

  6. Numerical modeling of the SNS H- ion source

    Science.gov (United States)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan; Stoltz, Peter H.

    2015-04-01

    Ion source rf antennas that produce H- ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Antenna failures reduce the operating capabilities of the Spallation Neutron Source (SNS) accelerator, and reducing them is one of the top priorities of the SNS H- Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved in order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes of tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long-timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H- ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source. We report

  7. A model for managing sources of groundwater pollution.

    Science.gov (United States)

    Gorelick, S.M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the US Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. -from Author
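
    A minimal sketch of the optimization core, assuming the concentration response matrix R from the transport simulation maps unit disposal rates to concentrations at observation points; the matrix and water-quality limits below are placeholders.

      import numpy as np
      from scipy.optimize import linprog

      # R[i, j]: concentration at observation point i per unit disposal
      # rate at facility j (from the solute transport simulation).
      R = np.array([[0.8, 0.1, 0.3],
                    [0.2, 0.9, 0.4]])
      c_limit = np.array([10.0, 12.0])   # water-quality standards

      # Maximize total disposal rate subject to R q <= c_limit, q >= 0;
      # linprog minimizes, so negate the objective.
      res = linprog(c=-np.ones(3), A_ub=R, b_ub=c_limit, bounds=(0, None))
      print("optimal disposal rates:", res.x)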

  8. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an averaged area-under-curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to

  9. Fired Models of Air-gun Source and Its Application

    Institute of Scientific and Technical Information of China (English)

    Luo Guichun; Ge Hongkui; Wang Baoshan; Hu Ping; Mu Hongwang; Chen Yong

    2008-01-01

    The air-gun is an important active seismic source. With the development of air-gun array theory, the technique of air-gun array design has matured and is widely used in petroleum exploration and geophysics. In order to adapt it to different research domains, different combinations and fired models are needed. At present, there are two fired models of air-gun sources, namely, reinforced initial pulse and reinforced first bubble pulse. The fired time, spacing between single guns, frequency, and resolution of the two models are different. This comparison can supply the basis for its extensive application.

  10. Secondary neutron source modelling using MCNPX and ALEPH codes

    Science.gov (United States)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources. Mainly secondary neutrons from these sources feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. In most reactors, both families of sources are not sufficient to efficiently monitor the divergence of the second cycle and following ones. Secondary sources clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb124), produces in subsequent cycles a photo-neutron source by the gamma (from Sb124)-neutron (on Be9) reaction. This paper presents the model of the process between irradiation in cycle 1 and the cycle 2 results for the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results from PWR 1450 MWe-N4 reactors. Good agreement is observed between these results and the simulations. The subcriticality of the reactors is about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are on the order of 10%, lower than the combined uncertainty of measurements and code simulation. This comparison validates the AREVA methodology, which provides an SRD counting-rate best-estimate for cycle 2 and following ones, and optimizes the position of the SSC depending on the geographic location of sources, the main parameter for optimal monitoring of subcritical states.
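
    A minimal sketch of the Sb124-driven behaviour described above: the activity built up in cycle 1 decays with the roughly 60.2-day half-life of Sb124, and the photo-neutron output of the [Sb, Be] cluster scales with it. The conversion efficiency below is an illustrative placeholder, not an AREVA or ALEPH value.

      import numpy as np

      HALF_LIFE_D = 60.2                 # Sb-124 half-life, days (approx.)
      LAM = np.log(2.0) / HALF_LIFE_D    # decay constant, 1/day

      def photoneutron_rate(a0_bq, days_since_shutdown, conv_eff=1e-5):
          """Neutrons/s from an initial Sb-124 activity a0_bq (Bq).

          conv_eff is an assumed gammas-to-neutrons conversion efficiency
          of the (gamma, n) reaction on Be-9 in the cluster geometry.
          """
          return conv_eff * a0_bq * np.exp(-LAM * days_since_shutdown)

      print(photoneutron_rate(1e14, 90.0))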

  11. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
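
    A minimal sketch of the model-based least-squares step, assuming the system matrix A encodes the coded mask, the magnification geometry, and the measured source flux distribution folded into the ray weights; here A is a random sparse placeholder and the solve uses damped LSQR.

      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(5)
      # Placeholder forward model: 4096 detector pixels, 1024 object voxels.
      A = sparse_random(4096, 1024, density=0.01, format="csr", random_state=5)
      f_true = rng.uniform(0, 1, 1024)
      g = A @ f_true + 0.01 * rng.standard_normal(4096)   # detector data

      # Recover the object by minimizing ||A f - g||^2 + damp^2 ||f||^2.
      f_est = lsqr(A, g, damp=0.1, iter_lim=200)[0]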

  12. Open source Modeling and optimization tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    Open source modeling and optimization tools for planning The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  13. Selection of models to calculate the LLW source term

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.

  14. An improved source model for aircraft interior noise studies

    Science.gov (United States)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase takeoff weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  15. OSeMOSYS: The Open Source Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Howells, Mark, E-mail: mark.i.howells@gmail.com [Royal Institute of Technology (KTH) (Sweden); Rogner, Holger [Planning and Economic Studies Section, International Atomic Energy Agency (Austria); Strachan, Neil [Energy Institute, University College London (United Kingdom); Heaps, Charles [Stockholm Environmental Institute (SEI) (United States); Huntington, Hillard [Stanford University (United States); Kypreos, Socrates [Paul Scherrer Institute (Switzerland); Hughes, Alison [Energy Research Centre, University of Cape Town (South Africa); Silveira, Semida [Royal Institute of Technology (KTH) (Sweden); DeCarolis, Joe [North Carolina State University (United States); Bazillian, Morgan [United Nations Industrial Development Organization (UNIDO) (Austria); Roehrl, Alexander [United Nations Department of Economic and Social Affairs (UNDESA) (United States)

    2011-10-15

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, algebraic formulation, and implementation in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models that do not have this emphasis on compactness and openness make the barrier to entry for new users much higher, as well as making the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts including adding functionality to the LEAP model. - Highlights: > OSeMOSYS is a new free and open source energy systems model. > This model is written in a simple, open, flexible and transparent manner to support teaching. > OSeMOSYS is based on free software and optimizes using a free solver. > This model replicates the results of many popular tools, such as MARKAL. > A link between OSeMOSYS and LEAP has been developed.

  16. Modeling of an autonomous microgrid for renewable energy sources integration

    DEFF Research Database (Denmark)

    Serban, I.; Teodorescu, Remus; Guerrero, Josep M.

    2009-01-01

    The frequency stability analysis in an autonomous microgrid (MG) with renewable energy sources (RES) is a continuously studied issue. This paper presents an original method for modeling an autonomous MG with a battery energy storage system (BESS) and a wind power plant (WPP), with the purpose...

  17. JSim, an open-source modeling system for data analysis.

    Science.gov (United States)

    Butterworth, Erik; Jardine, Bartholomew E; Raymond, Gary M; Neal, Maxwell L; Bassingthwaighte, James B

    2013-01-01

    JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses Mathematical Modeling Language (MML), a declarative syntax specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables, then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis for research. For high throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the repositories for Systems Biology Markup Language (SBML) and CellML models. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking dimensional balance as the first step in verification testing. Automatic unit scaling follows, e.g. seconds to minutes, if needed. The JSim Project File sets a standard for reproducible modeling analysis: it includes in one file everything for analyzing a set of experiments: the data, the models, the data fitting, and evaluation of parameter confidence ranges. JSim is open source; it

  18. Alternative source models of very low frequency events

    Science.gov (United States)

    Gomberg, Joan S.; Agnew, D.C.; Schwartz, S.Y.

    2016-01-01

    We present alternative source models for very low frequency (VLF) events, previously inferred to be radiation from individual slow earthquakes that partly fill the period range between slow slip events lasting thousands of seconds and low-frequency earthquakes (LFE) with durations of tenths of a second. We show that VLF events may emerge from bandpass filtering a sum of clustered, shorter duration, LFE signals, believed to be the components of tectonic tremor. Most published studies show VLF events occurring concurrently with tremor bursts and LFE signals. Our analysis of continuous data from Costa Rica detected VLF events only when tremor was also occurring, which was only 7% of the total time examined. Using analytic and synthetic models, we show that a cluster of LFE signals produces the distinguishing characteristics of VLF events, which may be determined by the cluster envelope. The envelope may be diagnostic of a single, dynamic, slowly slipping event that propagates coherently over kilometers or represents a narrowly band-passed version of nearly simultaneous arrivals of radiation from slip on multiple higher stress drop and/or faster propagating slip patches with dimensions of tens of meters (i.e., LFE sources). Temporally clustered LFE sources may be triggered by single or multiple distinct aseismic slip events or represent the nearly simultaneous chance occurrence of background LFEs. Given the nonuniqueness in possible source durations, we suggest it is premature to draw conclusions about VLF event sources or how they scale.
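
    The paper's central mechanism is easy to reproduce numerically: band-pass filtering a cluster of short wavelets in a very low frequency band returns a single smooth pulse shaped by the cluster envelope. The sketch below is purely illustrative; the wavelet shape, the small one-sided term standing in for each LFE's low-frequency radiation, and the cluster timing are all invented.

```python
# Band-pass a cluster of short LFE-like wavelets; one VLF-like pulse emerges.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20.0                                    # sampling rate, Hz
t = np.arange(0.0, 600.0, 1.0 / fs)

def lfe(t0):
    env = np.exp(-0.5 * ((t - t0) / 0.4) ** 2)
    # short ~2 Hz oscillation plus a small one-sided term standing in
    # for the low-frequency content each LFE radiates
    return env * (np.sin(2 * np.pi * 2.0 * (t - t0)) + 0.05)

rng = np.random.default_rng(0)
onsets = rng.normal(300.0, 30.0, size=200)   # LFE cluster under a ~minute envelope
tremor = sum(lfe(t0) for t0 in onsets)

# Narrow VLF band (20-50 s periods): individual LFEs vanish, the envelope stays.
sos = butter(2, [0.02, 0.05], btype="band", fs=fs, output="sos")
vlf = sosfiltfilt(sos, tremor)
print(t[np.abs(vlf).argmax()])               # peak lands near 300 s, the center
```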

  19. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  20. How many separable sources? Model selection in independent components analysis.

    Science.gov (United States)

    Woods, Roger P; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
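
    As a minimal, hedged illustration of the model-selection point (held-out likelihood instead of AIC), the sketch below cross-validates plain probabilistic PCA over candidate dimensionalities on synthetic non-Gaussian mixtures; it is not the authors' mixed ICA/PCA algorithm.

```python
# Cross-validated choice of dimensionality via held-out log-likelihood.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
latent = rng.laplace(size=(500, 3))                     # 3 non-Gaussian sources
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 10))  # noisy mixtures

for k in range(1, 6):
    # PCA.score returns the average probabilistic-PCA log-likelihood.
    ll = cross_val_score(PCA(n_components=k), X, cv=5).mean()
    print(k, round(ll, 2))          # held-out likelihood typically peaks at k=3
```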

  1. Helium Reionization Simulations. I. Modeling Quasars as Radiation Sources

    CERN Document Server

    La Plante, Paul

    2015-01-01

    We introduce a new project to understand helium reionization using fully coupled $N$-body, hydrodynamics, and radiative transfer simulations. This project aims to capture correctly the thermal history of the intergalactic medium (IGM) as a result of reionization and make predictions about the Lyman-$\alpha$ forest and baryon temperature-density relation. The dominant sources of radiation for this transition are quasars, so modeling the source population accurately is very important for making reliable predictions. In this first paper, we present a new method for populating dark matter halos with quasars. Our set of quasar models includes two different light curves, a lightbulb (simple on/off) and symmetric exponential model, and luminosity-dependent quasar lifetimes. Our method self-consistently reproduces an input quasar luminosity function (QLF) given a halo catalog from an $N$-body simulation, and propagates quasars through the merger history of halo hosts. After calibrating quasar clustering using measurem...

  2. Residential radon in Finland: sources, variation, modelling and dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Arvela, H.

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.).

  3. Model for the radio source Sagittarius B2

    Energy Technology Data Exchange (ETDEWEB)

    Gosachinskii, I.; Khersonskii, V.

    1981-07-01

    A dynamical model is proposed for the gas cloud surrounding the radio source Sgr B2. The kinematic behavior of the gas in the source is interpreted in terms of a contracting core and a rotating outer envelope. The cloud initially would have been kept stable by turbulent motion, whose energy would be dissipated into heat by magnetic viscosity. This process should operate more rapidly in the dense core, which would begin to collapse while the envelope remains stable. The initial cloud parameters and various circumstances of the collapse are calculated, and estimates are obtained for the conditions in the core at the time of its fragmentation into clumps of stellar mass.

  4. Finite-Source Modeling of the South Napa Earthquake

    Science.gov (United States)

    Dreger, D. S.; Huang, M. H.; Wooddell, K. E.; Taira, T.; Luna, B.

    2014-12-01

    On August 24, 2014 an Mw 6.0 earthquake struck south-southwest of the city of Napa, California. As part of the Berkeley Seismological Laboratory (BSL) Alarm Response, a seismic moment tensor solution and preliminary finite-source model were estimated. The preliminary finite-source model used high quality three-component strong motion recordings, instrument corrected and integrated to displacement, from 8 stations of the BSL BK network located between 30 and 200 km from the source. The BSL focal mechanism (strike=155, dip=82, rake=-172) and a constant rise time and rupture velocity were assumed. The GIL7 plane-layered velocity model was used to compute Green's functions using a frequency wave-number integration approach. The preliminary model from these stations indicates the rupture was unilateral to the NNW and up dip, with an average slip of 42 cm and a peak slip of 102 cm. The total scalar moment was found to be 1.15×10^25 dyne·cm, giving Mw 6.0. The strong directivity from the rupture likely explains the observed elevated local strong ground motions and the extensive damage to buildings in Napa and surrounding residential areas. In this study we will reevaluate the seismic moment tensor of the mainshock and larger aftershocks, and incorporate local strong motion waveforms, GPS, and InSAR deformation data to better constrain the finite-source model. While the hypocenter and focal parameters used in the preliminary model are consistent with the mapped surface trace of the West Napa fault, the mapped surface slip lies approximately 2 km to the west. Furthermore, there is a pronounced change in strike of the mapped surface offsets at the northern end. We will investigate the location of the fault model and the fit to the joint data set, as well as examine the possibility of multi-segmented fault models to account for these apparently inconsistent observations.
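
    As a quick consistency check on the quoted numbers, the standard Hanks-Kanamori relation converts the scalar moment to moment magnitude:

```python
# Mw = (2/3) * log10(M0) - 10.7, with M0 in dyne·cm (Hanks & Kanamori, 1979).
import math

M0 = 1.15e25                      # scalar moment from the abstract, dyne·cm
Mw = (2.0 / 3.0) * math.log10(M0) - 10.7
print(f"Mw = {Mw:.2f}")           # -> Mw = 6.01, matching the reported Mw 6.0
```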

  5. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Important features of electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to experiment for a few sources. Changes in the simulated extracted ion currents are obtained by varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  6. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    The geospatial industry is forecast to see enormous growth in the forthcoming years, along with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, Open Source solutions, open data proliferation, and the use of open standards have increasing significance in the geospatial and IT arenas as well as in political discussion and legislation. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-American-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was involved in two ways. An online survey about the relevance of Open Source was performed and

  7. A Framework for an Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to see enormous growth in the forthcoming years, along with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, Open Source solutions, open data proliferation, and the use of open standards have increasing significance in the geospatial and IT arenas as well as in political discussion and legislation. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-American-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was involved in two ways. An online survey about the relevance of Open Source was performed and evaluated with 105

  8. Earthquake Source Modeling using Time-Reversal or Adjoint Methods

    Science.gov (United States)

    Hjorleifsdottir, V.; Liu, Q.; Tromp, J.

    2007-12-01

    In recent years there have been great advances in earthquake source modeling. Despite the effort, many questions about earthquake source physics remain unanswered. In order to address some of these questions, it is useful to reconstruct what happens on the fault during an event. In this study we focus on determining the slip distribution on a fault plane, or a moment-rate density, as a function of time and space. This is a difficult process involving many trade-offs between model parameters. The difficulty lies in the fact that earthquakes are not a controlled experiment: we don't know when and where they will occur, and therefore we have only limited control over what data will be acquired for each event. As a result, much of the advance that can be made is by extracting more information out of the data that is routinely collected. Here we use a technique that uses 3D waveforms to invert for the slip on a fault plane during rupture. By including 3D waveforms we can use parts of the waveforms that are often discarded, as they are altered by structural effects in ways that cannot be accurately predicted using 1D Earth models. However, generating 3D synthetics is computationally expensive. Therefore we turn to an 'adjoint' method (Tarantola, Geophysics 1984; Tromp et al., GJI 2005) that reduces the computational cost relative to methods that use Green's function libraries. In its simplest form, an adjoint method for inverting for source parameters can be viewed as a time-reversal experiment performed with a wave-propagation code (McMechan, GJRAS 1982). The recorded seismograms are inserted as simultaneous sources at the location of the receiver and the computed wave field (which we call the adjoint wavefield) is recorded on an array around the earthquake location. Here we show, mathematically, that for source inversions for a moment tensor (distributed) source, the time integral of the adjoint strain is the quantity to monitor. We present the results of time

  9. Helium Reionization Simulations. I. Modeling Quasars as Radiation Sources

    Science.gov (United States)

    La Plante, Paul; Trac, Hy

    2016-09-01

    We introduce a new project to understand helium reionization using fully coupled N-body, hydrodynamics, and radiative transfer simulations. This project aims to capture correctly the thermal history of the intergalactic medium as a result of reionization and make predictions about the Lyα forest and baryon temperature-density relation. The dominant sources of radiation for this transition are quasars, so modeling the source population accurately is very important for making reliable predictions. In this first paper, we present a new method for populating dark matter halos with quasars. Our set of quasar models includes two different light curves, a lightbulb (simple on/off) and symmetric exponential model, and luminosity-dependent quasar lifetimes. Our method self-consistently reproduces an input quasar luminosity function given a halo catalog from an N-body simulation, and propagates quasars through the merger history of halo hosts. After calibrating quasar clustering using measurements from the Baryon Oscillation Spectroscopic Survey, we find that the characteristic mass of quasar hosts is $M_h \sim 2.5\times 10^{12}\,h^{-1}\,M_\odot$ for the lightbulb model, and $M_h \sim 2.3\times 10^{12}\,h^{-1}\,M_\odot$ for the exponential model. In the latter model, the peak quasar luminosity for a given halo mass is larger than that in the former, typically by a factor of 1.5-2. The effective lifetime for quasars in the lightbulb model is 59 Myr, and in the exponential case the effective time constant is about 15 Myr. We include semi-analytic calculations of helium reionization, and discuss how to include these quasars as sources of ionizing radiation for full hydrodynamics with radiative transfer simulations in order to study helium reionization.
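
    The two light-curve families named in the abstract are simple to write down; the sketch below does so with placeholder normalizations (the paper's calibrated values are not reproduced here, apart from the quoted lifetime and time constant).

```python
# Lightbulb (simple on/off) and symmetric exponential quasar light curves.
import numpy as np

def lightbulb(t, L_peak, t_on, lifetime):
    """Quasar shines at L_peak between t_on and t_on + lifetime, else dark."""
    return np.where((t >= t_on) & (t <= t_on + lifetime), L_peak, 0.0)

def symmetric_exponential(t, L_peak, t_peak, tau):
    """Luminosity rises and decays exponentially around the peak time."""
    return L_peak * np.exp(-np.abs(t - t_peak) / tau)

t = np.linspace(0.0, 100.0, 1001)                  # Myr, illustrative axis
lb = lightbulb(t, 1.0, 20.0, 59.0)                 # 59 Myr effective lifetime
ex = symmetric_exponential(t, 1.5, 50.0, 15.0)     # 15 Myr time constant
print(lb.sum(), ex.sum())                          # crude integrated output
```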

  10. Cortical sources of ERP in prosaccade and antisaccade eye movements using realistic source models

    Science.gov (United States)

    Richards, John E.

    2013-01-01

    The cortical sources of event-related-potentials (ERP) using realistic source models were examined in a prosaccade and antisaccade procedure. College-age participants were presented with a preparatory interval and a target that indicated the direction of the eye movement that was to be made. In some blocks a cue was given in the peripheral location where the target was to be presented and in other blocks no cue was given. In Experiment 1 the prosaccade and antisaccade trials were presented randomly within a block; in Experiment 2 procedures were compared in which either prosaccade and antisaccade trials were mixed in the same block, or trials were presented in separate blocks with only one type of eye movement. There was a central negative slow wave occurring prior to the target, a slow positive wave over the parietal scalp prior to the saccade, and a parietal spike potential immediately prior to saccade onset. Cortical source analysis of these ERP components showed a common set of sources in the ventral anterior cingulate and orbital frontal gyrus for the presaccadic positive slow wave and the spike potential. In Experiment 2 the same cued and non-cued blocks were used, but prosaccade and antisaccade trials were presented in separate blocks. This resulted in a smaller difference in reaction time between prosaccade and antisaccade trials. Unlike the first experiment, the central negative slow wave was larger on antisaccade than on prosaccade trials, and this effect on the ERP component had its cortical source primarily in the parietal and mid-central cortical areas contralateral to the direction of the eye movement. These results suggest that presenting prosaccade and antisaccade trials in separate blocks produces preparatory or set effects that decrease reaction time, eliminate some cueing effects, and are based on contralateral parietal-central brain areas. PMID:23847476

  11. Cortical Sources of ERP in Prosaccade and Antisaccade Eye Movements using Realistic Source Models

    Directory of Open Access Journals (Sweden)

    John E Richards

    2013-07-01

    The cortical sources of event-related-potentials (ERP) using realistic source models were examined in a prosaccade and antisaccade task. College-age participants were presented with a preparatory interval and a target that indicated the direction of the eye movement that was to be made. In some blocks a cue was given in the peripheral location where the target was to be presented and in other blocks no cue was given. In Experiment 1 the prosaccade and antisaccade trials were presented randomly within a block; in Experiment 2 procedures were compared in which either prosaccade and antisaccade trials were mixed in the same block, or trials were presented in separate blocks with only one type of eye movement. There was a central negative slow wave occurring prior to the target, a slow positive wave over the parietal scalp prior to the saccade, and a parietal spike potential immediately prior to saccade onset. Cortical source analysis of these ERP components showed a common set of sources in the ventral anterior cingulate and orbital frontal gyrus for the presaccadic positive slow wave and the spike potential. In Experiment 2 the same cued and non-cued blocks were used, but prosaccade and antisaccade trials were presented in separate blocks. This resulted in a smaller difference in reaction time between prosaccade and antisaccade trials. Unlike the first experiment, the central negative slow wave was larger on antisaccade than on prosaccade trials, and this effect on the ERP component had its cortical source primarily in the parietal and mid-central cortical areas contralateral to the direction of the eye movement. These results suggest that presenting prosaccade and antisaccade trials in separate blocks produces preparatory or set effects that decrease reaction time, eliminate some cueing effects, and are based on contralateral parietal-central brain areas.

  12. Open Sourcing Social Change: Inside the Constellation Model

    Directory of Open Access Journals (Sweden)

    Tonya Surman

    2008-09-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a partnership. These constellations are outwardly focused, placing their attention on creating value for those in the external environment rather than on the partnership itself. While serious effort is invested into core partnership governance and management, most of the energy is devoted to the decision making, resources and collaborative effort required to create social value. The constellations drive and define the partnership. The constellation model emerged from a deep understanding of the power of networks and peer production. Leadership rotates fluidly amongst partners, with each partner having the freedom to head up a constellation and to participate in constellations that carry out activities that are of more peripheral interest. The Internet provided the platform, the partner network enabled the expertise to align itself, and the goal of reducing chemical exposure in children kept the energy flowing. Building on seven years of experience, this article provides an overview of the constellation model, discusses the results from the CPCHE, and identifies similarities and differences between the constellation and open source models.

  13. Open Knee: Open Source Modeling and Simulation in Knee Biomechanics.

    Science.gov (United States)

    Erdemir, Ahmet

    2016-02-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore mechanical functions of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor-intensive reproduction of model development steps can be avoided. Interested parties can immediately utilize readily available models for scientific discovery and clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes a detailed anatomical representation of the joint's major tissue structures, their nonlinear mechanical properties, and their interactions. Three use cases illustrate the customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators is also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next-generation knee models is noted. These are anticipated to incorporate individualized anatomy and tissue properties supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from varying age

  14. Model of the Sgr B2 radio source

    Energy Technology Data Exchange (ETDEWEB)

    Gosachinskii, I.V.; Khersonskii, V.K. (AN SSSR, Spetsial'naya Astrofizicheskaya Observatoriya)

    The dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. This model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by the turbulent motion of the gas, turbulence energy dissipates due to magnetic viscosity. This process is occurring more rapidly in the dense core and the core begins to collapse but the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density and size) of the collapse are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established.

  15. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    This work introduces a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, and the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data, with the weights correcting for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and the VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. In conclusion, a new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed, and validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
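
    Inverse transform sampling, which the VSM uses to draw particle positions and energies from the derived PDFs, can be sketched as follows; the tabulated spectrum here is invented rather than taken from the 6 MV beam.

```python
# Draw samples from a tabulated PDF by inverting its cumulative distribution.
import numpy as np

rng = np.random.default_rng(1)

energies = np.linspace(0.1, 6.0, 200)      # MeV bin centers (illustrative)
pdf = np.exp(-energies / 2.0)              # made-up falling spectrum

cdf = np.cumsum(pdf)
cdf /= cdf[-1]                             # normalized cumulative distribution

u = rng.uniform(size=100_000)
samples = np.interp(u, cdf, energies)      # invert the CDF by interpolation
print(samples.mean())                      # close to the mean of the input PDF
```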

  16. Building an Open Source Framework for Integrated Catchment Modeling

    Science.gov (United States)

    Jagers, B.; Meijers, E.; Villars, M.

    2015-12-01

    In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. To properly understand the system, however, we should look at the dynamics of water, sediments, water quality, and ecology throughout the whole system from catchment to coast, both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world with this challenging task, we started a few years ago with the development of a new open source modeling environment, DeltaShell, which integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models, including generic components for tracking sediment, water quality, and ecological quantities throughout the hydrological cycle. The open source approach, combined with a modular design based on open standards that allows for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.

  17. An open source simulation model for soil and sediment bioturbation.

    Science.gov (United States)

    Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin

    2011-01-01

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches limits the application of such models to a limited range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted to experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data are routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and a comparison with a commonly used model attest to the predictive power of the approach.
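
    A minimal version of the rule-based lattice idea, particles displaced by draws from a parameterisable probability density, might look like the sketch below; the per-step move probability and displacement scale are invented rather than fitted to data.

```python
# 1D particle-displacement bioturbation sketch (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)

n_particles, n_steps = 1000, 50
depth = np.zeros(n_particles)          # tracer particles start at the surface

p_move = 0.1                           # probability a particle moves per step
sigma = 0.5                            # displacement scale, species-specific

for _ in range(n_steps):
    moved = rng.uniform(size=n_particles) < p_move
    depth[moved] += rng.normal(0.0, sigma, size=moved.sum())
    depth = np.clip(depth, 0.0, None)  # particles cannot leave the sediment

print(depth.mean(), depth.std())       # mixing statistics after 50 steps
```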

  18. Receptor Model Source Apportionment of Nonmethane Hydrocarbons in Mexico City

    Directory of Open Access Journals (Sweden)

    V. Mugica

    2002-01-01

    With the purpose of estimating the source contributions of nonmethane hydrocarbons (NMHC) to the atmosphere at three different sites in the Mexico City Metropolitan Area, 92 ambient air samples were measured from February 23 to March 22 of 1997. Light- and heavy-duty vehicular profiles were determined to differentiate the NMHC contributions of diesel and gasoline to the atmosphere. Food cooking source profiles were also determined for chemical mass balance receptor model application. Initial source contribution estimates were carried out to determine the adequate combination of source profiles and fitting species. Ambient samples of NMHC were apportioned to motor vehicle exhaust, gasoline vapor, handling and distribution of liquefied petroleum gas (LP gas), asphalt operations, painting operations, landfills, and food cooking. Both gasoline and diesel motor vehicle exhaust were the major NMHC contributors for all sites and times, with a percentage of up to 75%. The average motor vehicle exhaust contributions increased during the day. In contrast, the LP gas contribution was higher during the morning than in the afternoon. Apportionment for the most abundant individual NMHC showed that the vehicular source is the major contributor to acetylene, ethylene, pentanes, n-hexane, toluene, and xylenes, while handling and distribution of LP gas was the major source contributor to propane and butanes. Comparison between CMB estimates of NMHC and the emission inventory showed good agreement for vehicles, handling and distribution of LP gas, and painting operations; nevertheless, emissions from diesel exhaust and asphalt operations showed differences, and the results suggest that these emissions could be underestimated.
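
    The chemical mass balance idea is that each measured species concentration is a linear combination of source profiles weighted by source contributions, solved under non-negativity. A toy version with two invented profiles and three species:

```python
# Non-negative least squares solve of a tiny chemical mass balance problem.
import numpy as np
from scipy.optimize import nnls

# Rows: species (propane, toluene, ethylene); columns: source profiles.
profiles = np.array([
    [0.70, 0.05],    # propane: high in LP gas, low in vehicle exhaust
    [0.05, 0.40],    # toluene
    [0.10, 0.30],    # ethylene
])
ambient = np.array([30.0, 18.0, 14.0])     # measured concentrations (ppbC)

contrib, resid = nnls(profiles, ambient)
print(dict(zip(["LP gas", "vehicle exhaust"], contrib.round(1))))
```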

  19. Modelling the spectral evolution of classical double radio sources

    CERN Document Server

    Manolakou, K

    2002-01-01

    The spectral evolution of powerful double radio galaxies (FR IIs) is thought to be determined by the acceleration of electrons at the termination shock of the jet, their transport through the bright head region into the lobes, and the production of the radio emission by synchrotron radiation in the lobes. Models presented to date incorporate some of these processes in prescribing the electron distribution which enters the lobes. We have extended these models to include a description of electron acceleration at the relativistic termination shock and a selection of transport models for the head region. These are coupled to the evolution of the electron spectrum in the lobes under the influence of losses due to adiabatic expansion, inverse Compton scattering on the cosmic background radiation, and synchrotron radiation. The evolutionary tracks predicted by this model are compared to observation using the power/source-size (P-D) diagram. We find that the simplest scenario, in which accelerated particles suff...

  20. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japan coastal areas are still environmentally friendly, though there are multiple air emission sources originating as a consequence of several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOX, NOX, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of the sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 μg/m3. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one year of meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated trajectory Model (HYSPLIT) that estimates the atmospheric

  1. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Directory of Open Access Journals (Sweden)

    Khandakar Md Habib Al Razi, Moritomi Hiroshi, Kambara Shinji

    2012-01-01

    The Japan coastal areas are still environmentally friendly, though there are multiple air emission sources originating as a consequence of several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOX, NOX, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of the sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of "Substances Requiring Priority Action" published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 μg/m3. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one year of meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated trajectory Model (HYSPLIT) that estimates the

  2. Propagating Uncertainties from Source Model Estimations to Coulomb Stress Changes

    Science.gov (United States)

    Baumann, C.; Jonsson, S.; Woessner, J.

    2009-12-01

    Multiple studies have shown that static stress changes due to permanent fault displacement trigger earthquakes on the causative and on nearby faults. Calculations of static stress changes in previous studies have been based on fault parameters without considering any source model uncertainties, or with crude assumptions about fault model errors based on the differences among available source models. In this study, we investigate the influence of fault model parameter uncertainties on Coulomb Failure Stress change (ΔCFS) calculations by propagating the uncertainties from the fault estimation process to the Coulomb Failure Stress changes. We use 2500 sets of correlated model parameters determined for the June 2000 Mw = 5.8 Kleifarvatn earthquake, southwest Iceland, which were estimated by using a repeated optimization procedure and multiple data sets that had been modified by synthetic noise. The model parameters show that the event was predominantly a right-lateral strike-slip earthquake on a north-south striking fault. The variability of the sets of models represents the posterior probability density distribution for the Kleifarvatn source model. First, we investigate the influence of individual source model parameters on the ΔCFS calculations. We show through a correlation analysis that, for this event, changes in dip, east location, strike, width and in part north location have a stronger impact on the Coulomb Failure Stress changes than changes in fault length, depth, dip-slip and strike-slip. Second, we find that the accuracy of Coulomb Failure Stress changes appears to increase with increasing distance from the fault. The absolute value of the standard deviation decays rapidly with distance within about 5-6 km around the fault, from about 3-3.5 MPa down to a few Pa, implying that the influence of parameter changes decreases with increasing distance. This is underlined by the coefficient of variation CV, defined as the ratio of the standard deviation of the Coulomb stress
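
    Generically, propagating source-model uncertainty means pushing the whole parameter ensemble through the stress calculation and summarizing the spread. The sketch below does this with delta_cfs as a hypothetical stand-in for a real Okada-type dislocation computation, and with independent (not correlated) parameter draws.

```python
# Monte Carlo propagation of fault-parameter uncertainty (illustrative only).
import numpy as np

rng = np.random.default_rng(3)

def delta_cfs(strike, dip, slip, x):
    """Hypothetical stand-in for a Coulomb stress change at distance x (km)."""
    return slip * np.cos(np.radians(strike - 100.0)) * np.sin(np.radians(dip)) / x**2

# In the study, 2500 correlated parameter sets come from the fault inversion;
# here we draw independent perturbations purely for illustration.
strike = rng.normal(180.0, 5.0, size=2500)
dip = rng.normal(85.0, 3.0, size=2500)
slip = rng.normal(1.0, 0.1, size=2500)

for x in (2.0, 5.0, 10.0):                 # distance from the fault, km
    vals = delta_cfs(strike, dip, slip, x)
    cv = vals.std() / abs(vals.mean())     # coefficient of variation
    print(x, vals.std(), cv)
```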

  3. Modeling of multi-depth slanted airgun source for deghosting

    Institute of Scientific and Technical Information of China (English)

    Shen Hong-Lei; Elboth Thomas; Tian Gang; Lin Zhi

    2014-01-01

    To obtain a high-resolution image of the subsurface structure, we modeled multi-depth slanted airgun sources to attenuate the source ghost. By firing the guns in sequence according to their relative depths, such a source can build constructive primaries and destructive ghosts. To evaluate the attenuation of ghosts, the normalized squared error of the spectrum of the actual vs. the expected signature is computed. We used a typical 680 cu.in airgun string and found via simulations that a depth interval of 1 or 1.5 m between airguns is optimal when considering deghosting performance and operational feasibility. When more subarrays are combined, preliminary simulations are necessary to determine the optimum depth combination. The frequency notches introduced by the excessive use of subarrays may negatively affect the deghosting performance. Two or three slanted subarrays can be combined to remove the ghost effect. The choice of firing sequence may partly affect deghosting, but this effect can be eliminated by matched filtering. A directivity comparison shows that a multi-depth slanted source can significantly attenuate the notches and widen the energy transmission stability area.

  4. Modeling of multi-depth slanted airgun source for deghosting

    Science.gov (United States)

    Shen, Hong-Lei; Elboth, Thomas; Tian, Gang; Lin, Zhi

    2014-12-01

    To obtain a high-resolution image of the subsurface structure, we modeled multi-depth slanted airgun sources to attenuate the source ghost. By firing the guns in sequence according to their relative depths, such a source can build constructive primaries and destructive ghosts. To evaluate the attenuation of ghosts, the normalized squared error of the spectrum of the actual vs. the expected signature is computed. We used a typical 680 cu.in airgun string and found via simulations that a depth interval of 1 or 1.5 m between airguns is optimal when considering deghosting performance and operational feasibility. When more subarrays are combined, preliminary simulations are necessary to determine the optimum depth combination. The frequency notches introduced by the excessive use of subarrays may negatively affect the deghosting performance. Two or three slanted subarrays can be combined to remove the ghost effect. The choice of firing sequence may partly affect deghosting, but this effect can be eliminated by matched filtering. A directivity comparison shows that a multi-depth slanted source can significantly attenuate the notches and widen the energy transmission stability area.
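
    The ghost mechanism behind both records above has a compact frequency-domain form: the sea surface adds a polarity-flipped, delayed copy of the signal, so a gun at depth d has response G(f) = 1 - exp(-i·2πf·2d/c), with notches at multiples of c/(2d). The sketch below only illustrates how staggered depths move the notches apart; the depths are illustrative, not taken from the papers.

```python
# Ghost notch diversity across staggered source depths (illustrative depths).
import numpy as np

c = 1500.0                                  # sound speed in water, m/s
f = np.linspace(10.0, 200.0, 2000)          # frequency band of interest, Hz

def ghost(depth):
    return 1.0 - np.exp(-2j * np.pi * f * (2.0 * depth / c))

for d in (5.0, 6.0, 7.5):
    print(d, c / (2.0 * d))                 # first notches at 150, 125, 100 Hz

combined = sum(np.abs(ghost(d)) for d in (5.0, 6.0, 7.5)) / 3.0
print(combined.min())                       # no shared deep notch in the band
```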

  5. An Open Source modular platform for hydrological model implementation

    Science.gov (United States)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for the setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (DLLs). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction: (1) Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; uncertainty analysis directed towards input or parameter uncertainty. The user need not know the model's composition of subroutines, the internal variables in the model, or the creation of method modules. (2) Model analysis: linking together different process methods, including parallel setup of alternative methods for solving the same task; investigating the effect of different spatial discretization schemes. The user need not

  6. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches to atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use the Eulerian chemical transport model CMAQ and the Lagrangian Particle Dispersion Model FLEXPART-WRF. These two models share the same WRF
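
    The Bayesian inference layer can be sketched generically: infer an emission rate from noisy receptor observations through a linear forward model with a Metropolis sampler. The forward sensitivities, noise level, and prior below are invented and merely stand in for a CMAQ- or FLEXPART-derived source-receptor relationship.

```python
# Metropolis sampling of a single source strength q given linear observations.
import numpy as np

rng = np.random.default_rng(4)

H = np.array([0.8, 0.5, 0.3, 0.2])          # receptor sensitivities to the source
true_q = 10.0
obs = H * true_q + rng.normal(0.0, 0.3, 4)  # synthetic noisy observations

def log_post(q):
    if q < 0.0:                             # flat prior on q >= 0
        return -np.inf
    return -0.5 * np.sum((obs - H * q) ** 2) / 0.3**2

q, chain = 5.0, []
for _ in range(20000):
    prop = q + rng.normal(0.0, 0.5)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(q):
        q = prop
    chain.append(q)

post = np.array(chain[5000:])               # discard burn-in
print(post.mean(), post.std())              # posterior centered near q = 10
```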

  7. Kinetic models for the VASIMR thruster helicon plasma source

    Science.gov (United States)

    Batishchev, Oleg; Molvig, Kim

    2001-10-01

    Helicon gas discharge [1] is widely used by industry because of its remarkable efficiency [2]. High energy and fuel efficiencies make it very attractive for space electrical propulsion applications. For example, a helicon plasma source is used in the high specific impulse VASIMR [3] plasma thruster, including the experimental prototypes VX-3 and upgraded VX-10 [4] configurations, which operate with hydrogen (deuterium) and helium plasmas. We have developed a set of models for the VASIMR helicon discharge. Firstly, we use zero-dimensional energy and mass balance equations to characterize the partially ionized gas condition/composition. Next, we couple it to a one-dimensional hybrid model [6] for gas flow in the quartz tube of the helicon. We compare the hybrid model results to a purely kinetic simulation of propellant flow in the gas feed + helicon source subsystem. Some of the experimental data [3-4] are explained. Lastly, we discuss full-scale kinetic modeling of coupled gas and plasmas [5-6] in the helicon discharge. [1] M.A.Lieberman, A.J.Lichtenberg, 'Principles of ..', Wiley, 1994; [2] F.F.Chen, Plas. Phys. Contr. Fus. 33, 339, 1991; [3] F.Chang-Diaz et al, Bull. APS 45 (7) 129, 2000; [4] J.Squire et al., Bull. APS 45 (7) 130, 2000; [5] O.Batishchev et al, J. Plasma Phys. 61, part II, 347, 1999; [6] O.Batishchev, K.Molvig, AIAA technical paper 2000-3754, -14p, 2001.

  8. Modeling of low pressure plasma sources for microelectronics fabrication

    Science.gov (United States)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Likhanskii, Alexandre; Rauf, Shahid

    2017-10-01

    Chemically reactive plasmas operating in the 1 mTorr–10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in the design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to the high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size in the formation of localized hot spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how the placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron-beam-generated magnetized plasma is used to understand the role of reactor geometry in plasma uniformity in the presence of E × B drift.

  9. Modeling Source Water Threshold Exceedances with Extreme Value Theory

    Science.gov (United States)

    Rajagopalan, B.; Samson, C.; Summers, R. S.

    2016-12-01

    Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular levels of organic matter in surface waters, which are of concern. All of these effects will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. Here, threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as functions of climate and land surface variables, thus enabling the model to capture temporal nonstationarity. We apply these models to threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
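
    The stationary core of the approach is peaks-over-threshold fitting of a Generalized Pareto Distribution; the study's contribution is letting the GPD parameters vary with climate and land surface covariates. The sketch below shows only the stationary step, on synthetic TOC-like data.

```python
# Fit a GPD to threshold exceedances and compute a return level.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
toc = rng.lognormal(mean=1.0, sigma=0.5, size=3000)   # synthetic TOC, mg/L

threshold = np.quantile(toc, 0.95)
exceedances = toc[toc > threshold] - threshold

# Location fixed at zero for peaks-over-threshold data.
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

# Level exceeded by only ~1% of threshold-exceeding events.
print(threshold + genpareto.ppf(0.99, shape, loc=0.0, scale=scale))
```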

  10. RF Plasma modeling of the Linac4 H− ion source

    CERN Document Server

    Mattei, S; Hatayama, A; Lettry, J; Kawamura, Y; Yasumoto, M; Schmitzer, C

    2013-01-01

    This study focuses on the modelling of the ICP RF plasma in the Linac4 H− ion source currently being constructed at CERN. A self-consistent model of the plasma dynamics with the RF electromagnetic field has been developed using a PIC-MCC method. In this paper, the model is applied to the analysis of a low-density plasma discharge initiation, with particular interest in the effect of the external magnetic field on plasma properties such as wall loss, electron density and electron energy. The use of a multi-cusp magnetic field effectively limits the wall losses, particularly in the radial direction. Preliminary results indicate, however, that this configuration reduces the heating efficiency. The effect is possibly due to the trapping of electrons in the multi-cusp magnetic field, which prevents their continuous acceleration in the azimuthal direction.

  11. CHALLENGES IN SOURCE TERM MODELING OF DECONTAMINATION AND DECOMMISSIONING WASTES.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-08-01

    Development of real-time predictive modeling to identify the dispersion and/or source(s) of airborne weapons of mass destruction, including chemical, biological, radiological, and nuclear material, in urban environments is needed to improve response to potential releases of these materials via either terrorist or accidental means. These models will also prove useful in defining airborne pollution dispersion in urban environments for pollution management/abatement programs. Predicting gas flow in an urban setting on a scale of less than a few kilometers is a complicated and challenging task due to the irregular flow paths that occur along streets and alleys and around buildings of different sizes and shapes, i.e., "urban canyons". In addition, air exchange between the outside and buildings and subway areas further complicates the situation. Transport models that are used to predict dispersion of WMD/CBRN materials or to back-track the source of a release require high-density data and defensible parameterizations of urban processes. Errors in the data or in any of the parameter inputs or assumptions will lead to misidentification of the airborne spread or source release location(s). The need for these models to provide output in real time if they are to be useful for emergency response poses another challenge. To improve the ability of New York City's (NYC's) emergency management teams and first response personnel to protect the public during releases of hazardous materials, the New York City Urban Dispersion Program (UDP) has been initiated. This is a four-year research program conducted from 2004 through 2007. This paper will discuss ground level and subway perfluorocarbon tracer (PFT) release studies conducted in New York City. The studies released multiple tracers to study ground level and vertical transport of contaminants. This paper will discuss the results from these tests and how these results can be used

  12. A parameter model for dredge plume sediment source terms

    Science.gov (United States)

    Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc

    2017-01-01

    The presented model allows for fast simulations of the near-field behaviour of overflow dredging plumes. Overflow dredging plumes occur when dredging vessels employ a dropshaft release system to discharge the excess sea water that is pumped into the trailing suction hopper dredger (TSHD) along with the dredged sediments. The fine sediment fraction in the loaded water-sediment mixture does not fully settle before it reaches the overflow shaft. As a consequence, the released water contains a fine sediment fraction of time-varying concentration. The sediment grain size is in the range of clays, silt and fine sand; the sediment concentration varies roughly between 10 and 200 g/l in most cases, peaking at even higher values for short durations. In order to assess the environmental impact of the increased turbidity caused by this release, plume dispersion predictions are often carried out. These predictions are usually executed with a large-scale model covering a complete coastal zone, bay, or estuary. A source term of fine sediments is implemented in the hydrodynamic model to simulate the fine sediment dispersion. The large-scale model mesh resolution and governing equations, however, do not allow simulation of the near-field plume behaviour in the vicinity of the ship hull and propellers. Moreover, in the near field these plumes are under the influence of buoyancy forces and air bubbles. The initial distribution of sediments is therefore unknown and at present has to be based on crude assumptions. The initial (vertical) distribution of the sediment source, however, greatly influences the final far-field plume dispersion results. In order to study this near-field behaviour, a highly detailed computational fluid dynamics (CFD) model was developed. This model contains a realistic geometry of a dredging vessel, buoyancy effects, air bubbles and propeller action, and was validated earlier by comparison with field measurements. A CFD model requires significant simulation times

  13. An Integrated Risk Management Model for Source Water Protection Areas

    Directory of Open Access Journals (Sweden)

    Shang-Lien Lo

    2012-10-01

    Full Text Available Watersheds are recognized as the most effective management unit for the protection of water resources. For surface water supplies that use water from upstream watersheds, evaluating threats to water quality and implementing a watershed management plan are crucial for maintaining a drinking water supply that is safe for humans. The aim of this article is to establish a risk assessment model that provides basic information for identifying critical pollutants and areas at high risk for degraded water quality. In this study, a quantitative risk model that uses hazard quotients for each water quality parameter was combined with a qualitative risk model that uses the relative risk level of potential pollution events in order to characterize the current condition and potential risk of watersheds providing drinking water. In a case study of the Taipei Source Water Area in northern Taiwan, total coliforms and total phosphorus were the top two pollutants of concern. Intensive tea-growing and recreational activities around the riparian zone may contribute the greatest pollution to the watershed. Our risk assessment tool may be enhanced by developing, recording, and updating information on pollution sources in the water supply watersheds. Moreover, management authorities could use the resultant information to create watershed risk management plans.

  14. An integrated risk management model for source water protection areas.

    Science.gov (United States)

    Chiueh, Pei-Te; Shang, Wei-Ting; Lo, Shang-Lien

    2012-10-17

    Watersheds are recognized as the most effective management unit for the protection of water resources. For surface water supplies that use water from upstream watersheds, evaluating threats to water quality and implementing a watershed management plan are crucial for maintaining a drinking water supply that is safe for humans. The aim of this article is to establish a risk assessment model that provides basic information for identifying critical pollutants and areas at high risk for degraded water quality. In this study, a quantitative risk model that uses hazard quotients for each water quality parameter was combined with a qualitative risk model that uses the relative risk level of potential pollution events in order to characterize the current condition and potential risk of watersheds providing drinking water. In a case study of the Taipei Source Water Area in northern Taiwan, total coliforms and total phosphorus were the top two pollutants of concern. Intensive tea-growing and recreational activities around the riparian zone may contribute the greatest pollution to the watershed. Our risk assessment tool may be enhanced by developing, recording, and updating information on pollution sources in the water supply watersheds. Moreover, management authorities could use the resultant information to create watershed risk management plans.
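
    The coupling of the quantitative and qualitative components in the two records above can be sketched in a few lines; the parameter names, benchmark values and event risk scores below are hypothetical placeholders, not values from the study.

```python
# Hypothetical parameter levels, benchmarks and event risk scores (all invented)
measurements = {"total_coliforms": 230.0, "total_phosphorus": 0.08}   # observed
benchmarks   = {"total_coliforms": 100.0, "total_phosphorus": 0.05}   # standards
event_risk   = {"tea_growing": 4, "recreation": 3, "forestry": 1}     # levels 1-5

def hazard_quotient(param):
    """Quantitative part: HQ > 1 flags a parameter exceeding its benchmark."""
    return measurements[param] / benchmarks[param]

hq = {p: hazard_quotient(p) for p in measurements}
print("pollutants ranked by HQ:", sorted(hq, key=hq.get, reverse=True))
# Qualitative part: rank potential pollution events by relative risk level
print("events ranked by risk:", sorted(event_risk, key=event_risk.get, reverse=True))
```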

  15. Cardiac magnetic source imaging based on current multipole model

    Institute of Scientific and Technical Information of China (English)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first-order terms. This magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as the testing source, generates magnetic fields in the measuring plane, which serve as inputs to the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are selected as the algorithms for inverse computation based on the current multipole model, and the imaging performance of these three inverse methods is compared. In addition, two reconstruction parameters, the residual and the mean residual, are discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared.
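
    The MNLS step, the simplest of the three inverse methods, can be sketched as follows; the lead-field matrix here is a random stand-in for the one the paper derives from its heart-torso boundary element model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_params = 64, 8                  # measuring-plane channels, multipole terms
L = rng.normal(size=(n_sensors, n_params))   # lead-field matrix (stands in for the one
                                             # computed from the heart-torso BEM model)
q_true = rng.normal(size=n_params)           # "true" multipole coefficients
B = L @ q_true + 0.01 * rng.normal(size=n_sensors)   # measured field plus noise

# Minimum-norm least-squares solution via the Moore-Penrose pseudoinverse
q_mnls = np.linalg.pinv(L) @ B

# The two reconstruction quality measures discussed in the record
residual = np.linalg.norm(B - L @ q_mnls)
mean_residual = residual / n_sensors
print(q_mnls, residual, mean_residual)
```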

  16. Attributing Sources of Variability in Regional Climate Model Experiments

    Science.gov (United States)

    Kaufman, C. G.; Sain, S. R.

    2008-12-01

    Variability in regional climate model (RCM) projections may be due to a number of factors, including the choice of RCM itself, the boundary conditions provided by a driving general circulation model (GCM), and the choice of emission scenario. We describe a new statistical methodology, Gaussian Process ANOVA, which allows us to decompose these sources of variability while also taking account of correlations in the output across space. Our hierarchical Bayesian framework easily allows joint inference about high probability envelopes for the functions, as well as decompositions of total variance that vary over the domain of the functions. These may be used to create maps illustrating the magnitude of each source of variability across the domain of the regional model. We use this method to analyze temperature and precipitation data from the Prudence Project, an RCM intercomparison project in which RCMs were crossed with GCM forcings and scenarios in a designed experiment. This work was funded by the North American Regional Climate Change Assessment Program (NARCCAP).
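
    As a simplified, non-Bayesian stand-in for the Gaussian Process ANOVA, a classical fixed-effects decomposition applied independently at each grid cell conveys the idea of splitting ensemble variance into model, forcing and interaction components; all arrays below are synthetic.

```python
import numpy as np

# Toy crossed design: 3 RCMs x 2 driving GCMs, one response value per grid cell
rng = np.random.default_rng(1)
n_rcm, n_gcm, n_cell = 3, 2, 100
y = rng.normal(size=(n_rcm, n_gcm, n_cell))       # e.g. projected temperature change

grand = y.mean(axis=(0, 1))                       # grand mean per cell
rcm_effect = y.mean(axis=1) - grand               # main effect of RCM choice
gcm_effect = y.mean(axis=0) - grand               # main effect of GCM forcing
interaction = (y - grand
               - rcm_effect[:, None, :]
               - gcm_effect[None, :, :])          # residual RCM x GCM interaction

var_rcm = (rcm_effect ** 2).mean(axis=0)
var_gcm = (gcm_effect ** 2).mean(axis=0)
var_int = (interaction ** 2).mean(axis=(0, 1))
# Mapping var_rcm / (var_rcm + var_gcm + var_int) over the domain shows where
# the choice of regional model dominates the total variability.
print(var_rcm[:3], var_gcm[:3], var_int[:3])
```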

  17. Assimilating multi-source uncertainties of a parsimonious conceptual hydrological model using hierarchical Bayesian modeling

    Science.gov (United States)

    Wei Wu; James Clark; James Vose

    2010-01-01

    Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model – GR4J – by coherently assimilating the uncertainties from the...

  18. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions, using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate the transients of the cooling systems with system control variables changing in real time.
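
    Since the report notes that the plant model is written in Python, a lumped-parameter sketch of one primary cooling loop gives the flavour of such a model; the heat capacity, heat transfer coefficient and power history below are illustrative placeholders, not KIPT design data.

```python
def simulate_primary_loop(power_kw, hours=2.0, dt=1.0,
                          C=2.0e3, UA=15.0, T_secondary=25.0):
    """Lumped heat balance for one primary cooling loop:
        C * dT/dt = P(t) - UA * (T - T_secondary)
    with C in kJ/K, UA in kW/K and T in deg C; explicit Euler in time.
    All coefficients are illustrative placeholders, not KIPT design data."""
    n_steps = int(hours * 3600 / dt)
    T, history = 25.0, []
    for k in range(n_steps):
        P = power_kw(k * dt)                   # time-dependent beam/fission power
        T += dt * (P - UA * (T - T_secondary)) / C
        history.append(T)
    return history

# Example transient: SCA loop at 300 kW with a beam trip after one hour
temps = simulate_primary_loop(lambda t: 300.0 if t < 3600.0 else 0.0)
print(max(temps), temps[-1])
```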

  19. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new models have high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
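
    The idea of feeding the classic Gaussian plume output into a machine learning model, rather than the raw monitoring parameters, can be sketched as follows; the plume coefficients, synthetic measurements and SVR hyperparameters are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Classic Gaussian point-source plume with ground reflection."""
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
               + np.exp(-(z + h)**2 / (2 * sigma_z**2))))

rng = np.random.default_rng(2)
# Synthetic monitoring points: downwind x, crosswind y, height z, wind speed u
X_raw = rng.uniform([50.0, 0.0, 1.0, 2.0], [500.0, 30.0, 2.0, 8.0], size=(200, 4))
g = np.array([gaussian_plume(1.0, u, y, z, h=10.0,
                             sigma_y=0.08 * x, sigma_z=0.06 * x)
              for x, y, z, u in X_raw])
c_obs = g * rng.lognormal(0.0, 0.3, size=g.size)     # synthetic "measurements"

# Gaussian-MLA idea: the Gaussian-model output becomes the input feature, so the
# machine learning model only learns the correction from idealized model to reality.
model = SVR(kernel="rbf", C=10.0).fit(np.log(g).reshape(-1, 1), np.log(c_obs))
print(np.exp(model.predict(np.log(g[:5]).reshape(-1, 1))))
```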

  20. Synchronising data sources and filling gaps by global hydrological modelling

    Science.gov (United States)

    Pimentel, Rafael; Crochemore, Louise; Hasan, Abdulghani; Pineda, Luis; Isberg, Kristina; Arheimer, Berit

    2017-04-01

    The advances in remote sensing in the last decades, combined with the creation of different open hydrological databases, have generated a very large amount of useful information for global hydrological modelling. Working with this large number of datasets to set up a global hydrological model poses challenges such as multiple data formats and large heterogeneity in spatial and temporal resolutions. Different initiatives have made efforts to homogenize some of these data sources, i.e. GRDC (Global Runoff Data Center), HYDROSHEDS (SHuttle Elevation Derivatives at multiple Scales) and GLWD (Global Lake and Wetland Database) for runoff, watershed delineation and water bodies, respectively. However, not all the related issues are covered or homogeneously solved at the global scale, and new information continuously becomes available to complete the current sources. This work presents synchronising efforts to make use of the different global data sources needed to set up the semi-distributed hydrological model HYPE (Hydrological Predictions for the Environment) at the global scale. These data sources included: topography for watershed delineation, gauging stations of river flow, and the extent of lakes, flood plains and land cover classes. A new database with approximately 100 000 subbasins, with an average area of 1000 km2, was created. Subbasin delineation was done by combining the Global Width Database for Large Rivers (GWD-LR), SRTM high-resolution elevation data and a number of forced points of interest (gauging stations of river flow, lakes, reservoirs, urban areas, nuclear plants and areas with high risk of flooding). Regarding flow data, the locations of GRDC stations were checked or placed along the river network when necessary, and completed with available information from national water services in data-sparse regions. A screening of duplicate stations and associated time series was necessary to efficiently combine the two types of data sources. A total of about 21 000 stations were

  1. Modelling the plasma plume of an assist source in PIAD

    Science.gov (United States)

    Wauer, Jochen; Harhausen, Jens; Foest, Rüdiger; Loffhagen, Detlef

    2016-09-01

    Plasma ion assisted deposition (PIAD) is a technique commonly used to produce high-precision optical interference coatings. Knowledge of plasma properties is most often limited to dedicated scenarios without film deposition. Approaches have been made to gather information on the process plasma in situ in order to detect drifts, which are suspected to limit the repeatability of the resulting layer properties. Present efforts focus on radiance monitoring of the plasma plume of an Advanced Plasma Source (APSpro, Bühler) by optical emission spectroscopy, to provide the basis for advanced plasma control. In this contribution, modelling results for the plume region are presented to interpret these experimental data. In the framework of the collisional radiative model used, 15 excited neutral argon states in the plasma are considered. The computed species densities show good consistency with the measured optical emission of various argon 2p-1s transitions. This work was funded by BMBF under grant 13N13213.

  2. Italian Case Studies Modelling Complex Earthquake Sources In PSHA

    Science.gov (United States)

    Gee, Robin; Peruzza, Laura; Pagani, Marco

    2017-04-01

    This study presents two examples of modelling complex seismic sources in Italy, carried out in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred around Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity and other seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, in which we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of October 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data and field evidence associated with the August mainshock. Earthquake activity rates during the very first weeks after the deadly earthquake were used to calibrate an Omori-Utsu decay curve, and the magnitude distribution of aftershocks is assumed to follow a Gutenberg-Richter distribution. We apply uniform and non-uniform spatial distributions of the seismicity across the fault source, by modulating the rates as a decreasing function of distance from the mainshock. The hazard results are computed for short exposure periods (1 month, before the occurrence of the October earthquakes) and compared to the background hazard given by law (MPS04), and to observations at some reference sites. We also show the results of
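
    The Omori-Utsu calibration step can be sketched with a standard least-squares fit; the daily aftershock counts below are synthetic placeholders, not the Amatrice sequence data.

```python
import numpy as np
from scipy.optimize import curve_fit

def omori_utsu(t, K, c, p):
    """Aftershock rate (events/day) at time t (days) after the mainshock."""
    return K / (t + c) ** p

# Hypothetical daily aftershock counts for the first three weeks (not real data)
days = np.arange(1, 22)
counts = np.array([120, 85, 64, 50, 44, 37, 33, 28, 26, 24,
                   21, 20, 18, 17, 16, 15, 14, 13, 13, 12, 11])

(K, c, p), _ = curve_fit(omori_utsu, days, counts, p0=(100.0, 0.5, 1.0))

# Expected aftershock count for a one-month exposure window starting at day 30,
# the time-dependent activity rate that feeds the APSHA fault source (p != 1)
t0, t1 = 30.0, 60.0
n_expected = K * ((t1 + c) ** (1.0 - p) - (t0 + c) ** (1.0 - p)) / (1.0 - p)
print(K, c, p, n_expected)
```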

  3. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to approximately 60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model. Key words: solar wind - Sun: corona - Sun: magnetic topology

  4. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, Spiro K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2010-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind has large angular width, up to approximately 60 degrees, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind and magnetic field for a time period preceding the August 1, 2008 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere, and propose further tests of the model.

  5. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the processing bottleneck of a single controller and reduces the computational complexity. 2) The vector switches (VS) developed for the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.

  6. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the processing bottleneck of a single controller and reduces the computational complexity. 2) The vector switches (VS) developed for the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925
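
    The vector-address forwarding principle described in the two records above can be sketched in a few lines of Python; the class names, port numbers and three-switch topology are invented for illustration and carry none of the NetFPGA implementation detail.

```python
# Minimal sketch of vector-address (VA) source routing: the packet label is the
# ordered list of output ports, and each vector switch pops the head and forwards.
from collections import deque

class VectorSwitch:
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports            # port id -> neighbour switch (or host)

    def forward(self, packet):
        """No TCAM lookup: the next hop is read directly from the VA label."""
        port = packet["va"].popleft()
        return self.ports[port]

# Controller side: the hierarchical controllers compute the full path once,
# encode it as a VA, and no per-switch flow entries need to be installed.
s3 = VectorSwitch("edge-3", {0: "host-B"})
s2 = VectorSwitch("core-2", {1: s3})
s1 = VectorSwitch("edge-1", {2: s2})

packet = {"payload": "...", "va": deque([2, 1, 0])}   # path: s1 -> s2 -> s3 -> host-B
hop = s1
while isinstance(hop, VectorSwitch):
    hop = hop.forward(packet)
print("delivered to", hop)
```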

  7. Structure Modeling and Validation applied to Source Physics Experiments (SPEs)

    Science.gov (United States)

    Larmat, C. S.; Rowe, C. A.; Patton, H. J.

    2012-12-01

    The U.S. Department of Energy's Source Physics Experiments (SPEs) comprise a series of small chemical explosions used to develop a better understanding of seismic energy generation and wave propagation for low-yield explosions. In particular, we anticipate improved understanding of the processes through which shear waves are generated by the explosion source. Three tests, of 100, 1000 and 1000 kg yield respectively, were detonated in the same emplacement hole and recorded on the same networks of ground motion sensors in the granites of Climax Stock at the Nevada National Security Site. We present results for the analysis and modeling of seismic waveforms recorded close-in on five linear geophone lines extending radially from ground zero, with offsets from 100 to 2000 m and station spacing of 100 m. These records exhibit azimuthal variations in P-wave arrival times and in the phase velocity, spreading and attenuation properties of high-frequency Rg waves. We construct a 1D seismic body-wave model starting from a refraction analysis of P-waves and adjust it to match time-domain and frequency-domain dispersion measurements of Rg waves between 2 and 9 Hz. We address the shallowest part of the structure using the arrival times recorded by near-field accelerometers residing within 200 m of the shot hole. We additionally perform a 2D modeling study with the Spectral Element Method (SEM) to investigate which structural features are most responsible for the observed variations, in particular the anomalously weak amplitude decay in some directions of this topographically complicated locality. We find that a thin, near-surface weathered layer of varying thickness and low wave speeds plays a major role in shaping the observed waveforms. We anticipate performing full 3D modeling of the seismic near-field through analysis and validation of waveforms on the five radial receiver arrays.

  8. Global Modeling of the Oceanic Source of Organic Aerosols

    Directory of Open Access Journals (Sweden)

    Stelios Myriokefalitakis

    2010-01-01

    Full Text Available The global marine organic aerosol budget is investigated with a 3-dimensional chemistry-transport model considering recently proposed parameterisations of primary marine organic aerosol (POA) and secondary organic aerosol (SOA) formation from the oxidation of marine volatile organic compounds. MODIS and SeaWiFS satellite data of Chlorophyll-a, together with ECMWF solar incoming radiation, wind speed, and temperature, drive the oceanic emissions in the model. Based on the adopted parameterisations, the SOA and submicron POA marine sources are evaluated at about 5 Tg yr−1 (∼1.5 Tg C yr−1) and 7 to 8 Tg yr−1 (∼4 Tg C yr−1), respectively. The computed marine SOA originates from dimethylsulfide oxidation (∼78%), potentially formed dialkyl amine salts (∼21%), and marine hydrocarbon oxidation (∼0.1%). Comparison of the calculations with observations indicates an additional marine source of soluble organic carbon that could be partially accounted for by chemical ageing of marine POA.

  9. A Model for the Sources of the Slow Solar Wind

    CERN Document Server

    Antiochos, S K; Titov, V S; Lionello, R; Linker, J A

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spat...

  10. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, Spiro K.; Mikic, Z.; Lionello, R.; Titov, V.; Linker, J.

    2010-05-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates at the open-closed field boundary layer, but it also has large angular width, up to 40 degrees. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We calculate with high numerical resolution, the quasi-steady solar wind and magnetic field for a Carrington rotation centered about the August 1, 2008 total solar eclipse. Our numerical results demonstrate that, at least for this time period, a web of separatrices (S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere, and propose further tests of the model. This work was supported, in part, by the NASA HTP, TR&T and SR&T programs.

  11. Experimentally validated pencil beam scanning source model in TOPAS.

    Science.gov (United States)

    Lin, Liyong; Kang, Minglei; Solberg, Timothy D; Ainsley, Christopher G; McDonough, James E

    2014-11-21

    The presence of a low-dose envelope, or 'halo', in the fluence profile of a proton spot can increase the output of a pencil beam scanning field by over 10%. This study evaluated whether the Monte Carlo simulation code TOPAS 1.0-beta 8, based on Geant4.9.6 with its default physics list, can predict the spot halo at depth in phantom by incorporating a halo model within the proton source distribution. Proton sources were modelled using three 2D Gaussian functions, optimized until the simulated spot profiles matched measurements at the phantom surface out to a radius of 100 mm. Simulations were subsequently compared with profiles measured using EBT3 film in Solidwater® phantoms at various depths for 100, 115, 150, 180, 210 and 225 MeV proton beams. Simulations predict measured profiles within a 1 mm distance to agreement for 2D profiles extending to the 0.1% isodose, and within 1 mm/1% Gamma criteria over the integrated curve of the spot profile as a function of radius. For isodose lines beyond 0.1% of the central spot dose, the simulated primary spot sigma is smaller than the measurement by up to 15%, and can differ by over 1 mm. The choice of particle interaction algorithm and phantom material was found to cause ~1 mm range uncertainty, a maximal 5% (0.3 mm) difference in spot sigma, and maximal 1 mm and ~2 mm distances to agreement for isodoses above and below the 0.1% level, respectively. Based on these observations, the selection of the physics model and the use of Solidwater® as a water-replacement material in simulation and measurement should be made with caution.
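
    The three-Gaussian source parameterization can be sketched as follows; the weights and sigmas below are invented for illustration and are not the fitted values from the study.

```python
import numpy as np

def spot_fluence(r, components):
    """Radially symmetric spot as a sum of 2D Gaussians: a primary core plus
    broad halo terms. components = [(weight, sigma_mm), ...]; the values used
    below are invented for illustration, not the fitted ones from the study."""
    f = np.zeros_like(r)
    for w, s in components:
        f += w / (2 * np.pi * s**2) * np.exp(-r**2 / (2 * s**2))
    return f

components = [(0.92, 4.0), (0.06, 12.0), (0.02, 40.0)]     # core + two halo terms
r = np.linspace(0.0, 100.0, 1000)
f = spot_fluence(r, components)

# Integrated profile as a function of radius (the quantity compared against
# film measurements); the tail shows the halo share that raises field output.
dr = r[1] - r[0]
cumulative = np.cumsum(2 * np.pi * r * f) * dr
idx = np.searchsorted(r, 10.0)
print("fluence fraction beyond 10 mm:", 1.0 - cumulative[idx])
```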

  12. Deposition parameterizations for the Industrial Source Complex (ISC3) model

    Energy Technology Data Exchange (ETDEWEB)

    Wesely, Marvin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Doskey, Paul V. [Argonne National Lab. (ANL), Argonne, IL (United States); Shannon, J. D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2002-06-01

    Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (downward fluxes divided by concentrations at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, in which micrometeorological formulas describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory results and theoretical considerations has been developed, providing a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
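
    A minimal sketch of the conventional resistance scheme follows, assuming a simple three-pathway canopy resistance; the resistance values are illustrative and are not the ISC3 lookup-table defaults.

```python
def deposition_velocity(ra, rb, rs, rm, rcut, rsoil):
    """Resistance analogy for gaseous dry deposition: vd = 1 / (Ra + Rb + Rc),
    with the canopy resistance Rc composed of parallel stomatal(+mesophyll),
    cuticular and soil pathways. All values in s/m, purely illustrative."""
    rc = 1.0 / (1.0 / (rs + rm) + 1.0 / rcut + 1.0 / rsoil)
    return 1.0 / (ra + rb + rc)

# Example: moist, sunlit midsummer canopy vs a drought-stressed canopy, where
# low relative available soil moisture closes the stomata (raises rs)
vd_moist = deposition_velocity(ra=30.0, rb=20.0, rs=100.0, rm=10.0,
                               rcut=2000.0, rsoil=1000.0)
vd_dry   = deposition_velocity(ra=30.0, rb=20.0, rs=800.0, rm=10.0,
                               rcut=2000.0, rsoil=1000.0)
print(vd_moist, vd_dry)   # stomatal closure under low soil moisture lowers vd
```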

  13. Numerical modeling of a high power terahertz source in Shanghai

    Institute of Scientific and Technical Information of China (English)

    DAI Jin-Hua; DENG Hai-Xiao; DAI Zhi-Min

    2012-01-01

    On the basis of an energy-recovery linac, a terahertz source with the potential for kilowatts of average power is proposed in Shanghai, which will serve as an effective tool for the material and biological sciences. In this paper, the physical design of two free electron laser (FEL) oscillators, covering the frequency ranges of 2-10 THz and 0.5-2 THz respectively, is presented. By using three-dimensional, time-dependent numerical modeling with GENESIS in combination with a paraxial optical propagation code, the THz oscillator performance, the detuning effects, and the tolerance requirements on the electron beam, the undulator field and the cavity alignment are given.

  14. Modeling of water radiolysis at spallation neutron sources

    Energy Technology Data Exchange (ETDEWEB)

    Daemen, L.L.; Kanner, G.S.; Lillard, R.S.; Butt, D.P.; Brun, T.O.; Sommer, W.F.

    1998-12-01

    In spallation neutron sources, neutrons are produced when a beam of high-energy particles (e.g., 1 GeV protons) collides with a (water-cooled) heavy metal target such as tungsten. The resulting spallation reactions produce a complex radiation environment (which differs from typical conditions at fission and fusion reactors), leading to the radiolysis of water molecules. Most water radiolysis products are short-lived but extremely reactive. When formed in the vicinity of the target surface, they can react with metal atoms, thereby contributing to target corrosion. The authors will describe the results of calculations and experiments performed at Los Alamos to determine the impact of water radiolysis on target corrosion in the spallation radiation environment. The computational methodology relies on the Los Alamos radiation transport code, LAHET, to determine the radiation environment, and the AEA code, FACSIMILE, to model reaction-diffusion processes.

  15. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine whether these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform, to post NASA challenges from each Center for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  16. [Spatiotemporal variation of water source supply service in Three Rivers Source Area of China based on InVEST model].

    Science.gov (United States)

    Pan, Tao; Wu, Shao-Hong; Dai, Er-Fu; Liu, Yu-Jie

    2013-01-01

    The Three Rivers Source Area is the largest ecological function region for water source supply and conservation in China. Affected by a variety of driving factors, the ecosystems in this region are seriously degraded, with definite impacts on the water source supply service. This paper examines the variation patterns of precipitation and runoff coefficient from 1981 to 2010, quantitatively estimates the water source supply of the ecosystems in the region from 1980 to 2005 based on the InVEST model, and analyzes the spatiotemporal variation pattern of the water source supply in different periods and its causes. In 1981-2010, precipitation in the Three Rivers Source Area decreased and then increased, while the precipitation runoff coefficient showed an obvious decreasing trend, suggesting a reduced capability of runoff water source supply in this region. The potential evapotranspiration showed a slight declining trend, at a rate of -0.226 mm a-1. In 1980-2005, the water source supply of the region showed an overall decreasing trend, which was most obvious in the Yellow River Source Area. The spatiotemporal variation of the water source supply in the Three Rivers Source Area resulted from the combined effects of climate and land use change, with the climate factors affecting the water source supply mainly through precipitation and potential evapotranspiration. Climate and land use change induced ecosystem degradation and underlying surface change, which could be the main driving forces of the declining water source supply in the Three Rivers Source Area.
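
    The water yield calculation at the core of such an InVEST application follows a Budyko-type curve; a minimal per-pixel sketch is given below, after the older (Zhang-curve) InVEST water yield formulation, with the seasonality constant and input values as illustrative assumptions rather than data from the paper.

```python
import numpy as np

def invest_water_yield(P, PET, awc, Z=5.0):
    """Annual water yield per pixel, Y = (1 - AET/P) * P, with AET/P from a
    Budyko/Zhang-type curve as in older InVEST water yield versions.
    P and PET in mm/yr, plant-available water content awc in mm; Z is the
    seasonality constant. All values below are illustrative assumptions."""
    w = Z * awc / P + 1.25                 # Budyko dryness shape parameter
    phi = PET / P                          # dryness index
    aet_over_p = (1.0 + w * phi) / (1.0 + w * phi + 1.0 / phi)
    return (1.0 - aet_over_p) * P

P = np.array([450.0, 520.0, 610.0])        # three example pixels (mm/yr)
PET = np.array([800.0, 750.0, 700.0])
awc = np.array([60.0, 80.0, 100.0])
print(invest_water_yield(P, PET, awc))     # water source supply per pixel (mm/yr)
```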

  17. Pan-European modelling of riverine nutrient concentrations - spatial patterns, source detection, trend analyses, scenario modelling

    Science.gov (United States)

    Bartosova, Alena; Arheimer, Berit; Capell, Rene; Donnelly, Chantal; Strömqvist, Johan

    2016-04-01

    Nutrient transport models are important tools for large-scale assessments of macro-nutrient fluxes (nitrogen, phosphorus) and can thus serve as support tools for environmental assessment and management. Results from model applications over large areas, i.e. from major river basin to continental scales, can fill a gap where monitoring data are not available. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, which is based on open data sources. We investigate the ability of the E-HYPE model to replicate the spatial and temporal variations found in observed time series of riverine N and P concentrations, and illustrate the model's usefulness for nutrient source detection, trend analyses, and scenario modelling. The results show spatial patterns in N concentration in rivers across Europe which can further our understanding of nutrient issues across the European continent. E-HYPE results show hot spots with the highest concentrations of total nitrogen in Western Europe along the North Sea coast. Source apportionment was performed to rank sources of nutrient inflow from land to sea along the European coast. An integrated dynamic model such as E-HYPE also allows us to investigate the impacts of climate change and programmes of measures, which was illustrated in a couple of scenarios for the Baltic Sea. Comparing model results with observations shows large uncertainty in many of the data sets and in the assumptions used in the model set-up, e.g. point source release estimates. However, evaluation of model performance at a number of measurement sites in Europe shows that mean N concentration levels are generally well simulated. P levels are less well predicted, which is expected, as the variability of P concentrations in both time and space is higher. Comparing model performance with model set-ups using local data for the Weaver River (UK) did not result in systematically better model performance, which highlights the complexity of model

  18. Empirical testing of earthquake recurrence models at source and site

    Science.gov (United States)

    Albarello, D.; Mucciarelli, M.

    2012-04-01

    Several probabilistic procedures are presently available for seismic hazard assessment (PSHA), based on time-dependent or time-independent models. The result is a number of different outcomes (hazard maps), and to take into account the inherent (epistemic) uncertainty, the outcomes of alternative procedures are combined in the frame of logic-tree approaches by scoring each procedure as a function of its reliability. This reliability is deduced by evaluating ex-ante (by expert judgement) each element contributing to the relevant PSH computational procedure. This approach appears unsatisfactory, also because the value of each procedure depends both on the reliability of each constituent element and on that of their combination: checking the correctness of single elements thus does not allow evaluation of the correctness of the procedure as a whole. Alternative approaches should be based 1) on the ex-post empirical testing of the considered PSH computational models and 2) on the validation of the assumptions underlying the concurrent models. The first goal can be achieved by comparing the probabilistic forecasts provided by each model with empirical evidence on seismic occurrences (e.g., strong-motion data or macroseismic intensity evaluations) during selected control periods of dimension comparable with the relevant exposure time. Regarding the validation of assumptions, critical issues are the size of the minimum data set necessary to distinguish processes with or without memory, the reliability of mixed data on seismic sources (i.e. historical and palaeoseismological), and the completeness of fault catalogues. Some results obtained by the application of these testing procedures in Italy will be briefly outlined.

  19. Validation and modeling of earthquake strong ground motion using a composite source model

    Science.gov (United States)

    Zeng, Y.

    2001-12-01

    Zeng et al. (1994) proposed a composite source model for synthetic strong ground motion prediction. In that model, the source is taken as a superposition of circular subevents with a constant stress drop. The number of subevents and their radii follow a power-law distribution equivalent to the Gutenberg-Richter magnitude-frequency relation for seismicity. The heterogeneous nature of the composite source model is characterized by its maximum subevent size and subevent stress drop. As rupture propagates through each subevent, it radiates a Brune pulse or a Sato-Hirasawa circular crack pulse. The method has proved successful in generating realistic strong motion seismograms in comparison with observations from earthquakes in California, the eastern US, Guerrero (Mexico), Turkey and India. The model has since been improved by including scattered waves from the small-scale heterogeneity structure of the earth, site-specific ground motion prediction using weak-motion site amplification, and nonlinear soil response using geotechnical engineering models. Last year, I introduced an asymmetric circular rupture to improve the subevent source radiation and to provide a rupture model consistent between the overall fault rupture process and its subevents. In this study, I revisit the Landers, Loma Prieta, Northridge, Imperial Valley and Kobe earthquakes using the improved source model. The results show that the improved subevent ruptures better capture the effects of rupture directivity compared with our previous studies. Additional validation includes comparison of synthetic strong ground motions with the observed ground accelerations from the Chi-Chi (Taiwan) and Izmit (Turkey) earthquakes. Since the method has evolved considerably since it was first proposed, I will also compare results between each major modification of the model and demonstrate its backward compatibility with its earlier simulation procedures.
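
    The two defining ingredients of the composite source model, a power-law subevent size distribution and a Brune pulse per subevent, can be sketched as follows; the fractal dimension, radius bounds, stress drop and shear wave speed are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Subevent radii from a truncated power-law (fractal) size distribution,
# N(>r) ~ r^-D, consistent with Gutenberg-Richter; D and bounds illustrative.
D, r_min, r_max, n_sub = 2.0, 50.0, 2000.0, 500        # radii in metres
u = rng.uniform(size=n_sub)
radii = (r_min**-D - u * (r_min**-D - r_max**-D)) ** (-1.0 / D)

def brune_spectrum(f, a, stress_drop=3.0e6, beta=3500.0):
    """Omega-square displacement spectrum of one constant-stress-drop subevent."""
    fc = 0.372 * beta / a                     # Brune corner frequency (Hz)
    m0 = (16.0 / 7.0) * stress_drop * a**3    # seismic moment (N m)
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 2, 200)
total = sum(brune_spectrum(f, a) for a in radii)       # subevent superposition
print(total[:5])
```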

  20. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    Science.gov (United States)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contributions to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for primary source contributions at urban background levels. Chemical transport models give better estimates of secondary (inorganic) pollutants and are capable of providing gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor-oriented models (receptor models) and chemical transport models (source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models show good performance when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities to estimate yearly source contributions, while the estimation of source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contributions of some single sources when compared with receptor models. For receptor models, the most critical source category is industry, probably due to the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the

  1. Assessing Model Characterization of Single Source Secondary Pollutant Impacts Using 2013 SENEX Field Study Measurements.

    Science.gov (United States)

    Baker, Kirk R; Woody, Matthew C

    2017-03-15

    Aircraft measurements made downwind from specific coal-fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single-source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows patterns similar to ambient-based estimates of an increasing fraction of PM2.5 sulfate ion relative to the sum of SO2 and PM2.5 sulfate ion with distance from the source. The model was less consistent in capturing downwind ambient-based trends in the conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular with subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single-source O3 impacts. The model estimated O3 production from these sources, but the estimates were often lower than ambient-based source production. The downwind transect ambient measurements, in particular of secondary PM2.5 and O3, have some level of contribution from other sources, which makes direct comparison with modeled source contributions challenging. Model source attribution results suggest contributions to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source.

  2. Impacts of DEM uncertainties on critical source areas identification for non-point source pollution control based on SWAT model

    Science.gov (United States)

    Xu, Fei; Dong, Guangxia; Wang, Qingrui; Liu, Lumeng; Yu, Wenwen; Men, Cong; Liu, Ruimin

    2016-09-01

    The impacts of different digital elevation model (DEM) resolutions, sources and resampling techniques on nutrient simulations using the Soil and Water Assessment Tool (SWAT) model have not been well studied. The objective of this study was to evaluate the sensitivity of the identification of non-point source (NPS) critical source areas (CSAs), based on nutrient loads in the SWAT model, to DEM resolutions (from 30 m to 1000 m), sources (ASTER GDEM2, SRTM and Topo-DEM) and resampling techniques (nearest neighbor, bilinear interpolation, cubic convolution and majority). The Xiangxi River, one of the main tributaries of the Three Gorges Reservoir in China, was selected as the study area. The following findings were obtained: (1) Elevation and slope extracted from the DEMs were more sensitive to DEM resolution changes. Compared with the results of the 30 m DEM, the 1000 m DEM underestimated the elevation and slope by 104 m and 41.57°, respectively. (2) The numbers of subwatersheds and hydrologic response units (HRUs) were considerably influenced by DEM resolution, but the total nitrogen (TN) and total phosphorus (TP) loads of each subwatershed showed higher correlations with the different DEM sources. (3) DEM resolutions and sources had larger effects on CSA identification, and TN and TP CSAs showed different responses to DEM uncertainties. TN CSAs were more sensitive to resolution changes, exhibiting six distribution patterns across the DEM resolutions. TP CSAs were sensitive to source and resampling technique changes, exhibiting three distribution patterns for DEM sources and two distribution patterns for DEM resampling techniques. DEM resolution and source are thus the two most sensitive DEM-related SWAT model choices that must be considered when nutrient CSAs are identified.

  3. A New Model for the Genesis of Natural Gases--Multi-source Overlap, Multi-stage Continuity, Type Controlled by Main Source and Nomenclature by Main Stage (Ⅰ)--Multi-source Overlap and Type Controlled by Main Source

    Institute of Scientific and Technical Information of China (English)

    徐永昌; 沈平

    1994-01-01

Based on the geochemical studies of natural gases over the past ten years in China, the authors have proposed a new model for their genesis: multi-source overlap, multi-stage continuity, type controlled by the main source, and nomenclature by the main stage. Multi-source refers to a diversity of material sources involved in the formation of natural gases, including abiogenic and biogenic material sources. In regard to biogenic sources, either oil-generating or coal-generating organic matter can produce gaseous hydrocarbon reservoirs of commercial importance. Generally, natural gases originating from these sources can overlap to form gas reservoirs. Under specific circumstances, mantle-source abiogenic gases can overlap biogenic gases to form gas reservoirs. In nature, natural gases predominated by gaseous hydrocarbons may be formed from a single end-member source. However, multi-source overlap is more typical of the genesis of natural gases.

  4. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    Science.gov (United States)

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-05

Gas dispersion models are important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF) and back propagation (BP) neural networks and the support vector machine (SVM) model can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Hence, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, has been presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
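    A minimal sketch of the hybrid idea follows: the classic Gaussian plume prediction is used as the single input feature of an SVM regressor that learns the mapping to (here synthetic) observed concentrations. The dispersion-coefficient curves, source strength and the form of the synthetic "measurement" distortion are all illustrative assumptions, not the parameterization used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

def gaussian_plume(q, u, x, y, z, h):
    """Ground-level Gaussian plume concentration (ug/m3) for a continuous
    point source; the sigma_y / sigma_z curves below are illustrative."""
    sy = 0.08 * x / np.sqrt(1 + 1e-4 * x)
    sz = 0.06 * x / np.sqrt(1 + 1.5e-3 * x)
    return 1e6 * (q / (2 * np.pi * u * sy * sz)
                  * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - h)**2 / (2 * sz**2))
                     + np.exp(-(z + h)**2 / (2 * sz**2))))

rng = np.random.default_rng(1)
x = rng.uniform(50, 2000, 300)        # downwind distance (m)
y = rng.uniform(-200, 200, 300)       # crosswind offset (m)
c_gauss = gaussian_plume(q=2.0, u=4.0, x=x, y=y, z=0.0, h=30.0)
# synthetic "observations": effects the analytic model misses, plus noise
c_obs = c_gauss * (1 + 0.3 * np.tanh(y / 100.0)) + 2.0 * rng.standard_normal(300)

hybrid = SVR(C=100.0).fit(c_gauss.reshape(-1, 1), c_obs)
print("hybrid R^2 on training data:",
      round(hybrid.score(c_gauss.reshape(-1, 1), c_obs), 3))
```

    Because the physics-based prediction compresses the raw monitoring parameters into one informative feature, the regressor has far less to learn than a network fed the raw inputs, which is the paper's main argument.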

  5. PIC modeling of negative ion sources for fusion

    Science.gov (United States)

    Taccogna, F.; Minelli, P.

    2017-01-01

This work represents the first attempt to model the full-size ITER negative ion source prototype, including the expansion, extraction and part of the acceleration regions, while keeping the resolution fine enough to resolve every single aperture of the extraction grid. The model consists of a 2.5-dimensional Particle-in-Cell/Monte Carlo Collision representation of the plane perpendicular to the filter field lines. Both the magnetic filter and electron deflection fields have been included. A negative ion current density of j_H- = 500 A m^-2 produced by neutral conversion from the plasma grid is used as a fixed parameter, while negative ions produced by electron dissociative attachment of vibrationally excited molecules and by ionic conversion on the plasma grid are self-consistently simulated. Results show the non-ambipolar character of the transport in the expansion region, driven by electron magnetic drifts in the plane perpendicular to the filter field. It induces a top-bottom asymmetry detected up to the extraction grid, which in turn leads to a tilted positive ion flow hitting the plasma grid and a tilted negative ion flow emitted from the plasma grid. As a consequence, the plasma structure is not uniform around the single aperture: the meniscus assumes the form of an asymmetric lobe, and a deeper potential well is detected on one side of the aperture relative to the other. Therefore, the surface-produced contribution to the negative ion extraction is not equally distributed between the two sides of the aperture but comes mainly from the lower side of the grid, giving an asymmetrical current distribution in the single beamlet.

  6. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field that treats the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows better agreement with the measurements. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert-Zernike theorem, showing very good agreement, which strongly supports the assumption that a large-core MMF can be considered a quasi-homogeneous source.

  7. Two States CBR Modeling of Data Source in Dynamic Traffic Monitoring Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Real traffic information was analyzed for its statistical characteristics and approximated as a Gaussian time series. A data source model, called the two-state constant bit rate (TSCBR) model, was proposed for dynamic traffic monitoring sensor networks. Analysis of the autocorrelation of the models shows that the proposed TSCBR model closely matches the statistical characteristics of the real data source. To further verify the validity of the TSCBR data source model, the performance metrics of power consumption and network lifetime were studied in the evaluation of the sensor media access control (SMAC) algorithm. The simulation results show that, compared with traditional data source models, the TSCBR model can significantly improve the accuracy of algorithm evaluation.
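    The model class is easy to prototype: a two-state Markov chain in which each state emits at its own constant rate yields the slowly decaying autocorrelation that distinguishes bursty monitoring traffic from a memoryless source. The rates and state-holding probabilities below are placeholders, not the values fitted in the paper.

```python
import numpy as np

def tscbr(n, rate=(1.0, 4.0), stay=(0.95, 0.90), seed=0):
    """Two-state constant-bit-rate source: in state k the source emits
    rate[k] packets per slot; the state persists with probability stay[k]."""
    rng = np.random.default_rng(seed)
    out, s = np.empty(n), 0
    for t in range(n):
        out[t] = rate[s]
        if rng.random() > stay[s]:      # leave the current state
            s = 1 - s
    return out

def autocorr(x, maxlag=8):
    """Empirical autocorrelation at lags 0..maxlag-1."""
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k or None], x[k:]) / c0 for k in range(maxlag)])

traffic = tscbr(20000)
print(np.round(autocorr(traffic), 3))   # slow geometric decay: a bursty source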

  8. Research on modeling of heat source for electron beam welding fusion-solidification zone

    Institute of Scientific and Technical Information of China (English)

    Wang Yajun; Fu Pengfei; Guan Yongjun; Lu Zhijun; Wei Yintao

    2013-01-01

In this paper, the common point and linear heat source models used in the numerical simulation of electron beam welding (EBW) are summarized and introduced. A combined point-linear heat source model is put forward to simulate the welding temperature fields of EBW and to predict the weld shape. The model parameters, which include the ratio of the point heat source to the linear heat source, Qpr, and the distribution of the linear heat source, Lr, are put forward and regulated in the combined model. Based on the combined model, the welding temperature fields of EBW were investigated. The results show that the predicted weld shapes conform to the actual ones, the temperature fields simulated with the combined point-linear heat source model are reasonable and correct, and the typical weld shapes are obtained.
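    A sketch of such a combined source term is given below: a fraction Qpr of the beam power goes into a Gaussian surface (point-type) source, and the remainder goes into a line source spread over a fraction Lr of the penetration depth. The functional forms and dimensions are assumed for illustration; they are not the paper's exact expressions.

```python
import numpy as np

def combined_heat_source(x, y, z, Q=3000.0, Qpr=0.4, Lr=0.8,
                         r0=1e-3, depth=20e-3, dz=1e-3):
    """Volumetric heat input q(x, y, z) in W/m^3 for an EBW simulation:
    Qpr*Q in a Gaussian surface disc, (1 - Qpr)*Q in a uniform line source
    over a depth Lr*depth (forms assumed for illustration)."""
    r2 = x**2 + y**2
    point = (Qpr * Q / (np.pi * r0**2 * dz)) * np.exp(-r2 / r0**2) \
            * (np.abs(z) < dz / 2)
    L = Lr * depth
    line = ((1 - Qpr) * Q / (np.pi * r0**2 * L)) * (r2 < r0**2) \
           * (z <= 0) * (z >= -L)
    return point + line

# sanity check: numerical integration should recover roughly the beam power Q
g = np.linspace(-5e-3, 5e-3, 101)
zg = np.linspace(-25e-3, 5e-3, 151)
X, Y, Z = np.meshgrid(g, g, zg, indexing="ij")
dV = (g[1] - g[0])**2 * (zg[1] - zg[0])
print("integrated power ~", round(float((combined_heat_source(X, Y, Z) * dV).sum()), 1), "W")
```

    In a finite element run this q(x, y, z) would be evaluated at the integration points of the mesh; Qpr and Lr are then the two knobs regulated to match measured weld cross-sections.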

  9. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  10. Sources

    OpenAIRE

    2015-01-01

MANUSCRIPT SOURCES Archives nationales Taille rolls (rôles de taille) 1768/71 Z1G-344/18 Aulnay Z1G-343a/02 Gennevilliers Z1G-340/01 Ivry Z1G-340/05 Orly Z1G-334c/09 Saint-Remy-lès-Chevreuse Z1G-344/18 Sevran Z1G-340/05 Thiais 1779/80 Z1G-391a/18 Aulnay Z1G-380/02 Gennevilliers Z1G-385/01 Ivry Z1G-387b/05 Orly Z1G-388a/09 Saint-Remy-lès-Chevreuse Z1G-391a/18 Sevran Z1G-387b/05 Thiais 1788/89 Z1G-451/18 Aulnay Z1G-452/21 Chennevières Z1G-443b/02 Gennevilliers Z1G-440a/01 Ivry Z1G-452/17 Noiseau Z1G-445b/05 ...

  11. Versatile Markovian models for networks with asymmetric TCP sources

    NARCIS (Netherlands)

    Foreest, van N.D.; Haverkort, B.R.; Mandjes, M.R.H.; Scheinhardt, W.R.W.

    2004-01-01

In this paper we use Stochastic Petri Nets (SPNs) to study the interaction of multiple TCP sources that share one or two buffers, thereby considerably extending earlier work. We first consider two sources sharing a buffer and investigate the consequences of two popular assumptions for the loss process ...

  12. Development of PM2.5 source impact spatial fields using a hybrid source apportionment air quality model

    Directory of Open Access Journals (Sweden)

    C. E. Ivey

    2015-01-01

An integral part of air quality management is knowledge of the impact of pollutant sources on ambient concentrations of particulate matter (PM). There is also a growing desire to directly use source impact estimates in health studies; however, source impacts cannot be directly measured. Several limitations are inherent in most source apportionment methods, which has led to the development of a novel hybrid approach that is used to estimate source impacts by combining the capabilities of receptor modeling (RM) and chemical transport modeling (CTM). The hybrid CTM-RM method calculates adjustment factors to refine the CTM-estimated impact of sources at monitoring sites using pollutant species observations and the results of CTM sensitivity analyses, though it does not directly generate spatial source impact fields. The CTM used here is the Community Multi-Scale Air Quality (CMAQ) model, and the RM approach is based on the Chemical Mass Balance model. This work presents a method that utilizes kriging to spatially interpolate source-specific impact adjustment factors to generate revised CTM source impact fields from the CTM-RM method results, and is applied to January 2004 over the continental United States. The kriging step is evaluated using data withholding and by comparing results to data from alternative networks. Directly applied and spatially interpolated hybrid adjustment factors at withheld monitors had a correlation coefficient of 0.89, a linear regression slope of 0.83 ± 0.02, and an intercept of 0.14 ± 0.02. Refined source contributions reflect current knowledge of PM emissions (e.g., significant differences in biomass burning impact fields). Concentrations of 19 species and total PM2.5 mass were reconstructed for withheld monitors using directly applied and spatially interpolated hybrid adjustment factors. The mean concentrations of total PM2.5 for withheld monitors were 11.7 (±8.3), 16.3 (±11), 8.59 (±4.7), and 9.20 (±5.7) μg m−3 ...
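    The interpolation step can be sketched compactly. The snippet below uses inverse-distance weighting as a simple stand-in for the paper's kriging, spreading hypothetical monitor-site adjustment factors over a grid and multiplying them into a (here uniform) CTM source impact field; all coordinates and values are invented.

```python
import numpy as np

def idw(xy_obs, f_obs, xy_grid, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation: a simple stand-in for the
    kriging that spreads monitor-site adjustment factors over the grid."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * f_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
monitors = rng.uniform(0, 100, size=(12, 2))      # monitor coordinates (km)
r_j = rng.lognormal(0.0, 0.3, size=12)            # hybrid adjustment factors
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

r_field = idw(monitors, r_j, grid).reshape(50, 50)
ctm_impact = np.full((50, 50), 2.0)               # ug/m3 from one source
refined = r_field * ctm_impact                    # revised impact field
print(f"refined impact field range: {refined.min():.2f} to {refined.max():.2f} ug/m3")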

  13. Percolation model with an additional source of disorder

    Science.gov (United States)

    Kundu, Sumanta; Manna, S. S.

    2016-06-01

The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in air, obstruction by solid objects, even humidity differences in the environment, etc. How the varying transmission ranges of the individual active elements affect global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena with an additional source of disorder is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at its ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one being pc(sq), the percolation threshold for ordinary site percolation on the square lattice, and the other unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
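    The model is straightforward to simulate. The sketch below uses one concrete rule from the family described (a bond between occupied neighbours is occupied iff R1 + R2 >= 1, with R uniform on [0, 1]) and a union-find pass to test top-to-bottom spanning; the lattice size, trial count and p values are arbitrary choices.

```python
import numpy as np

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def spans(L, p, rule=lambda r1, r2: r1 + r2 >= 1.0, seed=None):
    """One realization: occupy sites with probability p, draw a random disk
    radius R in [0, 1] per occupied site, occupy a bond iff rule(R1, R2).
    Returns True if an occupied-bond cluster connects top to bottom."""
    rng = np.random.default_rng(seed)
    occ = rng.random((L, L)) < p
    R = rng.random((L, L))
    parent = list(range(L * L))
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(L):
        for j in range(L):
            if not occ[i, j]:
                continue
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < L and nj < L and occ[ni, nj] and rule(R[i, j], R[ni, nj]):
                    union(i * L + j, ni * L + nj)
    top = {find(parent, j) for j in range(L) if occ[0, j]}
    bot = {find(parent, (L - 1) * L + j) for j in range(L) if occ[L - 1, j]}
    return bool(top & bot)

L, trials = 64, 40
for p in (0.65, 0.70, 0.75, 0.80):
    hits = sum(spans(L, p, seed=s) for s in range(trials))
    print(f"p={p:.2f}  spanning probability ~ {hits / trials:.2f}")
```

    Because the bond rule discards half of the candidate bonds on average, the spanning onset sits above the ordinary site-percolation threshold, consistent with the continuous threshold shift the abstract describes.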

  14. Laboratory Plasma Source as an MHD Model for Astrophysical Jets

    Science.gov (United States)

    Mayo, Robert M.

    1997-01-01

The significance of the work described herein lies in the demonstration that Magnetized Coaxial Plasma Gun (MCG) devices like CPS-1 can produce energetic laboratory magneto-flows with embedded magnetic fields that can be used as a simulation tool to study the flow interaction dynamics of jet flows, to demonstrate the magnetic acceleration and collimation of flows with primarily toroidal fields, and to study cross-field transport in turbulent accreting flows. Since plasmas produced in MCG devices have magnetic topology and MHD flow regime similarity to stellar and extragalactic jets, we expect that careful investigation of these flows in the laboratory will reveal fundamental physical mechanisms influencing astrophysical flows. Discussion in the next section (sec. 2) focuses on recent results describing collimation, leading flow surface interaction layers, and turbulent accretion. The primary objectives for a new three-year effort would involve the development and deployment of novel electrostatic, magnetic, and visible plasma diagnostic techniques to measure plasma and flow parameters of the CPS-1 device in the flow chamber downstream of the plasma source in order to study (1) mass ejection, morphology, collimation and stability of energetic outflows, (2) the effects of external magnetization on collimation and stability, (3) the interaction of such flows with background neutral gas, the generation of visible emission in such interactions, and the effect of neutral clouds on jet flow dynamics, and (4) the cross-magnetic-field transport of turbulent accreting flows. The applicability of existing laboratory plasma facilities to the study of stellar and extragalactic plasmas should be exploited to elucidate underlying physical mechanisms that cannot be ascertained through astrophysical observation, and to provide a baseline for a wide variety of proposed models, MHD and otherwise. The work proposed herein represents a continued effort on a novel approach to relating laboratory experiments to

  15. Studies and modeling of cold neutron sources; Etude et modelisation des sources froides de neutron

    Energy Technology Data Exchange (ETDEWEB)

    Campioni, G

    2004-11-15

To update knowledge in the field of cold neutron sources, the work of this thesis was organized along the following three axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing-down, various measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Second, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte Carlo parametric studies with a CPU-time saving factor reaching 50. A coupling module, simulating neutron guides, was also developed and implemented in the Monte Carlo code McStas. Third, a complete study for the validation of the installed calculation chain. These studies focus on three cold sources currently in operation: SP1 of the Orphee reactor and two other sources (SFH and SFV) of the HFR at the Laue-Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  16. Source apportionment of airborne particulates through receptor modeling: Indian scenario

    Science.gov (United States)

    Banerjee, Tirthankar; Murari, Vishnu; Kumar, Manish; Raju, M. P.

    2015-10-01

Airborne particulate chemistry is mostly governed by the associated sources, and apportionment of specific sources is essential for delineating explicit control strategies. The present submission initially deals with publications (1980s-2010s) of Indian origin which report regional heterogeneities of particulate concentrations with reference to associated species. Such meta-analyses clearly indicate the presence of reservoirs of both primary and secondary aerosols in different geographical regions. Further, the identification of specific signatory molecules for individual source categories was also evaluated in terms of scientific merit and repeatability. Source signatures mostly resemble international profiles, while in selected cases they lack appropriateness. In India, source apportionment (SA) of airborne particulates was initiated as far back as 1985 through factor analysis; however, principal component analysis (PCA) accounts for the major share of applications (34%), followed by enrichment factor (EF, 27%), chemical mass balance (CMB, 15%) and positive matrix factorization (PMF, 9%). Mainstream SA analyses identify earth crust and road dust resuspension (traced by Al, Ca, Fe, Na and Mg) as the principal source (6-73%), followed by vehicular emissions (traced by Fe, Cu, Pb, Cr, Ni, Mn, Ba and Zn; 5-65%), industrial emissions (traced by Co, Cr, Zn, V, Ni, Mn, Cd; 0-60%), fuel combustion (traced by K, NH4+, SO4-, As, Te, S, Mn; 4-42%), marine aerosols (traced by Na, Mg, K; 0-15%) and biomass/refuse burning (traced by Cd, V, K, Cr, As, TC, Na, K, NH4+, NO3-, OC; 1-42%). In most cases, the temporal variation of individual source contributions for a specific geographic region exhibits radical heterogeneity, possibly due to the unscientific orientation of individual tracers for specific sources, exaggerated by methodological weaknesses, inappropriate sample sizes, the implications of secondary aerosols and inadequate emission inventories. Conclusively, a number of challenging ...

  17. Model Driven Architecture (MDA: Integration and Model Reuse for Open Source eLearning Platforms

    Directory of Open Access Journals (Sweden)

    Blasius Lofi Dewanto

    2005-02-01

The Open Source (OS) community offers numerous eLearning platforms of both types: Learning Management Systems (LMS) and Learning Content Systems (LCS). General-purpose OS intermediaries such as SourceForge, ObjectWeb, Apache or specialized intermediaries like CampusSource reduce the cost of locating such eLearning platforms. Still, it is impossible to directly compare the functionalities of those OS software products without performing detailed testing on each product. Some articles available from eLearning Wikipedia show comparisons between eLearning platforms which can help, but in the end they barely serve as documentation and become out of date quickly (1). The absence of integration activities between OS eLearning platforms - which are sometimes quite similar in terms of functionalities and implementation technologies - is critical, since most of the OS projects possess small financial and human resources. This paper shows a possible solution to these barriers for OS eLearning platforms. We propose the Model Driven Architecture (MDA) concept to capture functionalities and to identify similarities between available OS eLearning platforms. This contribution evolved from a fruitful discussion at the 2nd CampusSource Developer Conference at the University of Muenster (27th August 2004).

  18. An Analytic Linear Accelerator Source Model for Monte Carlo Dose Calculations. I. Model Representation and Construction

    CERN Document Server

    Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-01-01

Monte Carlo (MC) simulation is considered the most accurate method for radiation dose calculations. The accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we present an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called the phase-space ring (PSR) was proposed. Each PSR contains a group of particles that are of the same type and close in energy and in radial distance to the center of the phase-space plane. The model parameterizes the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. For a primary photon PSR, the particle direction is assumed to be from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distribution ...

  19. Analysis of Multi-particle Production at RHIC by Two-source Statistical Model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

The data on multi-particle production in √s = 130 A GeV Au+Au collisions (RHIC) are analyzed with the two-source statistical model that was successfully applied to the data on multi-particle production in 158 A GeV Pb+Pb collisions (SPS). It is found that the sources at RHIC differ from those at SPS, where a small, hot inner source is surrounded by a larger, cooler outer source. The two sources at RHIC are identical. They have the same temperature, volume, particle density and other thermodynamic quantities. Moreover, the results of the two-source model are identical with those of a single-source model (the total volume of the two sources equals the volume of the single source). The ...

  20. Surface modeling for optical fabrication with linear ion source

    CERN Document Server

    Wu, Lixiang; Shao, Jianda

    2016-01-01

We present a concept of surface decomposition, extended from double Fourier series to nonnegative sinusoidal wave surfaces, on the basis of which linear ion sources can be applied to the ultra-precision fabrication of complex surfaces and diffractive optics. It is the first time that we have a surface descriptor for building a relationship between the fabrication process of optical surfaces and surface characterization based on PSD analysis, akin to the Zernike polynomials used for mapping the relationship between surface errors and Seidel aberrations. Also, we demonstrate that one-dimensional scanning of a linear ion source is applicable to the removal of surface errors caused by small-tool polishing in raster scan mode, as well as to the fabrication of beam sampling gratings of high diffractive uniformity without a post-processing procedure. The simulation results show that, in theory, optical fabrication with a linear ion source is feasible and even of higher output efficiency compared with the conventional approach ...

  1. Including source uncertainty and prior information in the analysis of stable isotope mixing models.

    Science.gov (United States)

    Ward, Eric J; Semmens, Brice X; Schindler, Daniel E

    2010-06-15

Stable isotope mixing models offer a statistical framework for estimating the contribution of multiple sources (such as prey) to a mixture distribution. Recent advances in these models have estimated the source proportions using Bayesian methods, but have not explicitly accounted for uncertainty in the mean and variance of sources. We demonstrate that treating these quantities as unknown parameters can reduce bias in the estimated source contributions, although model complexity is increased (thereby increasing the variance of estimates). The advantages of this fully Bayesian approach are particularly apparent when the source geometry is poor or sample sizes are small. A second benefit of treating source quantities as parameters is that prior source information can be included. We present findings from 9 lake food webs, where the consumer of interest (fish) has a diet composed of 5 sources: aquatic insects, snails, zooplankton, amphipods, and terrestrial insects. We compared the traditional Bayesian stable isotope mixing model with fixed source parameters to our fully Bayesian model, with and without an informative prior. The informative prior has much less impact than the choice of model: the traditional mixing model with fixed source parameters estimates the diet to be dominated by aquatic insects, while the fully Bayesian model estimates the diet to be more balanced but with greater importance of zooplankton. The findings from this example demonstrate that there can be stark differences in inference between the two model approaches, particularly when the source geometry of the mixing model is poor. These analyses also emphasize the importance of investing substantial effort toward characterizing the variation in the isotopic characteristics of source pools to appropriately quantify uncertainties in their contributions to consumers in food webs.
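    The difference between fixing source parameters and treating them as unknowns is easy to demonstrate with a toy sampler. Below, a random-walk Metropolis chain estimates three diet proportions while also updating the source means under priors built from hypothetical source sample statistics; all numbers are invented, and the residual model is deliberately simplified relative to the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# sample statistics of 3 hypothetical sources: mean, sd (columns: d13C, d15N)
m_hat = np.array([[-28.0, 3.0], [-22.0, 7.0], [-18.0, 11.0]])
s_hat = np.array([[1.0, 0.8], [1.2, 1.0], [0.9, 1.1]])
n_src = 10                                        # source sample size
obs = rng.normal([-23.0, 6.8], 0.7, size=(25, 2)) # consumer tissue values

def log_post(theta):
    a, mu = theta[:3], theta[3:].reshape(3, 2)
    p = np.exp(a - a.max()); p /= p.sum()         # softmax -> simplex
    lp = -0.5 * np.sum(a**2)                      # weak prior on the logits
    # fully Bayesian twist: source means are parameters, with priors
    # centered on the sample means (standard error s_hat / sqrt(n))
    lp += -0.5 * np.sum(((mu - m_hat) / (s_hat / np.sqrt(n_src)))**2)
    mix = p @ mu                                  # mixture mean signature
    lp += -0.5 * np.sum(((obs - mix) / 0.7)**2)   # known residual sd
    return lp

theta = np.concatenate([np.zeros(3), m_hat.ravel()])
lp, keep = log_post(theta), []
for it in range(30000):                           # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal(theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 5000 and it % 10 == 0:
        a = theta[:3]; p = np.exp(a - a.max()); keep.append(p / p.sum())

print("posterior mean diet proportions:", np.mean(keep, axis=0).round(2))
```

    Fixing mu at m_hat instead would amount to deleting the prior term and shrinking the posterior spread, which is exactly the bias-versus-variance trade-off the paper quantifies.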

  2. Versatile Stochastic Models for Networks with Asymmetric TCP Sources

    NARCIS (Netherlands)

    Foreest, van Nicky D.; Haverkort, Boudewijn R.; Mandjes, Michel R.H.; Scheinhardt, Werner R.W.

    2007-01-01

In this paper we use stochastic Petri nets (SPNs) to study the interaction of multiple TCP sources that share one or two buffers. Neither analytical nor numerical results have been presented for such cases yet. We use SPNs in an unconventional way: the tokens in the SPN do not represent the packets being ...

  3. Simple models of two-dimensional information sources and codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Shtarkov, Y. M.

    1998-01-01

    We consider discrete random fields which have simple descriptions of rows and columns. We present constructions which combine high entropy with simple means of generating the fields and analyzing the probability distribution. Hidden state Markov sources are an essential tool in the construction...

  4. Modeling of negative ion transport in a plasma source

    Science.gov (United States)

    Riz, David; Paméla, Jérôme

    1998-08-01

A code called NIETZSCHE has been developed to simulate negative ion transport in a plasma source, from the ions' birth place to the extraction holes. The ion trajectory is calculated by numerically solving the 3-D motion equation, while the atomic processes of destruction, elastic collision H-/H+ and charge exchange H-/H0 are handled at each time step by a Monte Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect and the influence of the plasma grid bias or of the magnetic filter on negative ion extraction. The code has also shown that in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance of less than 2 cm from the plasma grid in the case of "volume production" (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
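    Stripped of geometry, fields and real cross-sections, the essence of such a transport calculation is a survival race between destruction and reaching the extraction plane. The 1-D toy below, with all mean free paths, velocities and step sizes invented, reproduces the qualitative result that only ions born within a couple of centimetres of the plasma grid are extracted.

```python
import numpy as np

rng = np.random.default_rng(7)

def extraction_probability(d_cm, n_ions=1000, mfp_destroy=1.0, mfp_cx=0.3,
                           v_th=1.0, dt=0.01, max_steps=2000):
    """Toy 1-D Monte Carlo: an H- born d_cm from the plasma grid drifts,
    charge exchange re-draws its velocity, and destruction removes it with
    mean free path mfp_destroy (cm); 'extracted' means reaching x = 0.
    All rates are illustrative, not those of NIETZSCHE."""
    extracted = 0
    for _ in range(n_ions):
        x, v = d_cm, v_th * rng.standard_normal()
        for _ in range(max_steps):
            x += v * dt
            if x <= 0.0:
                extracted += 1
                break
            step = abs(v) * dt
            if rng.random() < step / mfp_destroy:   # destroyed (stripping etc.)
                break
            if rng.random() < step / mfp_cx:        # H-/H0 charge exchange
                v = v_th * rng.standard_normal()
    return extracted / n_ions

for d in (0.5, 1.0, 2.0, 4.0):
    print(f"birth at {d:3.1f} cm -> extraction probability ~ "
          f"{extraction_probability(d):.2f}")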

  5. Modeling of negative ion transport in a plasma source (invited)

    Science.gov (United States)

    Riz, David; Paméla, Jérôme

    1998-02-01

A code called NIETZSCHE has been developed to simulate negative ion transport in a plasma source, from the ions' birth place to the extraction holes. The H-/D- trajectory is calculated by numerically solving the 3D motion equation, while the atomic processes of destruction, elastic collision with H+/D+ and charge exchange with H0/D0 are handled at each time step by a Monte Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect and the influence of the plasma grid bias or of the magnetic filter on negative ion extraction. The code has also shown that, in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance of less than 2 cm from the plasma grid in the case of volume production (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.

  6. An Analytic Linear Accelerator Source Model for Monte Carlo dose calculations. II. Model Utilization in a GPU-based Monte Carlo Package and Automatic Source Commissioning

    CERN Document Server

    Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-01-01

We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space ring (PSR). This ring structure makes it effective for accounting for beam rotational symmetry, but not suitable for dose calculations with rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in dose calculation, different types of sub-sources are separately sampled. Source sampling and particle transport are iterated, so that the particles being sampled and transported simultaneously are of the same type and close in energy, which alleviates GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of PSRs, determined by solving a quadratic minimization ...

  7. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

Open source software development is a notable option for software companies. In recent years, the many advantages of this type of software have driven a move toward it in Iran. National security concerns, international restrictions, and software and service costs, among other problems, have intensified the importance of using this software. Users and their viewpoints are the critical success factor in software plans. But there is no appropriate model for the open source software case in Iran. This research tried to develop a model for measuring open source software success for Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, open source community service quality, open source quality, copyright and security.

  8. Open source software engineering for geoscientific modeling applications

    Science.gov (United States)

    Bilke, L.; Rink, K.; Fischer, T.; Kolditz, O.

    2012-12-01

OpenGeoSys (OGS) is a scientific open source project for numerical simulation of thermo-hydro-mechanical-chemical (THMC) processes in porous and fractured media. The OGS software development community is distributed all over the world, and people with different backgrounds are contributing code to a complex software system. The following points have to be addressed for successful software development: - Platform-independent code - A unified build system - A version control system - A collaborative project web site - Continuous builds and testing - Providing binaries and documentation for end users. OGS should run on a PC as well as on a computing cluster regardless of the operating system. Therefore the code should not include any platform-specific feature or library. Instead, open source and platform-independent libraries like Qt for the graphical user interface or VTK for visualization algorithms are used. A source code management and version control system is a definite requirement for distributed software development. For this purpose Git is used, which enables developers to work on separate versions (branches) of the software and to merge those versions at some point into the official one. The version control system is integrated into an information and collaboration website based on a wiki system. The wiki is used for collecting information such as tutorials, application examples and case studies. Discussions take place in the OGS mailing list. To improve code stability and to verify code correctness, a continuous build and testing system, based on the Jenkins Continuous Integration Server, has been established. This server is connected to the version control system and does the following on every code change: - Compiles (builds) the code on every supported platform (Linux, Windows, MacOS) - Runs a comprehensive test suite of over 120 benchmarks and verifies the results - Runs software-development-related metrics on the code (like compiler warnings, code complexity ...

  9. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source ... (heat and smoke source) may improve the possibility of obtaining Reynolds-number-independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement ...

  10. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    OpenAIRE

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-01-01

The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, Open Source solutions, open data proliferation, and the use of open standards have increasing significance in the geospatial and IT arenas as well as in political discussion and legislation. Based on the Memorandum of Understanding between International Cart ...

  11. [Solute transport modeling application in groundwater organic contaminant source identification].

    Science.gov (United States)

    Wang, Shu-Fang; Wang, Li-Ya; Wang, Xiao-Hong; Lin, Pei; Liu, Jiu-Rong; Xin, Bao-Dong; He, Guo-Ping

    2012-03-01

Investigation and numerical simulation based on RT3D (reactive transport in 3 dimensions) were used to identify the source of tetrachloroethylene (PCE) and trichloroethylene (TCE) in the groundwater of a city in the north of China and to retrieve the input intensity. Multiple regression was applied to analyze the factors influencing the input intensity of PCE and TCE, using the stepwise function in Matlab. The results indicate that factories and industries are the source of the PCE and TCE in groundwater. Natural attenuation was identified, and the natural attenuation rates are 93.15%, 61.70% and 61.00% for PCE, and 70.05%, 73.66% and 63.66% for TCE, over 173 days. The 4 source points identified by the simulation released 0.9106 kg of PCE and 95.6938 kg of TCE during the simulation period. The regression analysis results indicate that local precipitation and the thickness of the vadose zone are the main factors influencing organic solute transport from the surface to groundwater. The PCE and TCE concentrations are found to be 0 and 5 mg·kg-1 from the surface to a depth of 35 cm in the vadose zone. All the above results suggest that the PCE and TCE in groundwater come from sources at the surface. Natural attenuation occurred as PCE and TCE were transported from the surface to groundwater, and the remainder was transported to groundwater through the vadose zone.

  12. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for the interpretation ...

  13. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole;

    2011-01-01

We present an approach to handling forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements, including the tissue conductivity distribution and the geometry of the cortical surface ...

  14. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris;

We produce a new model of the global lithospheric magnetic field based on 3-component vector field observations at all latitudes from the CHAMP satellite using an equivalent source technique.

  15. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State-of-the-art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able ...
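    To give a flavour of what "fine-grained" means at the source/bytecode level, the sketch below statically weights each bytecode instruction of a Python function by a per-opcode energy coefficient. The coefficients are entirely hypothetical; in an actual model they would be fitted against hardware energy measurements, and a dynamic model would weight by execution counts rather than static occurrence.

```python
import dis
from collections import Counter

# hypothetical per-opcode energy costs in nanojoules; real coefficients
# must be fitted from measurements, and opcode names vary by Python version
COST_NJ = {"BINARY_OP": 2.1, "LOAD_FAST": 0.4, "CALL": 6.0}
DEFAULT_NJ = 1.0

def estimate_energy_nj(func):
    """Static sketch: weight each bytecode instruction of `func` by a
    per-opcode cost; unknown opcodes fall back to a default."""
    ops = Counter(ins.opname for ins in dis.get_instructions(func))
    return sum(n * COST_NJ.get(op, DEFAULT_NJ) for op, n in ops.items())

def dot(a, b):
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

print(f"estimated static cost of dot(): {estimate_energy_nj(dot):.1f} nJ")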

  16. Optimized Second-Order Dynamical Systems and Their RLC Circuit Models with PWL Controlled Sources

    Directory of Open Access Journals (Sweden)

    J. Brzobohaty

    2004-09-01

Complementary active RLC circuit models with a voltage-controlled voltage source (VCVS) and a current-controlled current source (CCCS) for the second-order autonomous dynamical system realization are proposed. The main advantage of these equivalent circuits is the simple relation between the state model parameters and their corresponding circuit parameters, which also leads to simple design formulas.
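    A state-model sketch of this class of systems is shown below: a second-order autonomous system x' = Ax + b·f(wᵀx) whose nonlinearity f is the three-segment piecewise-linear (PWL) characteristic of the controlled source. The matrices and slopes are illustrative values chosen to give a stable limit cycle; they are not the paper's design formulas.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pwl(u, m0=-0.5, m1=0.5):
    """Three-segment PWL characteristic of the controlled source:
    slope m0 inside |u| < 1, slope m1 outside."""
    return m1 * u + 0.5 * (m0 - m1) * (abs(u + 1.0) - abs(u - 1.0))

# second-order autonomous state model  x' = A x + b * pwl(w . x)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.0, -1.0])
w = np.array([0.0, 1.0])

def rhs(t, x):
    return A @ x + b * pwl(w @ x)

sol = solve_ivp(rhs, (0.0, 100.0), [0.01, 0.0], max_step=0.05)
print(f"limit-cycle amplitude ~ {np.abs(sol.y[0, -500:]).max():.2f}")
```

    The negative inner slope m0 pumps energy near the origin and the positive outer slope m1 damps large excursions, so the trajectory settles on a limit cycle; in the circuit realizations, m0 and m1 map directly onto the controlled-source gains.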

  17. Finite element modeling of plasmon based single-photon sources

    DEFF Research Database (Denmark)

    Chen, Yuntian; Gregersen, Niels; Nielsen, Torben Roland;

    2011-01-01

A finite element method (FEM) approach to calculating a single emitter coupled to plasmonic waveguides has been developed. The method consists of a 2D model and a 3D model: (I) In the 2D model, we have calculated the spontaneous emission decay rate of a single emitter into guided plasmonic modes ... waveguides with different geometries, as long as only one guided plasmonic mode is predominantly excited.

  18. Review of release models used in source-term codes

    Energy Technology Data Exchange (ETDEWEB)

Song, Jongsoon [Department of Nuclear Engineering, Chosun University, Kwangju (Korea, Republic of)

    1999-07-01

Throughout this review, the limitations of current release models are identified and ways of improving them are suggested. By incorporating recent experimental results, recommendations for future release modeling activities can be made. All release models under review were compared with respect to the following six items: scenario, assumptions, mathematical formulation, solution method, radioactive decay chains considered, and geometry. The following nine models are considered for review: SOTEC and SCCEX (CNWRA), DOE/INTERA, TSPA (SNL), Vault Model (AECL), CCALIBRE (SKI), AREST (PNL), Risk Assessment (EPRI), TOSPAC (SNL). (author)

  19. Contribution of polycyclic aromatic hydrocarbon (PAH) sources to the urban environment: A comparison of receptor models.

    Science.gov (United States)

    Teixeira, Elba Calesso; Agudelo-Castañeda, Dayana Milena; Mattiuzi, Camila Dalla Porta

    2015-12-15

The aim of this study was to evaluate the contribution of the main emission sources of PAHs associated with PM2.5 in an urban area of Rio Grande do Sul state. Source apportionment was conducted using both the US EPA Positive Matrix Factorization (PMF) model and the Chemical Mass Balance (CMB) model. The two models were compared to analyze the similarities and differences in their source contributions, and their advantages and disadvantages. PM2.5 samples were collected continuously over 24 h using a stacked filter unit at 3 sampling sites of the urban area of Rio Grande do Sul state every 15 days between 2006 and 2008. Both models identified the main emission sources of PAHs in PM2.5: the vehicle fleet (diesel and gasoline), coal combustion, wood burning, and dust. Results indicated similar source contributions among the sampling sites, as expected because of their proximity to one another, which places them under the influence of the same pollutant-emitting sources. Moreover, differences were observed in the source contributions obtained for the same data set at each sampling site. The PMF model attributed a slightly greater amount of PAHs to the gasoline and diesel sources, while diesel contributed more in the CMB results. The results were comparable with previous work in the region and in accordance with the characteristics of the study area. Comparison between these receptor models, which contain different physical constraints, is important for better understanding PAH emission sources in order to reduce air pollution.
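    Factor-analytic receptor models of the PMF family can be approximated, minus the measurement-uncertainty weighting that distinguishes PMF proper, with a plain non-negative matrix factorization. The sketch below recovers two invented "source profiles" from synthetic samples; it is a conceptual stand-in, not the EPA PMF algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
# synthetic data: 2 source profiles over 5 PAH species, 60 samples
profiles = np.array([[0.50, 0.30, 0.10, 0.05, 0.05],    # "vehicular"
                     [0.05, 0.10, 0.20, 0.30, 0.35]])   # "coal combustion"
contrib = rng.gamma(2.0, 1.0, size=(60, 2))             # source strengths
X = contrib @ profiles + 0.01 * rng.random((60, 5))     # nonnegative noise

model = NMF(n_components=2, init="nndsvda", max_iter=500)
G = model.fit_transform(X)            # estimated sample-by-source contributions
F = model.components_                 # estimated source profiles
F_norm = F / F.sum(axis=1, keepdims=True)
# factors may come out in either order (permutation ambiguity)
print("recovered profiles (rows sum to 1):\n", F_norm.round(2))
```

    CMB works the other way around: the profiles F are fixed from known source signatures and only the contributions are fitted per sample, which is why the two models can disagree on the diesel/gasoline split even on the same data.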

  20. Photochemical grid model implementation of VOC, NOx, and O3 source apportionment

    Directory of Open Access Journals (Sweden)

    R. H. F. Kwok

    2014-09-01

For the purposes of developing optimal emissions control strategies, efficient approaches are needed to identify the major sources or groups of sources that contribute to elevated ozone (O3) concentrations. Source-based apportionment techniques implemented in photochemical grid models track sources through the physical and chemical processes important to the formation and transport of air pollutants. Photochemical model source apportionment has been used to estimate impacts of specific sources, groups of sources (sectors), sources in specific geographic areas, and stratospheric and lateral boundary inflow on O3. The implementation and application of a source apportionment technique for O3 and its precursors, nitrogen oxides (NOx) and volatile organic compounds (VOC), for the Community Multiscale Air Quality (CMAQ) model are described here. The Integrated Source Apportionment Method (ISAM) O3 approach is a hybrid of source apportionment and source sensitivity in that O3 production is attributed to precursor sources based on the O3 formation regime (e.g., for a NOx-sensitive regime, O3 is apportioned to participating NOx emissions). This implementation is illustrated by tracking multiple emissions source sectors and lateral boundary inflow. NOx, VOC, and O3 attribution to tracked sectors in the application are consistent with spatial and temporal patterns of precursor emissions. The O3 ISAM implementation is further evaluated through comparisons of apportioned ambient concentrations and deposition amounts with those derived from brute-force zero-out scenarios, with correlation coefficients ranging between 0.58 and 0.99 depending on the specific combination of target species and tracked precursor emissions. Low correlation coefficients occur for chemical regimes that have strong non-linearity in O3 sensitivity, which demonstrates the different functionalities of source apportionment and zero-out approaches, depending on whether sources of interest are either to ...
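    The regime-switching attribution rule at the heart of the method fits in a few lines. The sketch below apportions one cell's O3 production increment among tracked sources in proportion to their tagged NOx or VOC, depending on the diagnosed regime; the numbers are arbitrary, and the real CMAQ-ISAM bookkeeping is far more detailed.

```python
import numpy as np

def apportion_o3(p_o3, nox_tags, voc_tags, regime):
    """Distribute an O3 production increment p_o3 among tracked sources:
    under a NOx-sensitive regime, split in proportion to each source's
    tagged NOx; otherwise in proportion to its tagged VOC."""
    tags = nox_tags if regime == "NOx-sensitive" else voc_tags
    return p_o3 * tags / tags.sum()

# two tracked sectors plus boundary inflow, one time step (arbitrary units)
nox = np.array([4.0, 1.0, 0.5])   # mobile, EGU, boundary
voc = np.array([1.0, 0.2, 2.0])
print(apportion_o3(2.4, nox, voc, "NOx-sensitive"))  # ~ [1.75, 0.44, 0.22]
```

    This is why apportionment and zero-out diverge in strongly non-linear regimes: removing a source changes the regime itself, whereas the tagging rule attributes production within the regime that actually occurred.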

  1. Influence of head models on neuromagnetic fields and inverse source localizations

    Directory of Open Access Journals (Sweden)

    Schimpf Paul H

    2006-10-01

Abstract Background: The magnetoencephalograms (MEGs) are mainly due to the source currents. However, there is a significant contribution to MEGs from the volume currents. The structure of the anatomical surfaces, e.g., gray and white matter, could severely influence the flow of volume currents in a head model. This, in turn, will also influence the MEGs and the inverse source localizations. This was examined in detail with three different human head models. Methods: Three finite element head models constructed from segmented MR images of an adult male subject were used for this study. These models were: (1) Model 1: a full model with eleven tissue types that included the detailed structure of the scalp, hard and soft skull bone, CSF, gray and white matter and other prominent tissues; (2) Model 2: derived from Model 1 by setting the conductivity of gray matter equal to that of white matter, i.e., a ten-tissue-type model; (3) Model 3: consisting of scalp, hard skull bone, CSF, gray and white matter, i.e., a five-tissue-type model. The lead fields and MEGs due to dipolar sources in the motor cortex were computed for all three models. The dipolar sources were oriented normal to the cortical surface and had a dipole moment of 100 μA·m. The inverse source localizations were performed with an exhaustive search pattern in the motor cortex area. A set of 100 trial inverse runs was made covering the 3 cm cube motor cortex area in a random fashion. Model 1 was used as the reference model. Results: The reference model (Model 1), as expected, performed best in localizing the sources in the motor cortex area. Model 3 performed the worst. The mean source localization errors (MLEs) of Model 3 were larger than those of Models 1 and 2. The contour plots of the magnetic fields on top of the head were also different across the three models. The magnetic fields due to source currents were larger in magnitude as compared to the magnetic fields of volume currents ...

  2. Comparison of two propeller source models for aircraft interior noise studies

    Science.gov (United States)

    Mahan, J. R.; Fuller, C. R.

    1986-01-01

The sensitivity of the predicted synchrophasing (SP) effectiveness trends to the propeller source model used is investigated with reference to the development of advanced turboprop engines for transport aircraft. SP effectiveness is shown to be sensitive to the type of source model used. For the virtually rotating dipole source model, the SP effectiveness is sensitive to the direction of rotation at some frequencies but not at others. The SP effectiveness obtained from the virtually rotating dipole model is not very sensitive to the radial location of the source distribution within reasonable limits. Finally, the predicted SP effectiveness is shown to be more sensitive to the details of the source model used for the case of corotation than for the case of counterrotation.

  3. Comparison of realistic head modeling methods in EEG source imaging - biomed 2010.

    Science.gov (United States)

    Vatta, F; Meneghini, F; Esposito, F; Mininel, S; Disalle, F

    2010-01-01

EEG inverse source imaging aims at reconstructing the underlying current distribution in the human brain using potential differences measured non-invasively from the head surface. A critical component of source reconstruction is the head volume conductor model used to reach an accurate solution of the associated forward problem, i.e., the simulation of the EEG for a known current source in the brain. The volume conductor model contains both the geometry and the electrical conduction properties of the head tissues, and the accuracy of both parameters has a direct impact on the accuracy of the source analysis. This was examined in detail with two different human head models. Two realistic head models derived from an averaged T1-weighted MRI dataset of the Montreal Neurological Institute (MNI) were used for this study. These models were: (1) BEM Model: a four-shell surface-based Boundary Element (BEM) head model; (2) FDM Model: a volume-based Finite Difference (FDM) model, which allows better modeling accuracy than BEM as it better represents cortical structures, such as sulci and gyri, in a three-dimensional head model. How the accuracy of the head-model description influences EEG source localization was studied with the above realistic head models. We present here a detailed computer simulation study in which the performances of the two realistic four-shell head models, the realistic MNI-based BEM Model and the FDM Model, are compared. As figures of merit for the comparative analysis, point spread function (PSF) maps and lead field (LF) correlation coefficients are used. The obtained results demonstrate that a better description of realistic geometry can provide a factor of improvement that is particularly important when considering sources placed in the temporal or in the occipital cortex. In these situations, using a more refined realistic head model will allow better spatial discrimination of neural sources.

  4. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    Science.gov (United States)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant to understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signatures of biotracers as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of sources in the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for the δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated FA concentrations of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable ...
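    The concentration dependence tested here has a simple forward form: each source's isotopic signature enters the mixture weighted by its soil proportion times its tracer concentration. The toy numbers below are invented and only illustrate how ignoring concentration shifts the predicted mixture signature.

```python
import numpy as np

def mixture_signature(p, delta, conc):
    """Concentration-weighted CSSI mixing: the d13C of a fatty acid in a
    sediment mixture when source i contributes soil proportion p[i] with
    tracer concentration conc[i] and signature delta[i] (permil)."""
    w = p * conc
    return np.sum(w * delta) / np.sum(w)

p = np.array([0.5, 0.3, 0.2])            # soil proportions of 3 land uses
delta = np.array([-34.0, -27.0, -18.0])  # d13C of the FA in each source
conc = np.array([2.0, 30.0, 1.0])        # FA concentration differs by source

print("linear mixing:          ", np.sum(p * delta))          # ~ -28.7 permil
print("concentration-weighted: ", mixture_signature(p, delta, conc))
```

    Inverting the concentration-weighted equation is what the concentration-dependent MixSIAR does; fitting the linear form to data generated by the weighted process is the source of the bias reported above.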

  5. Chemical transport model simulations of organic aerosol in southern California: model evaluation and gasoline and diesel source contributions

    Science.gov (United States)

    Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.

    2017-03-01

Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate-volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40 % of the OA in southern California (half of which was SOA), making mobile sources the single largest contributor to OA in southern California. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5 % of the total OA. Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in

  6. Source term model evaluations for the low-level waste facility performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Yim, M.S.; Su, S.I. [North Carolina State Univ., Raleigh, NC (United States)]

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  7. DYNAMO: concurrent dynamic multi-model source localization method for EEG and/or MEG.

    Science.gov (United States)

    Antelis, Javier M; Minguez, Javier

    2013-01-15

    This work presents a new dipolar method to estimate the neural sources from separate or combined EEG and MEG data. The novelty lies in the simultaneous estimation and integration of neural sources from different dynamic models with different parameters, leading to a dynamic multi-model solution for the EEG/MEG source localization problem. The first key aspect of this method is defining the source model as a dipolar dynamic system, which allows for the estimation of the probability distribution of the sources within the Bayesian filter estimation framework. A second important aspect is the consideration of several banks of filters that simultaneously estimate and integrate the neural sources of different models. A third relevant aspect is that the final probability estimate results from the probabilistic integration of the neural sources of numerous models. These characteristics lead to a new approach that does not require a prior definition of either the number of sources or the underlying temporal dynamics, and that allows for the specification of multiple initial prior estimates. The method was validated with three sensor modalities using simulated data designed to impose difficult estimation situations, and with real EEG data recorded in a feedback error-related potential paradigm. On the basis of these evaluations, the method was able to localize the sources with high accuracy.
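
    The probabilistic integration across a bank of filters described above can be illustrated, in a much simplified form, as Bayesian model averaging; the likelihood values and per-model estimates below are placeholders, not the authors' actual filter equations:

        import numpy as np

        # Hedged sketch: integrating source estimates from a bank of filters by
        # Bayesian model averaging. 'likelihoods' would come from each filter's
        # innovation (measurement prediction error); values here are illustrative.
        def update_model_probs(prior_probs, likelihoods):
            """Posterior model probabilities: p(m|y) ~ p(y|m) * p(m)."""
            post = np.asarray(prior_probs) * np.asarray(likelihoods)
            return post / post.sum()

        def fuse_estimates(estimates, probs):
            """Probability-weighted combination of per-model source estimates."""
            return sum(p * np.asarray(x) for p, x in zip(probs, estimates))

        probs = update_model_probs([1/3, 1/3, 1/3], [0.8, 0.1, 0.3])
        fused = fuse_estimates([np.array([1.0, 0.2]),
                                np.array([0.7, 0.4]),
                                np.array([1.2, 0.1])], probs)
        print(probs, fused)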

  8. Sources of nitrogen and phosphorus emissions to Irish rivers: estimates from the Source Load Apportionment Model (SLAM)

    Science.gov (United States)

    Mockler, Eva; Deakin, Jenny; Archbold, Marie; Daly, Donal; Bruen, Michael

    2017-04-01

    More than half of the river and lake water bodies in Europe are at less than good ecological status or potential, and diffuse pollution from agriculture remains a major, but not the only, cause of this poor performance. In Ireland, it is evident that agri-environmental policy and land management practices have, in many areas, reduced nutrient emissions to water, mitigating the potential impact on water quality. However, additional measures may be required in order to further decouple the relationship between agricultural productivity and emissions to water, which is of vital importance given the on-going agricultural intensification in Ireland. Catchment management can be greatly supported by modelling, which can reduce the resources required to analyse large amounts of information and can enable investigations and measures to be targeted. The Source Load Apportionment Model (SLAM) framework was developed to support catchment management in Ireland by characterising the contributions from various sources of phosphorus (P) and nitrogen (N) emissions to water. The SLAM integrates multiple national spatial datasets relating to nutrient emissions to surface water, including land use and physical characteristics of the sub-catchments to predict emissions from point (wastewater, industry discharges and septic tank systems) and diffuse sources (agriculture, forestry, peatlands, etc.). The annual nutrient emissions predicted by the SLAM were assessed against nutrient monitoring data for 16 major river catchments covering 50% of the area of Ireland. At national scale, results indicate that the total average annual emissions to surface water in Ireland are over 2,700 t yr-1 of P and 80,000 t yr-1 of N. The SLAM results include the proportional contributions from individual sources at a range of scales from sub-catchment to national, and show that the main sources of P are from wastewater and agriculture, with wide variations across the country related to local anthropogenic
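
    The source-apportionment arithmetic behind a framework of this kind can be sketched with a generic export-coefficient calculation; the source categories, extents and coefficients below are hypothetical and are not SLAM's actual parameters:

        # Illustrative load apportionment in the spirit of an export-coefficient
        # model: annual load = sum over sources of (area or population) x unit
        # emission rate. All numbers below are made up for illustration.
        catchment = {
            # source: (extent, export coefficient)
            "agriculture [ha]":   (45_000, 0.45),   # kg P / ha / yr
            "forestry [ha]":      (12_000, 0.05),   # kg P / ha / yr
            "septic tanks [cap]": (8_000,  0.30),   # kg P / person / yr
        }

        loads = {src: extent * coeff for src, (extent, coeff) in catchment.items()}
        total = sum(loads.values())
        for src, load in loads.items():
            print(f"{src}: {load:,.0f} kg/yr ({100 * load / total:.1f}%)")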

  9. Unified Models of Molecular Emission from Class 0 Protostellar Outflow Sources

    CERN Document Server

    Rawlings, J M C; Carolan, P B

    2013-01-01

    Low mass star-forming regions are more complex than the simple spherically symmetric approximation that is often assumed. We apply a more realistic infall/outflow physical model to molecular/continuum observations of three late Class 0 protostellar sources with the aims of (a) proving the applicability of a single physical model for all three sources, and (b) deriving physical parameters for the molecular gas component in each of the sources. We have observed several molecular species in multiple rotational transitions. The observed line profiles were modelled in the context of a dynamical model which incorporates infall and bipolar outflows, using a three dimensional radiative transfer code. This results in constraints on the physical parameters and chemical abundances in each source. Self-consistent fits to each source are obtained. We constrain the characteristics of the molecular gas in the envelopes as well as in the molecular outflows. We find that the molecular gas abundances in the infalling envelope ...

  10. Asteroid models from photometry and complementary data sources

    Energy Technology Data Exchange (ETDEWEB)

    Kaasalainen, Mikko [Department of Mathematics, Tampere University of Technology (Finland)]

    2016-05-10

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  11. A Latent Source Model for Patch-Based Image Segmentation.

    Science.gov (United States)

    Chen, George H; Shah, Devavrat; Golland, Polina

    2015-10-01

    Despite the popularity and empirical success of patch-based nearest-neighbor and weighted majority voting approaches to medical image segmentation, there has been no theoretical development on when, why, and how well these nonparametric methods work. We bridge this gap by providing a theoretical performance guarantee for nearest-neighbor and weighted majority voting segmentation under a new probabilistic model for patch-based image segmentation. Our analysis relies on a new local property for how similar nearby patches are, and fuses existing lines of work on modeling natural imagery patches and theory for nonparametric classification. We use the model to derive a new patch-based segmentation algorithm that iterates between inferring local label patches and merging these local segmentations to produce a globally consistent image segmentation. Many existing patch-based algorithms arise as special cases of the new algorithm.
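
    A minimal sketch of the weighted majority voting baseline analyzed in the paper, assuming Gaussian patch-similarity weights (the patches and bandwidth below are toy placeholders):

        import numpy as np

        # Minimal weighted-majority-voting sketch: each training patch votes for
        # its label, weighted by a Gaussian similarity to the test patch.
        # Arrays below are toy stand-ins for real image patches.
        def weighted_vote(test_patch, train_patches, train_labels, sigma=1.0):
            d2 = np.array([np.sum((test_patch - p) ** 2) for p in train_patches])
            w = np.exp(-d2 / (2 * sigma ** 2))
            labels = np.unique(train_labels)
            scores = [w[train_labels == lab].sum() for lab in labels]
            return labels[int(np.argmax(scores))]

        rng = np.random.default_rng(0)
        train = [rng.normal(m, 0.1, size=(3, 3)) for m in (0.0, 0.0, 1.0, 1.0)]
        labels = np.array([0, 0, 1, 1])
        print(weighted_vote(rng.normal(1.0, 0.1, size=(3, 3)), train, labels))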

  12. MODELING OF SEDIMENT AND NONPOINT SOURCE POLLUTANT YIELD

    Institute of Scientific and Technical Information of China (English)

    Huai'en LI; Xiaokang Hong; Bing SHEN

    2001-01-01

    For water and soil conservation and water pollution control, it is very important to simulate and predict the loads of sediment and pollutants during storm runoff. On the basis of analyzing simultaneous measurements of flow, sediment and pollutants observed at the watershed outlet, a practical sediment yield model is developed by standardizing the load rate. The results show that the standardized pollutant yield equals the effective rainfall and that the process of effective load yield follows the effective rainfall hyetograph. Comparisons with measured data show that this model is applicable to various pollutants.

  13. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible. All events were transformed to a uniform magnitude scale; duplicate events and dependent shocks were removed. The completeness and time distribution of the compiled catalog were taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
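
    The contrast between the time-independent (Poisson) and time-dependent (renewal) recurrence models can be illustrated as follows; the lognormal renewal distribution, recurrence time and elapsed time are illustrative assumptions, not values from the study:

        import math
        from scipy import stats

        # Conditional rupture probabilities over the next dt years, contrasting
        # a memoryless Poisson source with a renewal (here lognormal) source.
        # A characteristic recurrence of 300 yr and 150 yr elapsed are made up.
        mean_T, elapsed, dt = 300.0, 150.0, 50.0

        # Poisson: probability does not depend on the time since the last event.
        p_poisson = 1.0 - math.exp(-dt / mean_T)

        # Renewal: P(rupture in [t_e, t_e + dt] | no rupture before t_e).
        dist = stats.lognorm(s=0.5, scale=mean_T)   # aperiodicity ~0.5 assumed
        F = dist.cdf
        p_renewal = (F(elapsed + dt) - F(elapsed)) / (1.0 - F(elapsed))

        print(f"Poisson: {p_poisson:.3f}, renewal: {p_renewal:.3f}")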

  14. An ion species model for positive ion sources - part I description of the model

    CERN Document Server

    Surrey, E

    2014-01-01

    A one-dimensional model of the magnetic multipole volume plasma source has been developed for use in intense ion/neutral atom beam injectors. The model uses plasma transport coefficients for particle and energy flow to create a detailed description of the plasma parameters along an axis parallel to that of the extracted beam. Primarily constructed for applications to neutral beam injection systems on fusion devices, the model concentrates on the hydrogenic isotopes but can be extended to any gas by substitution of the relevant masses, cross sections and rate coefficients. The model considers the flow of fast ionizing electrons that create the ratios of the three hydrogenic isotope ion species, H+, H2+, H3+ (and similarly for deuterium and tritium) as they flow towards the beam extraction electrode, together with the production of negative hydrogenic ions through volume processes. The use of detailed energy balance in the discharge allows the determination of the fraction of the gas density that is in an ato...

  15. Current-voltage model of LED light sources

    DEFF Research Database (Denmark)

    Beczkowski, Szymon; Munk-Nielsen, Stig

    2012-01-01

    Amplitude modulation is rarely used for dimming light-emitting diodes in polychromatic luminaires due to the large color shifts caused by the varying magnitude of the LED driving current and the nonlinear relationship between the intensity of a diode and its driving current. A current-voltage empirical model of light...

  16. A preference-based multiple-source rough set model

    NARCIS (Netherlands)

    M.A. Khan; M. Banerjee

    2010-01-01

    We propose a generalization of Pawlak’s rough set model for the multi-agent situation, where information from an agent can be preferred over that of another agent of the system while deciding membership of objects. Notions of lower/upper approximations are given which depend on the knowledge base of

  17. Reason Maintenance in Product Modelling via Open Source CAD System

    Directory of Open Access Journals (Sweden)

    Z. Ibrahim

    2016-12-01

    Full Text Available This paper considers the present and future challenges of new product design, forecasting, and risk management in the launch strategy for a new product modelling decision process. It proposes the development of a low-cost integrated CAD-CAPP-CAD/CAM product modelling system for the design and manufacture of a proposed product. The system is a mapping between several design phases, such as functional design, technical design and physical design. The modelling data generation process begins with the drafting of a product to be maintained using the drafting software package. From the CAD drawing, the data are transferred to be used as the product models, and a CAPP software package then prepares the operational parameters for the manufacturing of the product. These process data are relayed to a CAM software package, which then generates the automated information-processing functions. The final stage supports design and manufacturing operations, which may yield many benefits in terms of initial equipment and software costs.

  18. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    In total variability modeling, variable length speech utterances are mapped to fixed low-dimensional i-vectors. Central to computing the total variability matrix and i-vector extraction is the computation of the posterior distribution for a latent variable conditioned on an observed feature sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: for i-vector extraction using an already...

  19. LINEAR MODELS FOR MANAGING SOURCES OF GROUNDWATER POLLUTION.

    Science.gov (United States)

    Gorelick, Steven M.; Gustafson, Sven-Ake; ,

    1984-01-01

    Mathematical models for the problem of maintaining a specified groundwater quality while permitting solute waste disposal at various facilities distributed over space are discussed. The pollutants are assumed to be chemically inert and their concentrations in the groundwater are governed by linear equations for advection and diffusion. The aim is to determine a disposal policy which maximises the total amount of pollutants released during a fixed time T while meeting the condition that the concentration everywhere is below prescribed levels.
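
    Under the stated linearity, the management problem reduces to a linear program; a sketch with a hypothetical unit-response matrix (which in practice would be derived from an advection-diffusion transport simulation):

        import numpy as np
        from scipy.optimize import linprog

        # Hedged sketch of the management formulation: maximize total waste
        # disposal subject to concentration limits at monitoring points. With
        # linear advection-diffusion, concentration responds linearly to source
        # strengths: c = R @ q, where R is a unit-response matrix (numbers
        # below are invented for illustration).
        R = np.array([[0.8, 0.1, 0.3],    # response of well 1 to sources 1..3
                      [0.2, 0.9, 0.4]])   # response of well 2
        c_max = np.array([10.0, 8.0])     # allowed concentrations at the wells

        # linprog minimizes, so negate the objective to maximize sum(q).
        res = linprog(c=-np.ones(3), A_ub=R, b_ub=c_max, bounds=[(0, None)] * 3)
        print(res.x, -res.fun)  # optimal source strengths and total disposal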

  20. Strong ground-motion prediction from Stochastic-dynamic source models

    Science.gov (United States)

    Guatteri, Mariagiovanna; Mai, P.M.; Beroza, G.C.; Boatwright, J.

    2003-01-01

    In the absence of sufficient data in the very near source, predictions of the intensity and variability of ground motions from future large earthquakes depend strongly on our ability to develop realistic models of the earthquake source. In this article we simulate near-fault strong ground motion using dynamic source models. We use a boundary integral method to simulate dynamic rupture of earthquakes by specifying dynamic source parameters (fracture energy and stress drop) as spatial random fields. We choose these quantities such that they are consistent with the statistical properties of slip heterogeneity found in finite-source models of past earthquakes. From these rupture models we compute theoretical strong-motion seismograms up to a frequency of 2 Hz for several realizations of a scenario strike-slip Mw 7.0 earthquake and compare empirical response spectra, spectra obtained from our dynamic models, and spectra determined from corresponding kinematic simulations. We find that spatial and temporal variations in slip, slip rise time, and rupture propagation consistent with dynamic rupture models exert a strong influence on near-source ground motion. Our results lead to a feasible approach to specify the variability in the rupture time distribution in kinematic models through a generalization of Andrews' (1976) result relating rupture speed to apparent fracture energy, stress drop, and crack length to 3D dynamic models. This suggests that a simplified representation of dynamic rupture may be obtained to approximate the effects of dynamic rupture without having to do full dynamic simulations.

  1. Combined source apportionment and degradation quantification of organic pollutants with CSIA: 1. Model derivation.

    Science.gov (United States)

    Lutz, S R; Van Breukelen, B M

    2014-06-03

    Compound-specific stable isotope analysis (CSIA) serves as a tool for source apportionment (SA) and for the quantification of the extent of degradation (QED) of organic pollutants. However, simultaneous occurrence of mixing of sources and degradation is generally believed to hamper both SA and QED. On the basis of the linear stable isotope mixing model and the Rayleigh equation, we developed the stable isotope sources and sinks model, which allows for simultaneous SA and QED of a pollutant that is emitted by two sources and degrades via one transformation process. It was shown that the model necessitates at least dual-element CSIA for unequivocal SA in the presence of degradation-induced isotope fractionation, as illustrated for perchlorate in groundwater. The model also enables QED, provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still provide a conservative estimate of the overall extent of degradation. The model can be extended to a larger number of sources and sinks as outlined. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
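
    The two building blocks named above, linear mixing and the Rayleigh equation, can be combined in a few lines; the δ values, mixing fraction and enrichment factor below are illustrative, not the paper's perchlorate data:

        import math

        # Sketch of the model's two building blocks: binary linear mixing of
        # two sources followed by Rayleigh fractionation during degradation.
        # eps is the isotopic enrichment factor (permil); values illustrative.
        def mix(delta_a, delta_b, x_a):
            """delta of an instantaneous two-source mixture (mass-weighted)."""
            return x_a * delta_a + (1.0 - x_a) * delta_b

        def rayleigh(delta0, f, eps=-15.0):
            """Rayleigh equation: delta after degradation to remaining fraction f."""
            return delta0 + eps * math.log(f)

        d0 = mix(-10.0, -20.0, 0.6)      # mixture of sources A and B
        print(d0, rayleigh(d0, f=0.3))   # isotope shift after 70% degradation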

  2. Strangeness production in heavy ion collisions at SPS and RHIC within two-source statistical model

    CERN Document Server

    Lu, Zhong-Dao; Fuchs, C; Zabrodin, E E; Faessler, Amand

    2002-01-01

    The experimental data on hadron yields and ratios in central Pb+Pb and Au+Au collisions at SPS and RHIC energies, respectively, are analysed within a two-source statistical model of an ideal hadron gas. These two sources represent the expanding system of colliding heavy ions, where the hot central fireball is embedded in a larger but cooler fireball. The volume of the central source increases with rising bombarding energy. Results of the two-source model fit to RHIC experimental data at midrapidity coincide with the results of the one-source thermal model fit, indicating the formation of an extended fireball, which is three times larger than the corresponding core at SPS.

  3. Sources of motivation, interpersonal conflict management styles, and leadership effectiveness: a structural model.

    Science.gov (United States)

    Barbuto, John E; Xu, Ye

    2006-02-01

    126 leaders and 624 employees were sampled to test the relationship between sources of motivation and conflict management styles of leaders and how these variables influence the effectiveness of leadership. Five sources of motivation measured by the Motivation Sources Inventory were tested: intrinsic process, instrumental, self-concept external, self-concept internal, and goal internalization. These sources of work motivation were associated with Rahim's modes of interpersonal conflict management (dominating, avoiding, obliging, compromising, and integrating) and with perceived leadership effectiveness. A structural equation model tested leaders' conflict management styles and leadership effectiveness based upon different sources of work motivation. The model explained variance for obliging (65%), dominating (79%), avoiding (76%), and compromising (68%), but explained little variance for integrating (7%). The model explained only 28% of the variance in leader effectiveness.

  4. Different approaches to modeling the LANSCE H- ion source filament performance

    Science.gov (United States)

    Draganic, I. N.; O'Hara, J. F.; Rybarcyk, L. J.

    2016-02-01

    An overview of different approaches to modeling hot tungsten filament performance in the Los Alamos Neutron Science Center (LANSCE) H- surface converter ion source is presented. The most critical components in this negative ion source are two specially shaped wire filaments heated to the working temperature range of 2600 K-2700 K during normal beam production. In order to prevent catastrophic filament failures (creation of hot spots, wire breaking, excessive filament deflection towards the source body, etc.) and to improve understanding of the material erosion processes, we have simulated the filament performance using three different models: a semi-empirical model, a thermal finite-element analysis model, and an analytical model. Results of all three models were compared with data taken during LANSCE beam production. The models were used to support the recent successful transition of the beam pulse repetition rate from 60 Hz to 120 Hz.

  5. Gravitational wave source counts at high redshift and in models with extra dimensions

    Science.gov (United States)

    García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel

    2016-07-01

    Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with the distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high-redshift regime, and we discuss the complications of applying this methodology to high-redshift sources. We also allow for models with compactified extra dimensions, as in the Kaluza-Klein model. Furthermore, we consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ωm,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher-order corrections of the source counts in terms of the signal-to-noise S/N. We then forecast the sensitivity of future observations in constraining GW physics and the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise, with the number of sources ranging from 10 to 500, as expected from future detectors. We find that with 500 events it will be possible to provide constraints on the present matter density parameter Ωm,0 on the order of a few percent, with the precision growing fast with the number of events. In the case of extra dimensions we find that, depending on the degeneracies of the model, with 500 events it may be possible to provide stringent limits on the existence of the extra dimensions if the aforementioned degeneracies can be broken.

  6. Source modelling of train noise - Literature review and some initial measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xuetao; Jonasson, Hans; Holmberg, Kjell

    2000-07-01

    A literature review of source modelling of railway noise is reported. Measurements on a special test rig at Surahammar and on the new railway line between Arlanda and Stockholm City are reported and analyzed. In the analysis the train is modelled as a number of point sources, with or without directivity, and each source is combined with analytical sound propagation theory to predict the sound propagation pattern best fitting the measured data. Wheel/rail rolling noise is considered to be the most important noise source. The rolling noise can be modelled as an array of moving point sources, which have a dipole-like horizontal directivity and some kind of vertical directivity. In general it is necessary to distribute the point sources at several heights. Based on our model analysis, the source heights for the rolling noise should be below the wheel axles, and the most important height is about a quarter of the wheel diameter above the railheads. When train speeds are greater than 250 km/h, aerodynamic noise will become important and even dominant. At train speeds below about 220 km/h it may be important only for low-frequency components. Little data are available for these cases. It is believed that aerodynamic noise has dipole-like directivity. Its spectrum depends on many factors: speed, railway system, type of train, bogies, wheels, pantograph, presence of barriers and even weather conditions. Other sources such as fans, engine, transmission and carriage bodies are at most second-order noise sources, but for trains with a diesel locomotive the engine noise will be dominant at train speeds below about 100 km/h. The Nord 2000 comprehensive model for sound propagation outdoors, together with a source model based on the understanding above, can suitably handle the problems of railway noise propagation in one-third octave bands, although there are still problems left to be solved.
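
    A rough sketch of the source model favored by the review: rolling noise as incoherent point sources with dipole-like horizontal directivity, placed about a quarter wheel diameter above the railheads. Geometry and sound power levels are made up, and the output is a relative level, not a calibrated SPL:

        import numpy as np

        # Energetic sum over point sources with a cos^2 dipole directivity
        # (maximum radiation perpendicular to the track, assumed along y).
        def level_at_receiver(src_positions, src_Lw, receiver):
            p2 = 0.0
            for (x, y, z), Lw in zip(src_positions, src_Lw):
                dx, dy, dz = receiver[0] - x, receiver[1] - y, receiver[2] - z
                r = np.sqrt(dx**2 + dy**2 + dz**2)
                cos2 = dy**2 / r**2                 # dipole axis normal to track
                p2 += 10**(Lw / 10) * cos2 / (4 * np.pi * r**2)
            return 10 * np.log10(p2)                # relative level [dB]

        # sources ~0.25 m above the railheads, receiver 25 m away, 3.5 m high
        srcs = [(-5.0, 0.0, 0.25), (0.0, 0.0, 0.25), (5.0, 0.0, 0.25)]
        print(level_at_receiver(srcs, [100.0, 100.0, 100.0], (0.0, 25.0, 3.5)))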

  7. A Moment Approach to Modeling Negative Ion Sources.

    Science.gov (United States)

    1987-12-01

    concentrated on consistently modeling the plasma discharge. Bretagne et al. [10] numerically solved the Boltzmann equation to calculate the ... electrons. Looking at the electron energy distribution function (EEDF) for a typical MMIS, shown in Fig. 3 (from Bretagne et al.), it is seen that there ... (Fig. 3: EEDF calculated by Bretagne et al. [10:816] for a 40 mTorr, 90 ...)

  8. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2016-01-01

    Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately figure out the pollutant amounts or positions, whether for a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5 %. For cases of multi-point sources and multiple variables, there are some errors in the computed results because there exist many possible combinations of the pollution sources. But, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5 %, which proves that the established source identification model can be used to direct emergency responses.
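
    A sketch of the inverse problem set up here: the objective function compares observations against the 1-D analytic advection-dispersion solution for an instantaneous point release, which a genetic algorithm would then minimize. All parameter values are illustrative, and the GA itself is replaced by a trivial candidate comparison:

        import numpy as np

        # Analytic 1-D advection-dispersion solution for an instantaneous
        # release of mass M at x0 in a uniform channel (u: velocity, D:
        # dispersion coefficient, A: cross-section). Values are illustrative.
        def conc(x, t, M, x0, u=0.5, D=5.0, A=20.0):
            return (M / (A * np.sqrt(4 * np.pi * D * t))) * \
                   np.exp(-((x - x0) - u * t)**2 / (4 * D * t))

        x_obs, t_obs = np.array([200.0, 400.0, 600.0]), 900.0
        c_obs = conc(x_obs, t_obs, M=50.0, x0=100.0)        # synthetic "truth"

        def objective(params):
            M, x0 = params
            return np.sum((conc(x_obs, t_obs, M, x0) - c_obs)**2)

        # A GA would evolve a population of (M, x0) candidates; here we just
        # check that the true parameters score best among random candidates.
        rng = np.random.default_rng(1)
        cands = [(50.0, 100.0)] + list(rng.uniform([10, 0], [100, 500], (5, 2)))
        print(min(cands, key=objective))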

  9. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.
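
    The CMB receptor-model idea reduces to a nonnegative least-squares fit of source fingerprints to an ambient profile; the 4-compound fingerprints below are invented for illustration and are not actual PAH profiles:

        import numpy as np
        from scipy.optimize import nnls

        # Hedged sketch of the CMB receptor model: the measured profile is a
        # nonnegative combination of source profiles, c ~ F @ s.
        F = np.array([[0.40, 0.10, 0.30],
                      [0.30, 0.20, 0.30],
                      [0.20, 0.30, 0.20],
                      [0.10, 0.40, 0.20]])   # columns: three source types
        c = F @ np.array([0.6, 0.3, 0.1])    # synthetic ambient profile

        s, resid = nnls(F, c)                # nonnegative source contributions
        print(s, resid)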

  11. Remote Sensing of Alpha and Beta Sources - Modeling Summary

    Energy Technology Data Exchange (ETDEWEB)

    Dignon, J; Frank, M; Cherepy, N

    2005-10-20

    Evaluating the potential for optical detection of the products of interactions of energetic electrons or other particles with the background atmosphere depends on predictions of changes in atmospheric concentrations of species which would generate detectable spectral signals within the range of observation. The solar-blind region of the spectrum, in the ultraviolet, would be the logical band for outdoor detection (see Figure 1). The chemistry relevant to these processes is composed of ion-molecule reactions involving the initially created N2+ and O2+ ions, and their subsequent interactions with ambient trace atmospheric constituents. Effective modeling of the atmospheric chemical system acted upon by energetic particles requires knowledge of the dominant mechanisms that exchange charge and associate it with atmospheric constituents, kinetic parameters of the individual processes (see e.g. Brasseur and Solomon, 1995), and a solver for the coupled differential equations that is accurate for the very stiff set of time constants involved. The LLNL box model, VOLVO, simulates the diel cycle of trace constituent photochemistry for any point on the globe over the wide range of time scales present, using a stiff Gear-type ODE solver, i.e. LSODE. It has been applied to problems such as tropospheric and stratospheric nitrogen oxides, stratospheric ozone production and loss, and tropospheric hydrocarbon oxidation. For this study we have included the appropriate ion flux.
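
    The stiff-solver requirement mentioned above can be illustrated with a toy ion-chemistry system (fast charge exchange coupled to slow recombination; the species pairing and rate constants are arbitrary), solved with a Gear-type BDF method:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy stiff system: fast charge exchange N2+ -> O2+ coupled with slow
        # electron-ion recombination. Rate constants are arbitrary; the point
        # is the disparity of time constants that demands a stiff solver.
        def rhs(t, y):
            n2p, o2p, e = y                  # [N2+], [O2+], electrons
            k_ex, k_rec = 1e2, 1e-3          # fast exchange, slow recombination
            return [-k_ex * n2p,
                     k_ex * n2p - k_rec * o2p * e,
                    -k_rec * o2p * e]

        sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 1.0], method="BDF")
        print(sol.y[:, -1])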

  12. New Data Source for Studying and Modelling the Topside Ionosphere

    Science.gov (United States)

    Huang, X.; Reinisch, B.; Bilitza, D.; Benson, R.

    2001-05-01

    The existing uncertainties about the electron density profiles in the topside ionosphere, i.e., in the height regime from hmF2 to ~2000 km, require the search for new data sources. Millions of ionograms had been recorded by the ISIS satellites that were never analyzed in terms of electron density profiles. In recent years an effort started to digitize the analog recordings to prepare the ionograms for computerized analysis. To date, approximately 300,000 ISIS-2 topside-sounder ionograms have been digitized. Computation of electron density profiles from these ionograms requires identifying the echo traces on the ionogram and then applying an inversion algorithm. An automatic topside ionogram scaler with true height algorithm (TOPIST) has been developed that successfully scales ~70 % of the ionograms. This paper shows how the digital ionograms are processed and the profiles calculated. The most difficult part of the task is the automatic scaling of the echo traces in the ISIS ionograms to provide R'(f), where R' is the virtual range of the echo at frequency f. Characteristic resonance features seen in the topside ionograms occur at the gyro and plasma frequencies. An elaborate scheme was developed to measure these resonance frequencies in order to determine the local plasma and gyrofrequencies. This information helps in the identification of the O and X traces, and it provides the starting density of the electron density profile from the satellite altitude to hmF2. An 'editing process' is available to manually scale the more difficult ionograms. The electron density data and the TOPIST software will be made available online from NASA's National Space Science Data Center (NSSDC) at http://nssdc.gsfc.nasa.gov/space/isis/isis-status.html. This site already provides access to the digitized ISIS ionogram data and to related software. A search page lets users select data by location, time, and a host of other search criteria. Selected ionogram data can be viewed on

  13. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential to PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx), driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  14. Skull defects in finite element head models for source reconstruction from magnetoencephalography signals

    Directory of Open Access Journals (Sweden)

    Stephan Lau

    2016-04-01

    Full Text Available Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above the intact skull and above skull defects, respectively, were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above the intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery.

  15. An incentive-based source separation model for sustainable municipal solid waste management in China.

    Science.gov (United States)

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 Euros tonne(-1)), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study.

  16. Kalman filter-based microphone array signal processing using the equivalent source model

    Science.gov (United States)

    Bai, Mingsian R.; Chen, Ching-Cheng

    2012-10-01

    This paper demonstrates that microphone array signal processing can be implemented by using adaptive model-based filtering approaches. Nearfield and farfield sound propagation models are formulated into state-space forms in light of the Equivalent Source Method (ESM). In the model, the unknown source amplitudes of the virtual sources are adaptively estimated by using Kalman filters (KFs). The nearfield array aimed at noise source identification is based on a Multiple-Input-Multiple-Output (MIMO) state-space model with minimal realization, whereas the farfield array technique aimed at speech quality enhancement is based on a Single-Input-Multiple-Output (SIMO) state-space model. Performance of the nearfield array is evaluated in terms of relative error of the velocity reconstructed on the actual source surface. Numerical simulations for the nearfield array were conducted with a baffled planar piston source. From the error metric, the proposed KF algorithm proved effective in identifying noise sources. Objective simulations and subjective experiments are undertaken to validate the proposed farfield arrays in comparison with two conventional methods. The results of objective tests indicated that the farfield arrays significantly enhanced the speech quality and word recognition rate. The results of subjective tests post-processed with the analysis of variance (ANOVA) and a post-hoc Fisher's least significant difference (LSD) test have shown great promise in the KF-based microphone array signal processing techniques.
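
    A minimal Kalman filter sketch for the state-space idea above, with the state q holding equivalent-source amplitudes and a hypothetical ESM propagation matrix H mapping them to microphone pressures (all matrices below are random placeholders, not an actual ESM transfer function):

        import numpy as np

        def kf_step(q, P, p_meas, H, Q, R, A=None):
            """One Kalman filter predict/update cycle for source amplitudes."""
            A = np.eye(len(q)) if A is None else A
            q, P = A @ q, A @ P @ A.T + Q                   # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
            q = q + K @ (p_meas - H @ q)                    # update
            P = (np.eye(len(q)) - K @ H) @ P
            return q, P

        rng = np.random.default_rng(0)
        H = rng.normal(size=(8, 3))                         # 8 mics, 3 sources
        q_true = np.array([1.0, -0.5, 0.2])
        q, P = np.zeros(3), np.eye(3)
        for _ in range(200):
            p = H @ q_true + 0.05 * rng.normal(size=8)      # noisy measurement
            q, P = kf_step(q, P, p, H, Q=1e-4 * np.eye(3), R=0.05**2 * np.eye(8))
        print(q)  # converges toward q_true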

  17. General expression of double ellipsoidal heat source model and its error analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In order to analyze the maximum power density error arising from different heat flux distribution parameter values in the double ellipsoidal heat source model, a general expression was derived from the Goldak double ellipsoidal heat source model, and the error of the maximum power density was analyzed on this basis. The calculation error of thermal cycling parameters caused by the maximum power density error was compared quantitatively by numerical simulation. The results show that, to guarantee the accuracy of welding numerical simulation, it is better to introduce an error correction coefficient into the expression of the Goldak double ellipsoidal heat source model. In addition, the heat flux distribution parameter should take a higher value for higher power density welding methods.
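
    For reference, a sketch of the front quadrant of the Goldak double-ellipsoid power density (the rear quadrant swaps a_f, f_f for a_r, f_r, with f_f + f_r = 2); parameter values below are illustrative, not tied to a particular welding process:

        import numpy as np

        # Goldak double-ellipsoid heat source, front quadrant:
        # q = 6*sqrt(3)*f_f*Q / (a_f*b*c*pi*sqrt(pi))
        #     * exp(-3x^2/a_f^2 - 3y^2/b^2 - 3z^2/c^2)
        def goldak_front(x, y, z, Q, a_f, b, c, f_f=0.6):
            return (6 * np.sqrt(3) * f_f * Q /
                    (a_f * b * c * np.pi * np.sqrt(np.pi))) * \
                   np.exp(-3*x**2/a_f**2 - 3*y**2/b**2 - 3*z**2/c**2)

        # peak power density at the source center for illustrative parameters
        print(goldak_front(0.0, 0.0, 0.0, Q=2000.0, a_f=2e-3, b=2e-3, c=6e-3))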

  18. Bayesian modeling of source confusion in LISA data

    CERN Document Server

    Umstätter, Richard; Christensen, Nelson; Hendry, Martin; Meyer, Renate; Simha, Vimal; Veitch, John; Vigeland, Sarah; Woan, Graham

    2005-01-01

    One of the greatest data analysis challenges for the Laser Interferometer Space Antenna (LISA) is the need to account for a large number of gravitational wave signals from compact binary systems expected to be present in the data. We introduce the basis of a Bayesian method that we believe can address this challenge, and demonstrate its effectiveness on a simplified problem involving one hundred synthetic sinusoidal signals in noise. We use a reversible jump Markov chain Monte Carlo technique to infer simultaneously the number of signals present, the parameters of each identified signal, and the noise level. Our approach therefore tackles the detection and parameter estimation problems simultaneously, without the need to evaluate formal model selection criteria, such as the Akaike Information Criterion or explicit Bayes factors. The method does not require a stopping criterion to determine the number of signals, and produces results which compare very favorably with classical spectral techniques.

  19. Submillimetre continuum emission from Class 0 sources: Theory, Observations, and Modelling

    CERN Document Server

    Rengel, Miriam; Hodapp, Klaus; Froebrich, Dirk; Wolf, Sebastian; Eisloeffel, Jochen

    2004-01-01

    We report on a study of the thermal dust emission of the circumstellar envelopes of a sample of Class 0 sources. The physical structure (geometry, radial intensity profile, spatial temperature and spectral energy distribution) and properties (mass, size, bolometric luminosity (L_bol) and temperature (T_bol), and age) of Class 0 sources are derived here in an evolutionary context. This is done by combining SCUBA imaging at 450 and 850 µm of the thermal dust emission of envelopes of Class 0 sources in the Perseus and Orion molecular cloud complexes with a model of the envelope, with the implementation of techniques like blackbody fitting and radiative transfer calculations of dusty envelopes, and with the Smith evolutionary model for protostars. The modelling results obtained here confirm the validity of a simple spherically symmetric model envelope, and the assumptions about density and dust distributions following the standard envelope model. The spherically symmetric model reproduces reasonably well the observe...

  20. A Method of Auxiliary Sources Approach for Modelling the Impact of Ground Planes on Antenna

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2006-01-01

    The Method of Auxiliary Sources (MAS) is employed to model the impact of finite ground planes on the radiation from antennas. Two different antenna test cases are shown and the calculated results agree well with reference measurements.

  1. Demand Estimation for US Apple Juice Imports: A Restricted Source Differentiated AIDS Model

    OpenAIRE

    Mekonnen, Dawit Kelemework; Fonsah, Esendugue Greg

    2011-01-01

    Although this paper focuses on apple juice, a restricted version of the source differentiated Almost Ideal Demand System (RSDAIDS) was used to examine U.S. import demand for fresh apples, apple juice and other processed apple products. Apple imports were differentiated by type and source of origin, and the RSDAIDS model was estimated after imposing the general demand restrictions of adding-up, homogeneity and Slutsky symmetry. Seasonality and trend variables were also included in the model. The estimation r...

  2. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    Science.gov (United States)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  3. LEAD ACID BATTERY MODELING FOR ELECTRIC CAR POWER SOURCES

    Directory of Open Access Journals (Sweden)

    Bambang Sri Kaloko

    2010-06-01

    Full Text Available Successful commercialization of electric vehicles will require a confluence of technology, market, economic, and political factors that transform EVs into an attractive choice for consumers. The characteristics of the traction battery will play a critical role in this transformation. The relationship between battery characteristics such as power, capacity and efficiency, and EV customer satisfaction is discussed based on real-world experience. A general problem, however, is that electrical energy can hardly be stored. In general, the storage of electrical energy requires its conversion into another form of energy. Electrical energy is typically obtained through conversion of chemical energy stored in devices such as batteries. In batteries the energy of chemical compounds acts as the storage medium, and during discharge a chemical process occurs that generates energy which can be drawn from the battery in the form of an electric current at a certain voltage. A computer simulation is developed to examine overall battery design with MATLAB/Simulink. Battery modelling with this program has an error level of less than 5%.   Keywords: Electrochemistry, lead acid battery, stored energy
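
    In the same spirit as the Simulink model above, a first-order Thevenin-equivalent sketch of a lead-acid battery; all parameter values, including the crude linear OCV(SOC), are hypothetical:

        import numpy as np
        from scipy.integrate import solve_ivp

        # First-order Thevenin model: terminal voltage = OCV(SOC) - I*R0 - V1,
        # with V1 the voltage across an RC branch. Parameters are made up.
        R0, R1, C1, Q = 0.02, 0.015, 2000.0, 60.0 * 3600   # ohm, ohm, F, coulomb

        def ocv(soc):
            return 11.7 + 1.2 * soc        # crude linear OCV for a 12 V pack

        def rhs(t, y, i_load):
            soc, v1 = y
            return [-i_load / Q, -v1 / (R1 * C1) + i_load / C1]

        i_load = 10.0                       # constant 10 A discharge
        sol = solve_ivp(rhs, (0, 3600), [1.0, 0.0], args=(i_load,), max_step=10.0)
        v_term = ocv(sol.y[0]) - sol.y[1] - i_load * R0
        print(v_term[-1])                   # terminal voltage after 1 h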

  4. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  5. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
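
    The spectrum-derivation step can be illustrated as a least-squares unfolding of basis depth-dose curves; the synthetic exponential "PDDs" below stand in for pre-computed mono-energetic curves and are not the paper's data:

        import numpy as np
        from scipy.optimize import least_squares

        # Fit weights of a few basis PDDs so their mix reproduces a measured
        # PDD, via Levenberg-Marquardt style least squares.
        depth = np.linspace(0.0, 20.0, 50)                        # cm
        basis = np.stack([np.exp(-depth / L) for L in (3.0, 6.0, 12.0)])
        w_true = np.array([0.2, 0.5, 0.3])
        pdd_meas = w_true @ basis                                  # synthetic PDD

        res = least_squares(lambda w: w @ basis - pdd_meas,
                            x0=np.ones(3) / 3, method="lm")
        print(res.x)   # recovered spectral weights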

  6. Putting in operation a full-scale ultracold-neutron source model with superfluid helium

    Science.gov (United States)

    Serebrov, A. P.; Lyamkin, V. A.; Prudnikov, D. V.; Keshishev, K. O.; Boldarev, S. T.; Vasil'ev, A. V.

    2017-02-01

    A project of an ultracold-neutron source for the WWR-M reactor, based on superfluid helium for ultracold-neutron production, has been developed. The full-scale source model, including all required cryogenic and vacuum equipment and the cryostat, has been created. A superfluid helium temperature of T = 1.08 K without a heat load, and T = 1.371 K with a heat load of P = 60 W on the simulator, has been achieved in experiments at the technological complex of the ultracold-neutron source. The result proves the feasibility of implementing the ultracold-neutron source at the WWR-M reactor and the possibility of applying superfluid helium in nuclear engineering.

  7. Head model and electrical source imaging: A study of 38 epileptic patients

    Directory of Open Access Journals (Sweden)

    Gwénael Birot

    2014-01-01

    We found that all head models provided very similar source locations. In patients having a positive post-operative outcome, at least 74% of the source maxima were within the resection. The median distance from the source maximum to the nearest intracranial electrode showing IED was 13.2, 15.6 and 15.6 mm for LSMAC, BEM and FEM, respectively. The study demonstrates that in clinical applications, the use of highly sophisticated and difficult to implement head models is not a crucial factor for an accurate ESI.

  8. An Active Global Attack Model for Sensor Source Location Privacy: Analysis and Countermeasures

    Science.gov (United States)

    Yang, Yi; Zhu, Sencun; Cao, Guohong; Laporta, Thomas

    Source locations of events are sensitive contextual information that needs to be protected in sensor networks. Previous work focuses on either an active local attacker that traces back to a real source in a hop-by-hop fashion, or a passive global attacker that eavesdrops/analyzes all network traffic to discover real sources. An active global attack model, which is more realistic and powerful than current ones, has not been studied yet. In this paper, we not only formalize this strong attack model, but also propose countermeasures against it.

  9. Beam conditions for radiation generated by an electromagnetic Hermite-Gaussian model source

    Institute of Scientific and Technical Information of China (English)

    LI Jia; XIN Yu; CHEN Yan-ru

    2011-01-01

    Within the framework of the correlation theory of electromagnetic laser beams, the far-field cross-spectral density matrix of the light radiated from an electromagnetic Hermite-Gaussian model source is derived. By utilizing the convergence property of Hermite polynomials, the conditions the matrices must satisfy for the source to generate an electromagnetic Hermite-Gaussian beam are obtained. Furthermore, in order to generate a scalar Hermite-Gaussian model beam, the source is required to be rather coherent locally in the spatial domain.

  10. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when... ...are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...

  11. Towards the full information chain theory: answer depth and source models

    CERN Document Server

    Perevalov, Eugene

    2012-01-01

    A problem of optimal information acquisition for use in general decision making problems is considered. This motivates the need to develop quantitative measures of an information source's capability to supply accurate information, depending on the particular content of that information. A companion article developed the notion of a question difficulty functional for questions concerning input data for a decision making problem. Here, the answers an information source may provide in response to such questions are considered. In particular, a real-valued answer depth functional measuring the degree of accuracy of such answers is introduced, and its overall form is derived under the assumption of an isotropic knowledge structure of the information source. Additionally, information source models that relate answer depth to question difficulty are discussed. It turns out to be possible to introduce a notion of an information source capacity as the highest value of the answer depth the source is capable of provid...

  12. AC Small Signal Modeling of PWM Y-Source Converter by Circuit Averaging and Averaged Switch Modeling Technique

    DEFF Research Database (Denmark)

    Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede

    2016-01-01

    The magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications, e.g. in renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of the turns ratio..., which is inherited from a special coupled inductor with three windings. Due to the importance of modeling in the converter design procedure, this paper is dedicated to the dc and ac small-signal modeling of the PWM Y-source converter. The derived transfer functions are presented in detail and have been...

  13. A stable isotope model for combined source apportionment and degradation quantification of environmental pollutants

    Science.gov (United States)

    Lutz, Stefanie; Van Breukelen, Boris

    2014-05-01

    Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation
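
    Because the mixing equation is linear in the source fraction and the Rayleigh equation is linear in ln(f), the degradation-after-mixing case with dual-element CSIA reduces to a small linear system. The sketch below is a minimal illustration with made-up numbers, not the SISS implementation.

    ```python
    import numpy as np

    # Two-source, one-pathway toy case (all values are illustrative only).
    dA = {"C": -28.0, "H": -80.0}        # source A signatures (per mil)
    dB = {"C": -24.0, "H": -40.0}        # source B signatures
    eps = {"C": -2.0, "H": -25.0}        # enrichment factors of the pathway
    d_obs = {"C": -24.89, "H": -47.08}   # measured mixture signatures

    # d_obs = x*dA + (1-x)*dB + eps*ln(f)  ->  linear in x and ln(f)
    A = np.array([[dA[e] - dB[e], eps[e]] for e in ("C", "H")])
    b = np.array([d_obs[e] - dB[e] for e in ("C", "H")])
    x, ln_f = np.linalg.solve(A, b)      # source-A fraction, ln(remaining fraction)

    print(f"source A fraction = {x:.2f}, extent of degradation = {1 - np.exp(ln_f):.2f}")
    ```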

  14. Dorsal column steerability with dual parallel leads using dedicated power sources: a computational model.

    Science.gov (United States)

    Lee, Dongchul; Gillespie, Ewan; Bradley, Kerry

    2011-02-10

    In spinal cord stimulation (SCS), concordance of stimulation-induced paresthesia over painful body regions is a necessary condition for therapeutic efficacy. Since patient pain patterns can be unique, a common stimulation configuration is the placement of two leads in parallel in the dorsal epidural space. This construct provides flexibility in steering stimulation current mediolaterally over the dorsal column to achieve better pain-paresthesia overlap. Using a mathematical model with an accurate fiber diameter distribution, we studied the ability of dual parallel leads to steer stimulation between adjacent contacts using (1) a single-source system and (2) a multi-source system with a dedicated current source for each contact. The volume conductor model of a low-thoracic spinal cord with epidurally positioned dual parallel (2 mm separation) percutaneous leads was first created, and the electric field was calculated using ANSYS, a finite element modeling tool. The activating function for 10 µm fibers was computed as the second difference of the extracellular potential along the nodes of Ranvier on the nerve fibers in the dorsal column. The volume of activation (VOA) and the central point of the VOA were computed using a predetermined threshold of the activating function. The model compared the field-steering results of the single-source and dedicated-power-source systems on dual 8-contact stimulation leads. The model predicted that the multi-source system can target more central points of stimulation on the dorsal column than a single-source system (100 vs. 3), and that the mean mediolateral steering step is 0.02 mm for multi-source systems vs. 1 mm for single-source systems, a 50-fold improvement. The ability to center stimulation regions in the dorsal column with high resolution may allow for better optimization of paresthesia-pain overlap in patients.
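
    The activating-function computation described here reduces to a second difference along the fiber. A minimal sketch with a toy potential profile (node spacing and threshold are assumptions, and real values would come from the FEM field solution):

    ```python
    import numpy as np

    # Extracellular potential sampled at nodes of Ranvier along one fiber.
    z = np.arange(-20, 21) * 1.15e-3           # assumed node positions (m)
    phi = -1e-3 * np.exp(-(z / 5e-3) ** 2)     # toy FEM potential profile (V)

    # Activating function = second difference of the potential at inner nodes.
    af = phi[:-2] - 2.0 * phi[1:-1] + phi[2:]
    excited = af > 0.2 * af.max()              # assumed activation threshold
    ```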

  15. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies are proposed in the optimization. First, sparse constraints on the parameters of the model are included, enforcing a limited number of simultaneously active sources. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with SRP-PHAT strategies.

  16. Optical modeling of sunlight by using partially coherent sources in organic solar cells.

    Science.gov (United States)

    Alaibakhsh, Hamzeh; Darvish, Ghafar

    2016-03-01

    We investigate the effects of coherent and partially coherent sources in the optical modeling of organic solar cells. Two different organic solar cells are investigated: one without a substrate and the other with a millimeter-sized glass substrate. The coherent light absorption is calculated with rigorous coupled-wave analysis. The result of this method is convolved with a distribution function to calculate the partially coherent light absorption. We propose a new formulation to accurately model sunlight as a set of partially coherent sources. In the structure with the glass substrate, accurate sunlight modeling eliminates coherent effects in the thick substrate while leaving the coherence in the other layers unaffected. Using partially coherent instead of coherent sources in sunlight simulations yields a smoother absorption spectrum, but the change in absorption efficiency is negligible.
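
    The convolution step can be sketched in a few lines. This is an illustrative stand-in, not the paper's formulation: the fringed coherent spectrum is synthetic, and the Gaussian kernel width is an assumed proxy for the source coherence properties.

    ```python
    import numpy as np

    wl = np.linspace(300.0, 900.0, 601)                  # wavelength grid (nm)
    a_coh = 0.5 + 0.3 * np.sin(wl / 3.0)                 # toy fringed coherent spectrum

    kernel = np.exp(-0.5 * ((wl - wl.mean()) / 10.0) ** 2)
    kernel /= kernel.sum()                               # normalised spectral kernel

    # Partially coherent absorption: fringes largely wash out after smoothing.
    a_partial = np.convolve(a_coh, kernel, mode="same")
    ```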

  17. Modeling of plasma transport and negative ion extraction in a magnetized radio-frequency plasma source

    Science.gov (United States)

    Fubiani, G.; Garrigues, L.; Hagelaar, G.; Kohen, N.; Boeuf, J. P.

    2017-01-01

    Negative ion sources for fusion are high-density plasma sources in large discharge volumes. There are many challenges in the modeling of these sources, due to numerical constraints associated with the high plasma density, the coupling between plasma and neutral transport and chemistry, the presence of a magnetic filter, and the extraction of negative ions. In this paper we present recent results concerning these different aspects. Emphasis is put on the modeling approach and on the methods and approximations. The models are neither fully predictive nor as complete as engineering codes would be, but they are used to identify the basic principles and to better understand the physics of negative ion sources.

  18. A factor analysis-multiple regression model for source apportionment of suspended particulate matter

    Science.gov (United States)

    Okamoto, Shin'ichi; Hayashi, Masayuki; Nakajima, Masaomi; Kainuma, Yasutaka; Shiozawa, Kiyoshige

    A factor analysis-multiple regression (FA-MR) model has been used for a source apportionment study in the Tokyo metropolitan area. Five source types could be identified by varimax-rotated factor analysis: refuse incineration, soil and automobile, secondary particles, sea salt, and steel mill. Quantitative estimates from the FA-MR model corresponded to the contributing concentrations calculated with a weighted least-squares chemical mass balance (CMB) model. However, the refuse-incineration source type identified by the FA-MR model was closer to biomass burning than to an incineration plant. The FA-MR estimates of the sea salt and steel mill contributions contained those of other sources that share the same temporal variation of contributing concentrations; this artifact was caused by multicollinearity. Although this result shows the limitation of the multivariate receptor model, it gives useful information on source types and their distribution when compared with the results of the CMB model. In the Tokyo metropolitan area, the contributions from soil (including road dust), automobile, secondary particles and refuse incineration (biomass burning) were larger than the industrial contributions of fuel oil combustion and steel mill. However, since vanadium is highly correlated with SO₄²⁻ and other secondary-particle-related elements, a major portion of the secondary particles is considered to be related to fuel oil combustion.
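
    The FA-MR workflow maps onto standard tooling. The sketch below is a generic illustration on synthetic data, not the study's configuration: element count, factor count and the contribution bookkeeping are all placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(120, 12))        # samples x elemental concentrations (toy)
    tsp = X.sum(axis=1) + rng.normal(scale=0.5, size=120)  # total particulate mass

    # Step 1: varimax-rotated factor analysis identifies source types.
    Z = StandardScaler().fit_transform(X)
    fa = FactorAnalysis(n_components=5, rotation="varimax").fit(Z)
    scores = fa.transform(Z)                 # factor scores ~ source strengths

    # Step 2: multiple regression of total mass on factor scores gives the
    # contributing concentration of each source type per sample.
    reg = LinearRegression().fit(scores, tsp)
    source_contrib = reg.coef_ * scores      # per-sample source contributions
    ```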

  19. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be managed entirely in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the retrieval of output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD.
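
    For orientation, the basic ingredient of such models is the Gaussian plume kernel. The sketch below shows a ground-reflected point-source version with illustrative dispersion curves; it is not CAREA's or AERMOD's actual formulation. Area sources are then handled by summing many such point releases over the polygon, which is why source and receptor counts drive the calculation time.

    ```python
    import numpy as np

    def gaussian_plume(q, u, x, y, z, h):
        """Ground-reflected Gaussian plume; sigma curves are illustrative only."""
        sy = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)    # lateral dispersion (m)
        sz = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)    # vertical dispersion (m)
        lateral = np.exp(-y**2 / (2.0 * sy**2))
        vertical = (np.exp(-(z - h)**2 / (2.0 * sz**2)) +
                    np.exp(-(z + h)**2 / (2.0 * sz**2)))  # image-source reflection
        return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

    # Concentration 500 m downwind of a 10 m release, 20 m off-axis, at 1.5 m height.
    c = gaussian_plume(q=1.0, u=3.0, x=500.0, y=20.0, z=1.5, h=10.0)
    ```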

  20. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of source water total organic carbon (TOC) concentrations can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or of unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate variables (e.g., temperature) and land surface variables (e.g., soil moisture) as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach can capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
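
    As a minimal illustration of local regression for TOC prediction, the sketch below fits a lowess (locally weighted, degree-1) curve to synthetic data with statsmodels. The paper's model is a multivariate local polynomial regression, so this single-predictor example is a simplified stand-in.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    soil_moisture = np.sort(rng.uniform(0.10, 0.45, 200))   # assumed predictor
    toc = 2.0 + 8.0 * soil_moisture**2 + rng.normal(scale=0.3, size=200)  # mg/L (toy)

    # Local regression: frac controls the neighbourhood size of each local fit.
    fitted = sm.nonparametric.lowess(toc, soil_moisture, frac=0.3)
    # fitted[:, 0] = sorted predictor values, fitted[:, 1] = local-fit TOC estimates
    ```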

  1. The Analytical Repository Source-Term (AREST) model: Description and documentation

    Energy Technology Data Exchange (ETDEWEB)

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs.

  2. Solving the forward problem in EEG source analysis by spherical and fdm head modeling: a comparative analysis - biomed 2009

    NARCIS (Netherlands)

    Vatta, F.; Meneghini, F.; Esposito, F.; Mininel, S.; Di Salle, F.

    2009-01-01

    Neural source localization techniques based on electroencephalography (EEG) use scalp potential data to infer the location of underlying neural activity. This procedure entails modeling the sources of EEG activity and modeling the head volume conduction process to link the modeled sources to the EEG

  3. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    Science.gov (United States)

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It was previously observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic-type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs, with tsunami sizes above a completeness level as a mark indicating that a tsunami was generated. The ETAS parameters are estimated using the maximum-likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip-slip mechanism) are applied to the Global CMT catalog, the estimated ETAS parameters are consistent with those obtained from the tsunami catalog. In particular, the dip-slip condition appears to result in a near-zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
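
    For reference, the ETAS conditional intensity being fitted has the standard form below, where μ is the background (Poisson) rate, m₀ the magnitude completeness threshold, and (K, α, c, p) the triggering parameters estimated by maximum likelihood; the exact parametrisation used in the study may differ in detail.

    ```latex
    \[
      \lambda(t \mid \mathcal{H}_t) \;=\; \mu \;+\; \sum_{i \,:\, t_i < t}
        K \, e^{\alpha (m_i - m_0)} \, (t - t_i + c)^{-p}
    \]
    ```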

  4. Source-sector contributions to European ozone and fine PM in 2010 using AQMEII modeling data

    Science.gov (United States)

    Karamchandani, Prakash; Long, Yoann; Pirovano, Guido; Balzarini, Alessandra; Yarwood, Greg

    2017-05-01

    Source apportionment modeling provides valuable information on the contributions of different source sectors and/or source regions to ozone (O3) or fine particulate matter (PM2.5) concentrations. This information can be useful in designing air quality management strategies and in understanding the potential benefits of reducing emissions from a particular source category. The Comprehensive Air quality Model with Extensions (CAMx) offers unique source attribution tools, called the Ozone and Particulate Source Apportionment Technology (OSAT/PSAT), which track source contributions. We present results from a CAMx source attribution modeling study for a summer month and a winter month using a recently evaluated European CAMx modeling database developed for Phase 3 of the Air Quality Model Evaluation International Initiative (AQMEII). The contributions of several source sectors (including model boundary conditions of chemical species representing transport of emissions from outside the modeling domain as well as initial conditions of these species) to O3 or PM2.5 concentrations in Europe were calculated using OSAT and PSAT, respectively. A 1-week spin-up period was used to reduce the influence of initial conditions. Evaluation focused on 16 major cities and on identifying source sectors that contributed above 5 %. Boundary conditions have a large impact on summer and winter ozone in Europe and on summer PM2.5, but they are only a minor contributor to winter PM2.5. Biogenic emissions are important for summer ozone and PM2.5. The important anthropogenic sectors for summer ozone are transportation (both on-road and non-road), energy production and conversion, and industry. In two of the 16 cities, solvent and product also contributed above 5 % to summertime ozone. For summertime PM2.5, the important anthropogenic source sectors are energy, transportation, industry, and agriculture. Residential wood combustion is an important anthropogenic sector in winter for PM2.5 over

  5. Seismic source inversion using Green's reciprocity and a 3-D structural model for the Japanese Islands

    Science.gov (United States)

    Simutė, S.; Fichtner, A.

    2015-12-01

    We present a feasibility study for seismic source inversions using a 3-D velocity model for the Japanese Islands. The approach involves numerically calculating 3-D Green's tensors, made efficient by exploiting Green's reciprocity. The rationale for 3-D seismic source inversion has several aspects. For structurally complex regions, such as the Japan area, it is necessary to account for 3-D Earth heterogeneities to prevent unknown structure from polluting source solutions. In addition, earthquake source characterisation can serve as a means to delineate existing faults. Source parameters obtained for more realistic Earth models can then facilitate improvements in seismic tomography and early warning systems, which are particularly important for seismically active areas such as Japan. We have created a database of numerically computed 3-D Green's tensors, obtained via reciprocity, for a 40° × 40° × 600 km region around the Japanese Archipelago for >150 broadband stations. For this we used a regional 3-D velocity model recently obtained from full waveform inversion. The model includes attenuation and radial anisotropy and generally explains seismic waveform data well for periods between 10 and 80 s. The aim is to perform source inversions using the database of 3-D Green's tensors. As preliminary steps, we present initial concepts addressing issues at the basis of our approach. We first investigate to what extent Green's reciprocity holds in a discrete domain. Given the substantial volume of computed Green's tensors, we address storage requirements and file formats. We discuss the importance of the initial source model, as an intelligent choice can substantially reduce the search volume. Possibilities to perform a Bayesian inversion and ways to move to finite source inversion are also explored.

  6. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.

  7. Railway source models for integration in the new European noise prediction method proposed in Harmonoise

    NARCIS (Netherlands)

    Talotte, C.; Stap, P. van der; Ringheim, M.; Dittrich, M.G.; Zhang, X.; Stiebel, D.

    2006-01-01

    The purpose of the Harmonoise European project is to provide an engineering model for the propagation of road and rail traffic noise which, for better accuracy than existing models, requires distinguishing source output from propagation. In that context, the purpose of work package 1.2 of

  9. Analysis of source term modeling for low-level radioactive waste performance assessments

    Energy Technology Data Exchange (ETDEWEB)

    Icenhour, A.S.

    1995-03-01

    Site-specific radiological performance assessments are required for the disposal of low-level radioactive waste (LLW) at both commercial and US Department of Energy facilities. This work explores source term modeling of LLW disposal facilities using two state-of-the-art computer codes, SOURCE1 and SOURCE2. An overview of the performance assessment methodology is presented, and the basic processes modeled in the SOURCE1 and SOURCE2 codes are described. Comparisons are made between the two advective models for a variety of radionuclides, transport parameters, and waste-disposal technologies. These comparisons show that, in general, the zero-order model predicts undecayed cumulative fractions leached that are slightly greater than or equal to those of the first-order model. For long-lived radionuclides, results from the two models eventually reach the same value. By contrast, for short-lived radionuclides, the zero-order model predicts a slightly higher undecayed cumulative fraction leached than does the first-order model. A new methodology, based on sensitivity and uncertainty analyses, is developed for predicting intruder scenarios. This method is demonstrated for ¹³⁷Cs in a tumulus-type disposal facility. The sensitivity and uncertainty analyses incorporate input-parameter uncertainty into the evaluation of a potential time of intrusion and the remaining radionuclide inventory. Finally, conclusions from this study are presented, and recommendations for continuing work are made.
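
    A toy comparison of the two advective release models makes the reported ordering concrete. This is an illustrative sketch, not the SOURCE1/SOURCE2 implementation, and the rate constant is arbitrary.

    ```python
    import numpy as np

    t = np.linspace(0.0, 2000.0, 2001)   # time (years)
    k = 1.0 / 300.0                      # assumed leach-rate constant (1/yr)

    f_zero = np.minimum(k * t, 1.0)      # zero-order: linear release until depleted
    f_first = 1.0 - np.exp(-k * t)       # first-order: exponential approach to 1

    # The zero-order fraction is always >= the first-order fraction, and both
    # converge to the same value at long times, matching the reported behaviour.
    assert np.all(f_zero >= f_first - 1e-12)
    ```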

  10. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  11. Spatiotemporal noise covariance model for MEG/EEG data source analysis

    CERN Document Server

    Plis, S M; Jun, S C; Pare-Blagoev, J; Ranken, D M; Schmidt, D M; Wood, C C

    2005-01-01

    A new method for approximating the spatiotemporal noise covariance for use in MEG/EEG source analysis is proposed. Our approach extends a parameterized one-pair approximation, consisting of a Kronecker product of a temporal covariance and a spatial covariance, first into an unparameterized one-pair approximation and then into a multi-pair approximation. These models are motivated by the need to better describe correlated background noise and to make the estimation of these models more efficient. The effects of the different noise covariance models are compared using a multi-dipole inverse algorithm and simulated data consisting of empirical MEG background data as noise and simulated dipole sources.
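
    The structure of the one-pair and multi-pair approximations is simple to express in code. The sketch below uses random symmetric positive-definite factors purely for illustration; the dimensions and pair count are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_s, n_t, n_pairs = 16, 25, 3        # sensors, time samples, Kronecker pairs (toy)

    def random_spd(n):
        a = rng.normal(size=(n, n))
        return a @ a.T / n + np.eye(n)   # symmetric positive-definite matrix

    S = [random_spd(n_s) for _ in range(n_pairs)]   # spatial covariances
    T = [random_spd(n_t) for _ in range(n_pairs)]   # temporal covariances

    cov_one_pair = np.kron(T[0], S[0])                             # C = T (x) S
    cov_multi_pair = sum(np.kron(Tk, Sk) for Tk, Sk in zip(T, S))  # C = sum_k Tk (x) Sk
    ```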

  12. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels... pursued. The effect of changing the room's material properties was studied in relation to turning the source around 180 degrees and on the range of acoustic parameters from the four and thirteen beams. As the room becomes increasingly diffuse, the importance of the modeled directivity decreases when... using computer modeling. [Work supported by the National Science Foundation.]...

  13. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2015-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009–2010 when... ...are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic equivalent potential field sources (monopoles) arranged in an icosahedron grid at a depth of 100 km below the surface. The corresponding model parameters...

  14. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    Science.gov (United States)

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

    As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than traditional models such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The analyses were carried out using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model.

  15. Bayesian Inference of Seismic Sources Using a 3-D Earth Model for the Japanese Islands Region

    Science.gov (United States)

    Simutė, Saulė; Fichtner, Andreas

    2017-04-01

    Earthquake source inversion is an established problem in seismology. Nevertheless, one-dimensional Earth models are commonly used to compute synthetic data in point-source as well as finite-fault inversions. Reliance on simplified Earth models limits the exploitable information to longer periods and, as such, contributes to the notorious non-uniqueness of finite-fault models. Failure to properly account for Earth structure means that inaccuracies in the Earth model can map into and pollute the earthquake source solutions. To tackle these problems we construct a full-waveform 3-D Earth model for the Japanese Islands region and infer earthquake source parameters in a probabilistic way using numerically computed 3-D Green's functions. Our model explains data from earthquakes not used in the inversion significantly better than the initial model in the period range of 20-80 s. This indicates that the model is not over-fit and may thus be used for improved earthquake source inversion. To solve the forward problem, we pre-compute and store Green's functions with the spectral-element solver SES3D for all potential source-receiver pairs. With this Green's function database, the forward problem of obtaining displacements reduces to a linear combination of the strain Green's tensor scaled by the moment tensor elements. We invert for ten model parameters: six moment tensor elements, three location parameters, and the time of the event. The modest number of model parameters and the fast forward problem allow us to infer the unknowns using Bayesian Markov chain Monte Carlo, which yields marginal posterior distributions for every model parameter. The Monte Carlo algorithm is validated against analytical solutions for the linear test case. We perform the inversions using real data in the Japanese Islands region and assess the quality of the solutions by comparing the obtained results with those from existing 1-D catalogues.
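
    The linear forward step lends itself to a one-line implementation. The sketch below assumes a strain Green's tensor database laid out as (time, i, j, k); the array layout, names and moment tensor values are assumptions for illustration.

    ```python
    import numpy as np

    def synthetics(H, M):
        # u_i(t) = sum_{jk} M_jk H_ijk(t): moment-tensor-weighted combination
        # of precomputed strain Green's tensors for one source-receiver pair.
        return np.einsum("tijk,jk->ti", H, M)

    M = 1e17 * np.array([[0.0, 1.0, 0.0],    # illustrative strike-slip moment tensor
                         [1.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0]])
    H = np.zeros((1200, 3, 3, 3))            # placeholder database entry (n_t, i, j, k)
    u = synthetics(H, M)                     # (n_t, 3) displacement seismogram
    ```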

  16. Designing Open Source Computer Models for Physics by Inquiry using Easy Java Simulation

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    The Open Source Physics community has created hundreds of physics computer models (Wolfgang Christian, Esquembre, & Barbato, 2011; F. K. Hwang & Esquembre, 2003), which are mathematical computational representations of real-life physics phenomena. Since the source code is available and can be modified and redistributed under a Creative Commons Attribution licence or other compatible copyrights like the GNU General Public License (GPL), educators can customize (Wee & Mak, 2009) these models for more targeted productive (Wee, 2012) activities for their classroom teaching and redistribute them to benefit all humankind. In this interactive event, we will share the basics of using the free authoring toolkit called Easy Java Simulation (W. Christian, Esquembre, & Mason, 2010; Esquembre, 2010) so that participants can modify the open source computer models for their own learning and teaching needs. These computer models have the potential to provide the experience and context essential for deepening students' c...

  17. A GIS Based Variable Source Area Model for Large-scale Basin Hydrology

    Directory of Open Access Journals (Sweden)

    Rajesh Vijaykumar Kherde

    2014-05-01

    A geographic information system-based rainfall-runoff model that simulates variable source area runoff using topographic features of the basin is presented. The model simulates the flow processes on a daily time step and has four nonlinear stores: an interception store, a soil moisture store, a channel store and a groundwater store. The source area fraction is modelled as a function of antecedent soil moisture, net rainfall and pore capacity raised to the power of the areal average topographic index. The source area fraction is used in conjunction with the topographic index to develop linear relations for runoff, infiltration and interflow. An exponential relation is developed for lower-zone evapotranspiration, and nonlinear exponential relations are proposed to model macropore flow and base flow.
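
    The quantities involved can be sketched with TOPMODEL-type forms. The functional shapes below are assumptions that only loosely mimic the paper's relation (pore capacity raised to the areal-average index), not its actual equations.

    ```python
    import numpy as np

    def topographic_index(upslope_area, slope_rad):
        # Classic TOPMODEL wetness index, ln(a / tan(beta)).
        return np.log(upslope_area / np.tan(slope_rad))

    def source_area_fraction(soil_moisture, net_rain, pore_capacity, mean_ti):
        # Assumed form: saturated source area grows nonlinearly with wetness,
        # with the areal-average topographic index as the exponent.
        wetness = np.clip((soil_moisture + net_rain) / pore_capacity, 0.0, 1.0)
        return wetness ** mean_ti
    ```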

  18. Development of a user-friendly interface version of the Salmonella source-attribution model

    DEFF Research Database (Denmark)

    Hald, Tine; Lund, Jan

    ...of questions where the use of a classical quantitative risk assessment model (i.e. transmission models) would be impaired by a lack of data and time limitations. As these models require specialist knowledge, EFSA requested the development of a flexible, user-friendly source attribution model for use... with a user manual, which is also part of this report. Users of the interface are recommended to read this report before starting to use the interface, to become familiar with the model principles and the mathematics behind them, which is required in order to interpret the model results and assess their validity...

  19. Integrating multiple data sources in species distribution modeling: a framework for data fusion.

    Science.gov (United States)

    Pacifici, Krishna; Reich, Brian J; Miller, David A W; Gardner, Beth; Stauffer, Glenn; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A

    2017-03-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches ("Shared," "Correlation," "Covariates") for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three approaches that used the second data source improved out-of-sample predictions relative to a single data source ("Single"). When the information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the information in the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow both data types to be used will maximize the useful information available for estimating species distributions.

  20. Neutron activation analysis: Modelling studies to improve the neutron flux of Americium-Beryllium source

    Energy Technology Data Exchange (ETDEWEB)

    Didi, Abdessamad; Dadouch, Ahmed; Tajmouati, Jaouad; Bekkouri, Hassane [Advanced Technology and Integration System, Dept. of Physics, Faculty of Science Dhar Mehraz, University Sidi Mohamed Ben Abdellah, Fez (Morocco); Jai, Otman [Laboratory of Radiation and Nuclear Systems, Dept. of Physics, Faculty of Sciences, Tetouan (Morocco)

    2017-06-15

    Americium–beryllium (Am-Be; n, γ) is a neutron-emitting source used in various research fields such as chemistry, physics, geology, archaeology, medicine, and environmental monitoring, as well as in the forensic sciences. It is a mobile neutron source (20 Ci activity), yielding a small water-moderated thermal neutron flux. The aim of this study is to develop a model that increases the thermal neutron flux of a source such as Am-Be. The study achieved multiple advantageous results: primarily, it will help us perform neutron activation analysis; it will also give us the opportunity to produce radio-elements with short half-lives. Single-source and multi-source (5 sources) Am-Be experiments were performed within an irradiation facility with a paraffin moderator. The resulting models mainly increase the thermal neutron flux compared to the traditional method with a water moderator.

  1. An attempt to lower sources of systematic measurement error using Hierarchical Generalized Linear Modeling (HGLM).

    Science.gov (United States)

    Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios

    2014-01-01

    The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Last, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments on each occasion.

  2. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN

    2014-01-01

    Based on the pressure-state-response (PSR) framework and the specific agro-environmental issues present in Chongqing, we build an agricultural non-point source pollution assessment index system for the municipality. The system covers three major categories (agricultural system pressure, agro-environmental state, and human response) and comprises 3 criterion-level indicators and 19 individual indicators. The analysis shows that the pressure and response indices tend to increase and decrease roughly linearly, whereas the state and composite indices fluctuate strongly and in a similar way, mainly reflecting the elimination of pressures and impacts and the increasing impact of agricultural non-point source pollution.

  3. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field... The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model... Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available...

  4. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. Exploiting MPC's ability to include system constraints directly, the load current, the impedance-network inductor current, the capacitor voltage, the switching frequency and the transient response are all regulated subject to the constraints of this control method. The quality of the output waveform, the stability of the impedance network, the level constraint under variable switching frequency and the robustness of the transient response are obtained at the same time with the formulated Z-source NPC inverter network model. Steady-state and transient simulation results of the MPC are presented, showing the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...
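
    A finite-control-set flavour of MPC illustrates the mechanics: enumerate candidate switching states, predict the controlled current one step ahead with a discrete load model, and apply the state minimizing a cost. The single-phase sketch below is generic; the parameters, the reduced candidate voltage set and the cost weighting are assumptions, not the paper's design.

    ```python
    import numpy as np

    R, Lf, Ts = 2.0, 5e-3, 50e-6        # load R (ohm), filter L (H), sample time (s)
    v_cand = np.array([-400.0, -200.0, 0.0, 200.0, 400.0])  # assumed NPC voltage levels

    def predict_current(i_k, v_k):
        # Forward-Euler discretisation of an R-L load: i[k+1] = f(i[k], v[k]).
        return (1.0 - R * Ts / Lf) * i_k + (Ts / Lf) * v_k

    def best_switch_state(i_k, i_ref, last_idx, w_sw=0.02):
        # Cost = current-tracking error + penalty on switching effort.
        cost = [abs(i_ref - predict_current(i_k, v)) + w_sw * abs(j - last_idx)
                for j, v in enumerate(v_cand)]
        return int(np.argmin(cost))
    ```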

  5. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    Science.gov (United States)

    Kother, L. K.; Hammer, M. D.; Finlay, C. C.; Olsen, N.

    2014-12-01

    We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field data are utilized at all latitudes. Estimates of core and large-scale magnetospheric sources are removed from the satellite measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model regularization (either quadratic or maximum entropy) and Huber weighting. Data error covariance matrices are implemented, accounting for the dependence of data error variances on quasi-dipole latitudes. Results show good consistency with the CM5 and MF7 models for spherical harmonic degrees up to n = 95. Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution, investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available. To illustrate this possibility, we present preliminary results from a case study combining satellite measurements and local airborne scalar magnetic measurements of the Norwegian coastline.
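
    The estimation step can be sketched compactly. Below is a minimal, generic IRLS loop with Huber weights and quadratic regularization; the names (G, d for the design matrix and data vector) and all numerical settings are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def irls_huber(G, d, delta=1.5, n_iter=10, reg=1e-2):
        """Iteratively reweighted least squares with Huber weights (sketch)."""
        n_par = G.shape[1]
        m = np.linalg.solve(G.T @ G + reg * np.eye(n_par), G.T @ d)  # LS start
        for _ in range(n_iter):
            r = d - G @ m
            s = 1.4826 * np.median(np.abs(r)) + 1e-12    # robust residual scale
            w = np.minimum(1.0, delta / np.abs(r / s))   # Huber weights in [0, 1]
            GW = G * w[:, None]                          # row-weighted design matrix
            m = np.linalg.solve(GW.T @ G + reg * np.eye(n_par), GW.T @ d)
        return m
    ```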

  6. Modeling nonpoint source nitrate contamination and associated uncertainty in groundwater of U.S. regional aquifers

    Science.gov (United States)

    Gurdak, J. J.; Lujan, C.

    2009-12-01

    Nonpoint source nitrate contamination in groundwater is spatially variable and can result in elevated nitrate concentrations that threaten drinking-water quality in many aquifers of the United States. Improved modeling approaches are needed to quantify the spatial controls on nonpoint source nitrate contamination and the associated uncertainty of predictive models. As part of the U.S. Geological Survey National Water Quality Assessment Program, logistic regression models were developed to predict nitrate concentrations greater than background in recently recharged (less than 50 years) groundwater in selected regional aquifer systems of the United States, including the Central Valley, California Coastal Basins, Basin and Range, Floridan, Glacial, Coastal Lowlands, Denver Basin, High Plains, North Atlantic Coastal Plain, and Piedmont aquifer systems. The models were used to evaluate the spatial controls of climate, soils, land use, hydrogeology, geochemistry, and water-quality conditions on nitrate contamination. The novel Raster Error Propagation Tool (REPTool) was used to estimate error propagation and prediction uncertainty in the predictive nitrate models and to determine an approach to reduce uncertainty in future model development. REPTool consists of public-domain, Python-based packages that implement Latin hypercube sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the prediction uncertainty of the model output. The presented nitrate models, maps, and uncertainty analysis provide important tools for water-resource managers of regional groundwater systems to identify likely areas of, and the spatial controls on, nonpoint source nitrate contamination in groundwater.
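
    The error-propagation idea can be illustrated in a few lines: perturb uncertain predictors with Latin hypercube samples and track the spread of the logistic-regression prediction. The coefficients, nominal values and error ranges below are placeholders, not REPTool's configuration.

    ```python
    import numpy as np
    from scipy.stats import norm, qmc

    beta = np.array([-1.0, 2.5, 1.8])       # intercept + two fitted coefficients (toy)
    x_nom = np.array([0.4, 0.6])            # nominal (scaled) predictor values
    x_sig = np.array([0.05, 0.10])          # assumed predictor error std devs

    u = qmc.LatinHypercube(d=2, seed=0).random(1000)   # stratified samples in [0, 1)
    x = x_nom + norm.ppf(u) * x_sig                    # mapped to Gaussian errors

    # Propagate through the logistic model: P(nitrate > background).
    p = 1.0 / (1.0 + np.exp(-(beta[0] + x @ beta[1:])))
    print(f"prediction: {p.mean():.3f} +/- {p.std():.3f}")
    ```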

  7. The error source analysis of oil spill transport modeling:a case study

    Institute of Scientific and Technical Information of China (English)

    LI Yan; ZHU Jiang; WANG Hui; KUANG Xiaodi

    2013-01-01

    Numerical modeling is an important tool to study and predict the transport of oil spills. However, the accuracy of numerical models is not always good enough to provide reliable information on oil spill transport, so it is necessary to analyze and identify the major error sources of the models. A case study was conducted to analyze the error sources of a three-dimensional oil spill model used operationally for oil spill forecasting at the National Marine Environmental Forecasting Center (NMEFC) of the State Oceanic Administration, China. On June 4, 2011, oil spilled from the seabed into seawater in the Penglai 19-3 region, the largest offshore oil field of China, and polluted an area of thousands of square kilometers in the Bohai Sea. Satellite remote sensing images were collected to locate the oil slicks. By performing a series of model sensitivity experiments with different wind and current forcings and comparing the model results with the satellite images, the major errors of the long-term simulation of oil spill transport were identified as coming from the wind fields and the wind-induced surface currents. An inverse model was developed to estimate the temporal variability of the emission intensity at the oil spill source, which revealed the importance of an accurate emission time function at the oil spill source.

  8. An attribute recognition model based on entropy weight for evaluating the quality of groundwater sources

    Institute of Scientific and Technical Information of China (English)

    CHEN Suo-zhong; WANG Xiao-jing; ZHAO Xiu-jun

    2008-01-01

    In our study, entropy weight coefficients based on Shannon entropy were determined for an attribute recognition model used to assess the quality of groundwater sources. The model follows the theory previously proposed by Chen Q S. First, we establish the attribute space matrix and determine the weights based on Shannon entropy theory; second, we calculate the attribute measures; third, we evaluate them against the confidence criterion and the score criterion; finally, an application example is given. The results show that the water quality of the groundwater sources for the city reaches the grade II or III standard. There is no pollution that obviously exceeds the standard, and the water can meet people's needs. The results from this model are in basic agreement with the observed situation and with a set pair analysis (SPA) model.
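
    The entropy-weighting step follows the standard Shannon-entropy recipe. The sketch below assumes a non-negative attribute matrix X with alternatives in rows and indicators in columns; it illustrates the general method, not the paper's exact formulation.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Shannon-entropy weights for an (alternatives x indicators) matrix."""
        P = X / X.sum(axis=0)                    # normalize each indicator column
        n = X.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            e = -np.nansum(P * np.log(P), axis=0) / np.log(n)  # entropy per indicator
        d = 1.0 - e                              # degree of diversification
        return d / d.sum()                       # entropy weights summing to 1
    ```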

  9. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories......: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...

  10. Lossless Dynamic Models of the Quasi-Z-Source Converter Family

    Science.gov (United States)

    Vinnikov, Dmitri; Husev, Oleksandr; Roasto, Indrek

    2011-01-01

    This paper is devoted to the quasi-Z-source (qZS) converter family. Recently, the qZS-converters have attracted attention because of their specific properties of voltage boost and buck functions with a single switching stage, which could be especially beneficial in renewable energy applications. As main representatives of the qZS-converter family, the traditional quasi-Z-source inverter as well as two novel extended boost quasi-Z-source inverters are discussed. Lossless dynamic models of these topologies are presented and analyzed.

  11. Tracing the sources of human salmonellosis: a multi-model comparison of phenotyping and genotyping methods.

    Science.gov (United States)

    Mughini-Gras, Lapo; Smid, Joost; Enserink, Remko; Franz, Eelco; Schouls, Leo; Heck, Max; van Pelt, Wilfrid

    2014-12-01

    Salmonella source attribution is usually performed using frequency-matched models, such as the (modified) Dutch and Hald models, based on phenotyping data, i.e. serotyping, phage typing, and antimicrobial resistance profiling. However, for practical and economic reasons, genotyping methods such as Multi-locus Variable Number of Tandem Repeats Analysis (MLVA) are gradually replacing traditional phenotyping of salmonellas beyond the serovar level. As MLVA-based source attribution of human salmonellosis using frequency-matched models is problematic due to the high variability of the genetic targets investigated, other models need to be explored. Using a comprehensive data set from the Netherlands in 2005-2013, this study aimed at attributing sporadic and domestic cases of Salmonella Typhimurium/4,[5],12:i:- and Salmonella Enteritidis to four putative food-producing animal sources (pigs, cattle, broilers, and layers/eggs) using the modified Dutch and Hald models (based on sero/phage typing data) in comparison with a widely applied population genetics model, the asymmetric island model (AIM), supplied with MLVA data. This allowed us to compare model outcomes and to corroborate whether MLVA-based Salmonella source attribution using the AIM is able to provide sound, comparable results. All three models provided very similar results, confirming once more that most S. Typhimurium/4,[5],12:i:- and S. Enteritidis cases are attributable to pigs and layers/eggs, respectively. We concluded that MLVA-based source attribution using the AIM is a feasible option, at least for S. Typhimurium/4,[5],12:i:- and S. Enteritidis. Enough information seems to be contained in the MLVA profiles to trace the sources of human salmonellosis even in the presence of imperfect temporal overlap between human and source isolates. Besides Salmonella, the AIM might also be applicable to other pathogens that do not always comply with clonal models. This would add further value to current surveillance

  12. Numerical modeling of the source mechanism for microseismic events induced during fluid injection

    Science.gov (United States)

    Zhao, X.; Reyes-Montes, J.; Young, R.

    2013-12-01

    Passive microseismic (MS) monitoring is now common practice for imaging and real-time feedback of geological reservoir stimulation operations in a number of energy sectors. MS locations provide first-hand information on the fracture network geometry and propagation; however, a full understanding of the fundamental processes of induced fracturing requires the use of additional information contained in the recorded waveforms. One of the current challenges is robustly solving the focal mechanism of recorded MS events from a sparse array, such as a single-borehole linear array. In this study, a synthetic rock mass (SRM) model based on the distinct element method was developed to model typical source fracturing modes associated with reservoir stimulation, including shear dislocation (strike-slip and dip-slip), dilation (tensile), and explosion. The body forces directly exerted by source particles were monitored using linear sets of particle arrays simulating the sensors in field operations. The disturbance at each receiver (a particle in the model) was recorded in three orthogonal directions to obtain 3-component waveforms. The model was validated by analysing source mechanisms using a moment tensor inversion of P-wave time-domain amplitudes. The fault plane solution was also calculated from the distribution of P-wave first-break polarities. The moment tensor was then decomposed into eigenvalues and eigenvectors representing the principal axes (pressure, null and tension) of the source. Percentage isotropic, double-couple and compensated linear vector dipole components were calculated, along with orientations of the fault plane solution. The inverted moment and fault plane solutions show failure modes similar to the source motion applied to the fault plane. This shows that the modelling approach can be used to combine different basic source modes to build a database that provides a tool to directly compare modelled and field data in order to probabilistically estimate a feasible focal mechanism.
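
    The isotropic/double-couple/CLVD split mentioned above is a standard eigenvalue decomposition. As a minimal illustration (one common convention, in the style of Jost and Herrmann, and not the study's own code), a numpy sketch:

        import numpy as np

        def decompose_moment_tensor(M):
            """Split a symmetric 3x3 moment tensor into isotropic (ISO),
            double-couple (DC) and CLVD percentages."""
            iso = np.trace(M) / 3.0                          # isotropic part
            dev_eig = np.linalg.eigvalsh(M - iso * np.eye(3))
            dev_eig = dev_eig[np.argsort(np.abs(dev_eig))]   # |m1*|<=|m2*|<=|m3*|
            eps = -dev_eig[0] / abs(dev_eig[2]) if dev_eig[2] else 0.0
            p_iso = 100.0 * iso / (abs(iso) + abs(dev_eig[2]))
            p_clvd = 2.0 * eps * (100.0 - abs(p_iso))
            p_dc = 100.0 - abs(p_iso) - abs(p_clvd)
            return p_iso, p_dc, p_clvd

        # pure strike-slip double couple: expect (0, 100, 0)
        M = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
        print(decompose_moment_tensor(M))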

  13. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling...
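
    MixSIAR itself is a hierarchical Bayesian model fitted by MCMC; purely to illustrate the underlying mass-balance idea, the sketch below solves the deterministic core, a mixture expressed as source signatures weighted by proportions on the simplex, with invented tracer values:

        import numpy as np
        from scipy.optimize import minimize

        # rows: tracers; columns: candidate sources (hypothetical values)
        A = np.array([[12.0,  4.0,  9.0],
                      [ 0.8,  2.1,  1.4],
                      [35.0, 18.0, 27.0]])
        b = np.array([9.5, 1.3, 28.0])      # tracer values in the mixture

        # proportions p >= 0 with sum(p) = 1 minimizing ||A p - b||^2
        cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},)
        res = minimize(lambda p: np.sum((A @ p - b) ** 2),
                       x0=np.full(3, 1.0 / 3.0),
                       bounds=[(0.0, 1.0)] * 3, constraints=cons)
        print("source proportions:", res.x.round(3))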

  14. An equivalent source method for modelling the global lithospheric magnetic field

    Science.gov (United States)

    Kother, Livia; Hammer, Magnus D.; Finlay, Christopher C.; Olsen, Nils

    2015-10-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when it was at its lowest altitude and solar activity was quiet. All three components of the vector field data are utilized at all available latitudes. Estimates of core and large-scale magnetospheric sources are removed from the measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic equivalent potential field sources (monopoles) arranged in an icosahedron grid at a depth of 100 km below the surface. The corresponding model parameters are estimated using an iteratively reweighted least-squares algorithm that includes model regularization (either quadratic or maximum entropy) and Huber weighting. Data error covariance matrices are implemented, accounting for the dependence of data variances on quasi-dipole latitude. The resulting equivalent source lithospheric field models show a degree correlation to MF7 greater than 0.7 out to spherical harmonic degree 100. Compared to the quadratic regularization approach, the entropy regularized model possesses notably lower power above degree 70 and a lower number of degrees of freedom despite fitting the observations to a very similar level. Advantages of our equivalent source method include its local nature, the possibility for regional grid refinement and the production of local power spectra, the ability to implement constraints and regularization depending on geographical position, and the ease of transforming the equivalent source values into spherical harmonics.
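
    In its simplest quadratic-regularization form, estimating the monopole strengths reduces to a damped least-squares solve. The toy sketch below uses a 2-D profile geometry and synthetic data, not CHAMP observations, icosahedral grids, Huber weighting or IRLS:

        import numpy as np

        rng = np.random.default_rng(0)
        x_obs = np.linspace(0.0, 100.0, 60)     # observation profile (km)
        x_src = np.linspace(0.0, 100.0, 30)     # equivalent-source grid (km)
        d = 10.0                                 # source burial depth (km)

        # design matrix: potential of a unit monopole, G_ij = 1/|r_i - r_j|
        G = 1.0 / np.hypot(x_obs[:, None] - x_src[None, :], d)

        q_true = rng.normal(size=x_src.size)     # synthetic source strengths
        data = G @ q_true + 0.01 * rng.normal(size=x_obs.size)

        lam = 1e-3                               # quadratic (Tikhonov) damping
        q_est = np.linalg.solve(G.T @ G + lam * np.eye(x_src.size), G.T @ data)
        print("rms recovery error:", np.sqrt(np.mean((q_est - q_true) ** 2)))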

  15. Physical conditions for sources radiating a cosh-Gaussian model beam

    Institute of Scientific and Technical Information of China (English)

    LI Jia

    2011-01-01

    Based on the coherence theory of diffracted optical fields and the model for partially coherent beams, analytical expressions for the cross-spectral density and the irradiance spectral density in the far zone are derived. Utilizing the theoretical model of radiation from secondary planar sources, the physical conditions for sources generating a cosh-Gaussian (CHG) beam are investigated. Analytical results demonstrate that the parametric conditions strongly depend on the coherence properties of the source. When the source plane is nearly fully coherent, the conditions are the same as those for fundamental Gaussian beams; when the source plane is partially coherent or nearly incoherent, the conditions are the same as those for Gaussian-Schell model beams. The results also indicate that varying the cosine parameters has no influence on the conditions. These results may find application in, for example, the modulation of cosh-Gaussian beams and the design of source beam parameters.

  16. SPARROW models used to understand nutrient sources in the Mississippi/Atchafalaya River Basin

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2013-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. To describe where and from what sources those loads originate, SPAtially Referenced Regression On Watershed attributes (SPARROW) models were constructed for the MARB using geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and calibration sites throughout the MARB. Previous studies found that highest N and P yields were from the north-central part of the MARB (Corn Belt). Based on the MARB SPARROW models, highest N yields were still from the Corn Belt but centered over Iowa and Indiana, and highest P yields were widely distributed throughout the center of the MARB. Similar to that found in other studies, agricultural inputs were found to be the largest N and P sources throughout most of the MARB: farm fertilizers were the largest N source, whereas farm fertilizers, manure, and urban inputs were dominant P sources. The MARB models enable individual N and P sources to be defined at scales ranging from SPARROW catchments (∼50 km2) to the entire area of the MARB. Inputs of P from WWTPs and urban areas were more important than found in most other studies. Information from this study will help to reduce nutrient loading from the MARB by providing managers with a description of where each of the sources of N and P are most important, thus providing a basis for prioritizing management actions and ultimately reducing the extent of Gulf hypoxia.

  17. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry of radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing power of modern computers, it has become possible to treat a patient more precisely. However, long simulation times are required to reduce the statistical uncertainty of the results. When particles are generated from the cobalt source in a simulation, many of them are cut off by the collimation, so accurate simulation is time-consuming. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on all 201 channels and compared measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than with the original source, and there was no statistically significant difference in the simulated results.
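
    The virtual-source idea is to pay the full transport cost once, store the surviving particles as a phase space, and resample it in later runs. The two-stage pattern is sketched below; the stored distributions are invented placeholders, not the actual Gamma Knife channel phase space:

        import numpy as np

        rng = np.random.default_rng(1)

        # stage 1 (expensive, done once): transport through one channel and
        # store the survivors; the arrays here are illustrative placeholders
        n = 100_000
        np.savez("virtual_source.npz",
                 energy=rng.choice([1.17, 1.33], size=n),       # 60Co lines (MeV)
                 position=rng.normal(0.0, 0.4, size=(n, 2)),    # mm
                 direction=rng.normal(0.0, 0.01, size=(n, 2)))  # rad

        # stage 2 (cheap, every run): resample the stored phase space instead
        # of re-simulating the source capsule and collimation
        ps = np.load("virtual_source.npz")
        idx = rng.integers(0, n, size=10_000)
        print("sampled mean energy [MeV]:", ps["energy"][idx].mean().round(3))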

  18. Evaluation of multiple-sphere head models for MEG source localization

    Energy Technology Data Exchange (ETDEWEB)

    Lalancette, M; Cheyne, D [Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario M5G 1X8 (Canada); Quraan, M, E-mail: marc.lalancette@sickkids.ca, E-mail: douglas.cheyne@utoronto.ca [Krembil Neuroscience Centre, Toronto Western Research Institute, University Health Network, Toronto, Ontario M5T 2S8 (Canada)

    2011-09-07

    Magnetoencephalography (MEG) source analysis has largely relied on spherical conductor models of the head to simplify forward calculations of the brain's magnetic field. Multiple- (or overlapping, local) sphere models, where an optimal sphere is selected for each sensor, are considered an improvement over single-sphere models and are computationally simpler than realistic models. However, there is limited information available regarding the different methods used to generate these models and their relative accuracy. We describe a variety of single- and multiple-sphere fitting approaches, including a novel method that attempts to minimize the field error. An accurate boundary element method simulation was used to evaluate the relative field measurement error (12% on average) and dipole fit localization bias (3.5 mm) of each model over the entire brain. All spherical models can contribute in the order of 1 cm to the localization bias in regions of the head that depart significantly from a sphere (inferior frontal and temporal). These spherical approximation errors can give rise to larger localization differences when all modeling effects are taken into account and with more complex source configurations or other inverse techniques, as shown with a beamformer example. Results differed noticeably depending on the source location, making it difficult to recommend a fitting method that performs best in general. Given these limitations, it may be advisable to expand the use of realistic head models.

  20. Blind source separation of ship-radiated noise based on generalized Gaussian model

    Institute of Scientific and Technical Information of China (English)

    Kong Wei; Yang Bin

    2006-01-01

    When the distributions of the sources cannot be estimated accurately, ICA algorithms fail to separate the mixtures blindly. The generalized Gaussian model (GGM) is introduced into the ICA algorithm since it can easily model the non-Gaussian statistical structure of different source signals. By inferring only one parameter, a wide class of statistical distributions can be characterized. Using the maximum likelihood (ML) approach and natural gradient descent, the learning rules of blind source separation (BSS) based on the GGM are presented. The experiment on ship-radiated noise demonstrates that the GGM can model the distributions of ship-radiated noise and sea noise efficiently, and the learning rules based on the GGM give more successful separation results than several conventional methods such as higher-order cumulants and Gaussian mixture density functions.
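
    A rough sketch of the ML/natural-gradient rule described above, with the GGM shape parameters fixed by hand (the paper infers them from the data) and purely synthetic signals; convergence depends on the learning rate and the chosen shapes:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 20_000
        # super-Gaussian (Laplacian) and sub-Gaussian (uniform) sources, mixed
        S = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
        A = np.array([[1.0, 0.6], [0.4, 1.0]])
        X = A @ S

        beta = np.array([[1.0], [4.0]])   # assumed GGM shape per output channel
        W, lr = np.eye(2), 0.02
        for _ in range(1000):
            Y = W @ X
            phi = np.sign(Y) * np.abs(Y) ** (beta - 1.0)   # GGM score function
            W += lr * (np.eye(2) - phi @ Y.T / n) @ W      # natural gradient step

        print("W @ A (close to a scaled permutation):\n", W @ A)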

  1. EQRM: An open-source event-based earthquake risk modeling program

    Science.gov (United States)

    Robinson, D. J.; Dhu, T.; Row, P.

    2007-12-01

    Geoscience Australia's Earthquake Risk Model (EQRM) is an event-based tool for earthquake scenario ground motion and scenario loss modeling as well as probabilistic seismic hazard (PSHA) and risk (PSRA) modeling. It has been used to conduct PSHA and PSRA for many of Australia's largest cities, and it has become an important tool for the emergency management community, which uses it for scenario response planning. It has the potential to link with earthquake monitoring programs to provide automatic loss estimates from network-recorded events. An open-source alpha-release version of the software is freely available on SourceForge. It can be used for hazard or risk analyses in any region of the world by supplying appropriately formatted input files. Source code is also supplied so advanced users can modify individual components to suit their needs.

  2. A dynamical model for FR II type radio sources with terminated jet activity

    Science.gov (United States)

    Kuligowska, Elżbieta

    2017-02-01

    Context. An extension of the KDA analytical model of FR II-type source evolution, which originally assumes a continuous injection process in the jet-IGM interaction, towards the case of jet termination is presented and briefly discussed. Aims: To describe the dynamical evolution of FR II-type sources predicted with this extended model, hereafter referred to as KDA EXT, and to apply it to selected radio sources. Methods: Following the classical approach based on the source's continuous injection and self-similarity, I propose effective formulae describing the length and luminosity evolution of the lobes during the absence of jet flow, and present the resulting diagrams for these characteristics. Results: Using an algorithm based on the numerical integration of a modified formula for jet power, the KDA EXT model is fitted to three radio galaxies. Their predicted spectra are then compared to the observed spectra, showing that these fits are better than the best spectral fits provided by the original KDA model of FR II-type source dynamical evolution.

  3. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error.

    Science.gov (United States)

    Stenroos, Matti; Hauk, Olaf

    2013-11-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on the amplitudes of lead fields and spatial filter vectors, but the effect on the corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, uncertainty with respect to skull conductivity should not prevent researchers from applying minimum-norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
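
    The MN estimator in question is the Tikhonov-regularized linear inverse operator W = L^T (L L^T + lambda^2 C)^(-1). A self-contained toy version with a random lead field standing in for a realistic BEM forward model:

        import numpy as np

        rng = np.random.default_rng(3)
        n_sens, n_src = 64, 500
        L = rng.normal(size=(n_sens, n_src))         # toy lead field

        C = np.eye(n_sens)                           # noise covariance (whitened)
        lam2 = 0.1 * np.trace(L @ L.T) / n_sens      # regularization strength
        W = L.T @ np.linalg.inv(L @ L.T + lam2 * C)  # minimum-norm inverse operator

        j_true = np.zeros(n_src); j_true[123] = 1.0  # one active source
        y = L @ j_true + 0.05 * rng.normal(size=n_sens)
        print("peak estimate at source index:", int(np.abs(W @ y).argmax()))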

  4. Source rock contributions to the Lower Cretaceous heavy oil accumulations in Alberta: a basin modeling study

    Science.gov (United States)

    Berbesi, Luiyin Alejandro; di Primio, Rolando; Anka, Zahie; Horsfield, Brian; Higley, Debra K.

    2012-01-01

    The origin of the immense oil sand deposits in Lower Cretaceous reservoirs of the Western Canada sedimentary basin is still a matter of debate, specifically with respect to the original in-place volumes and contributing source rocks. In this study, the contributions from the main source rocks were addressed using a three-dimensional petroleum system model calibrated to well data. A sensitivity analysis of source rock definition was performed in the case of the two main contributors, which are the Lower Jurassic Gordondale Member of the Fernie Group and the Upper Devonian–Lower Mississippian Exshaw Formation. This sensitivity analysis included variations of assigned total organic carbon and hydrogen index for both source intervals, and in the case of the Exshaw Formation, variations of thickness in areas beneath the Rocky Mountains were also considered. All of the modeled source rocks reached the early or main oil generation stages by 60 Ma, before the onset of the Laramide orogeny. Reconstructed oil accumulations were initially modest because of limited trapping efficiency. This was improved by defining lateral stratigraphic seals within the carrier system. An additional sealing effect by biodegraded oil may have hindered the migration of petroleum in the northern areas, but not to the east of Athabasca. In the latter case, the main trapping controls are dominantly stratigraphic and structural. Our model, based on available data, identifies the Gordondale source rock as the contributor of more than 54% of the oil in the Athabasca and Peace River accumulations, followed by minor amounts from Exshaw (15%) and other Devonian to Lower Jurassic source rocks. The proposed strong contribution of petroleum from the Exshaw Formation source rock to the Athabasca oil sands is only reproduced by assuming 25 m (82 ft) of mature Exshaw in the kitchen areas, with original total organic carbon of 9% or more.

  5. Source tracking using microbial community fingerprints: Method comparison with hydrodynamic modelling.

    Science.gov (United States)

    McCarthy, D T; Jovanovic, D; Lintern, A; Teakle, I; Barnes, M; Deletic, A; Coleman, R; Rooney, G; Prosser, T; Coutts, S; Hipsey, M R; Bruce, L C; Henry, R

    2017-02-01

    Urban estuaries around the world are experiencing contamination from diffuse and point sources, which increases risks to public health. To mitigate and manage risks posed by elevated levels of contamination in urban waterways, it is critical to identify the primary water sources of contamination within catchments. Source tracking using microbial community fingerprints is one tool that can be used to identify sources. However, results derived from this approach have not yet been evaluated using independent datasets. As such, the key objectives of this investigation were: (1) to identify the major sources of water responsible for bacterial loadings within an urban estuary using microbial source tracking (MST) using microbial communities; and (2) to evaluate this method using a 3-dimensional hydrodynamic model. The Yarra River estuary, which flows through the city of Melbourne in South-East Australia was the focus of this study. We found that the water sources contributing to the bacterial community in the Yarra River estuary varied temporally depending on the estuary's hydrodynamic conditions. The water source apportionment determined using microbial community MST correlated to those determined using a 3-dimensional hydrodynamic model of the transport and mixing of a tracer in the estuary. While there were some discrepancies between the two methods, this investigation demonstrated that MST using bacterial community fingerprints can identify the primary water sources of microorganisms in an estuarine environment. As such, with further optimization and improvements, microbial community MST has the potential to become a powerful tool that could be practically applied in the mitigation of contaminated aquatic systems.

  6. A Unified Impedance Model of Voltage-Source Converters with Phase-Locked Loop Effect

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Harnefors, Lennart; Blaabjerg, Frede

    2016-01-01

    This paper proposes a unified impedance model for analyzing the effect of the Phase-Locked Loop (PLL) on the stability of grid-connected voltage-source converters. In the approach, the dq-frame impedance model is transformed into the stationary αβ-frame by means of complex transfer functions and complex space vectors, which not only predicts the stability impact of the PLL but also reveals its frequency coupling effect in the phase domain. Thus, the impedance models previously developed in the different domains can be unified. Moreover, the impedance shaping effects of the PLL are structurally characterized for the current control in the rotating dq-frame and the stationary αβ-frame. Case studies based on the unified impedance model are presented, which are then verified in time-domain simulations and experiments. The results closely correlate with the impedance-based analysis.
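
    The complex transfer function step maps the 2x2 dq impedance matrix into a direct term and a mirror-frequency coupling term, with the frequency axis shifted by the fundamental. A sketch with an assumed, purely illustrative RL plant follows; for this symmetric plant the coupling term vanishes, whereas PLL dynamics make the dq matrix asymmetric and the coupling nonzero:

        import numpy as np

        w1 = 2 * np.pi * 50.0                        # fundamental (rad/s)

        def Z_dq(s, L=1e-3, R=0.1):
            """Hypothetical dq impedance of an RL plant with cross-coupling."""
            return np.array([[R + s * L, -w1 * L],
                             [w1 * L,    R + s * L]])

        def Z_pm(s):
            """Direct (Zp) and mirror-coupling (Zm) complex impedances."""
            Z = Z_dq(s)
            Zp = (Z[0, 0] + Z[1, 1]) / 2 + 1j * (Z[1, 0] - Z[0, 1]) / 2
            Zm = (Z[0, 0] - Z[1, 1]) / 2 + 1j * (Z[1, 0] + Z[0, 1]) / 2
            return Zp, Zm

        # stationary-frame impedance at 120 Hz: evaluate dq model at s - j*w1
        s = 1j * 2 * np.pi * 120.0
        Zp, Zm = Z_pm(s - 1j * w1)
        print("Z+ =", Zp, "| mirror coupling Z- =", Zm)   # Zm = 0 here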

  7. Open Source Software for Mapping Human Impacts on Marine Ecosystems with an Additive Model

    Directory of Open Access Journals (Sweden)

    Andy Stock

    2016-06-01

    Full Text Available This paper describes an easy-to-use open source software tool implementing a commonly used additive model (Halpern et al., 'Science', 2008) for mapping human impacts on marine ecosystems. The tool has been used to map the potential for cumulative human impacts in Arctic marine waters and can support future human impact mapping projects by (1) making the model easier to use; (2) making updates of model results straightforward when better input data become available; (3) storing input data and information about processing steps in a defined format and thus facilitating data sharing and reproduction of modeling results; (4) supporting basic visualization of model inputs and outputs without the need for advanced technical skills. The tool, called EcoImpactMapper, was implemented in Java and is thus platform-independent. A tutorial, example data, the tool and the source code are available online.
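
    The additive model itself scores each grid cell as I = sum_i sum_j D_i * E_j * mu_ij over stressors i and ecosystems j. EcoImpactMapper is written in Java; a compact numpy rendering with invented layers and vulnerability weights:

        import numpy as np

        rng = np.random.default_rng(4)
        ny, nx, n_stress, n_eco = 50, 50, 3, 2

        D = rng.random((n_stress, ny, nx))                     # stressor intensities (0..1)
        E = (rng.random((n_eco, ny, nx)) > 0.5).astype(float)  # ecosystem presence
        mu = np.array([[1.5, 0.7],                             # vulnerability weights
                       [2.1, 1.2],                             # (stressor x ecosystem)
                       [0.4, 3.0]])

        # cumulative impact per cell: I = sum_i sum_j D_i * E_j * mu_ij
        I = np.einsum('iyx,jyx,ij->yx', D, E, mu)
        print("impact map", I.shape, "max =", I.max().round(2))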

  10. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour [...] by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results.

  11. Open-source direct simulation Monte Carlo chemistry modeling for hypersonic flows

    OpenAIRE

    Scanlon, Thomas J.; White, Craig; Borg, Matthew K.; Palharini, Rodrigo C.; Farbar, Erin; Boyd, Iain D.; Reese, Jason M.; Brown, Richard E

    2015-01-01

    An open source implementation of chemistry modelling for the direct simulation Monte Carlo (DSMC) method is presented. Following the recent work of Bird [1], an approach known as the quantum kinetic (Q-K) method has been adopted to describe chemical reactions in a 5-species air model using DSMC procedures based on microscopic gas information. The Q-K technique has been implemented within the framework of the dsmcFoam code, a derivative of the open source CFD code OpenFOAM. Results for vibration...

  12. Investigating whether it is optimal to make replenishments simultaneously in dual source model

    DEFF Research Database (Denmark)

    Abginehchi, Soheil; Larsen, Christian

    In multiple sourcing, when supplier lead times are stochastic it makes sense to split any replenishment order into several smaller orders to pool lead-time risks. In the literature it is always assumed that these orders are issued simultaneously to the suppliers. Here we let this simultaneousness assumption be relaxed. We study a dual source system with non-identical suppliers and model the problem as a semi-Markov decision model, allowing the decision maker the choice whether to simultaneously issue two orders to both suppliers or to issue the orders to the suppliers at two different times.

  13. Source mask optimization using 3D mask and compact resist models

    Science.gov (United States)

    El-Sewefy, Omar; Chen, Ao; Lafferty, Neal; Meiring, Jason; Chung, Angeline; Foong, Yee Mei; Adam, Kostas; Sturtevant, John

    2016-03-01

    Source Mask Optimization (SMO) has played an important role in technology setup and ground rule definition since the 2x nm technology node. While improvements in SMO algorithms have produced higher quality and more consistent results, the accuracy of the overall solution is critically linked to how faithfully the entire patterning system is modeled, from mask down to substrate. Fortunately, modeling technology has continued to advance to provide greater accuracy in modeling 3D mask effects, 3D resist behavior, and resist phenomena. Specifically, the Domain Decomposition Method (DDM) approximates the 3D mask response as a superposition of edge responses [1]. The DDM can be applied to a sectorized illumination source based on the Hybrid Hopkins-Abbe approximation [2], which provides an accurate and fast solution for the modeling of 3D mask effects and has been widely used in OPC modeling. The implementation of DDM in the SMO flow, however, is more challenging because the shape and intensity of the source, unlike the case in OPC modeling, evolve along the optimization path, which complicates the computation. It is accepted that inadequate pupil sectorization results in reduced accuracy in any application; however, in SMO the required uniformity and density of pupil sampling is higher than in typical OPC and modeling cases. In this paper, we describe a novel method to implement DDM in the SMO flow. The source sectorization is defined by following the universal pixel sizes used in SMO. Fast algorithms are developed to enable computation of edge signals from each fine pixel of the source. In this case, each pixel has accurate information to describe its contribution to imaging and the overall objective function. A more continuous angular spectrum from 3D mask scattering is thus captured, leading to accurate modeling of 3D mask effects throughout source optimization. This method is applied on a 2x nm middle-of-line layer test case. The impact of the 3D mask model accuracy on...

  14. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.
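
    At its core a DPSM field evaluation is a superposition of point-source Green's functions, p(r) = sum_j A_j exp(ikR_j)/R_j. The sketch below evaluates the on-axis field of a piston-like source layer with uniform strengths; a real DPSM computation would instead solve for the equivalent source density from the boundary conditions:

        import numpy as np

        k = 2 * np.pi * 1e6 / 1500.0        # wavenumber, 1 MHz in water (1/m)
        a = 5e-3                             # transducer radius (m)

        # point sources distributed over the circular aperture
        g = np.linspace(-a, a, 21)
        xs, ys = np.meshgrid(g, g)
        keep = xs**2 + ys**2 <= a**2
        src = np.column_stack([xs[keep], ys[keep], np.zeros(keep.sum())])

        def pressure(r, A=1.0):
            """Superpose spherical waves from all point sources at point r."""
            R = np.linalg.norm(r - src, axis=1)
            return np.sum(A * np.exp(1j * k * R) / R)

        z = np.linspace(1e-3, 50e-3, 200)
        p = [abs(pressure(np.array([0.0, 0.0, zi]))) for zi in z]
        print("on-axis pressure maximum in sampled range at z =",
              z[int(np.argmax(p))], "m")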

  15. An emission source inversion model based on satellite data and its application in air quality forecasts

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper aims at constructing an emission source inversion model using a variational processing method and an adaptive nudging scheme for the Community Multiscale Air Quality Model (CMAQ) based on satellite data, to investigate the applicability of high-resolution OMI (Ozone Monitoring Instrument) column concentration data for air quality forecasts over North China. The results show a reasonable consistency and good correlation between the spatial distributions of NO2 from surface and OMI satellite measurements in both winter and summer. Such OMI products may be used to implement integrated variational analysis based on ground observation data. With linear and variational corrections made, the spatial distribution of OMI NO2 clearly revealed more localized characteristics of the NO2 concentration field. With such information, emission sources in the southwest and southeast of North China are found to have greater impacts on air quality in Beijing. When the retrieved emission source inventory based on high-resolution OMI NO2 data was used, the coupled Weather Research and Forecasting CMAQ model (WRF-CMAQ) performed significantly better in forecasting the NO2 concentration level and its tendency, as reflected by the improved consistency between surface-observed and modelled NO2 concentrations. In conclusion, satellite data are particularly important for simulating NO2 concentrations on urban and street-block scales. High-resolution OMI NO2 data are applicable for inverting the NOx emission source inventory, assessing regional pollution status and pollution control strategies, and improving model forecasts on the urban scale.
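
    A common way to realize a nudging-style emission inversion is a mass-balance update that rescales the prior field by the ratio of observed to simulated columns; the paper's exact scheme is more elaborate, so the sketch below (all numbers invented) is only illustrative:

        import numpy as np

        E_prior = np.array([[8.0, 3.0], [1.5, 6.0]])   # prior NOx emissions
        C_omi   = np.array([[6.2, 4.1], [1.4, 9.0]])   # OMI NO2 columns
        C_model = np.array([[7.5, 3.2], [1.5, 6.3]])   # simulated NO2 columns

        alpha = 0.8                                     # nudging factor
        scale = np.clip(C_omi / C_model, 0.5, 2.0)      # damp outliers
        E_post = E_prior * (1.0 + alpha * (scale - 1.0))
        print(E_post.round(2))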

  16. A Reordering Model Using a Source-Side Parse-Tree for Statistical Machine Translation

    Science.gov (United States)

    Hashimoto, Kei; Yamamoto, Hirofumi; Okuma, Hideo; Sumita, Eiichiro; Tokuda, Keiichi

    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.
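
    The central operation, rotating (keeping or swapping) the children of each source parse node to produce a target-side word order, can be sketched in a few lines. The tree and swap probabilities below are invented; in the paper they are learned from word alignments and source-side parse-trees:

        import random

        # toy source-side parse-tree: (label, children); leaves are (word,)
        tree = ("S", [("NP", [("she",)]),
                      ("VP", [("ate",), ("NP", [("an",), ("apple",)])])])
        p_swap = {"S": 0.1, "VP": 0.9, "NP": 0.2}    # hypothetical model

        def reorder(node, rng):
            """Monotone/swap rotation of binary nodes (IST-ITG style)."""
            if len(node) == 1:                        # leaf: a word
                return [node[0]]
            label, children = node
            parts = [reorder(c, rng) for c in children]
            if len(parts) == 2 and rng.random() < p_swap.get(label, 0.0):
                parts.reverse()                       # swap the two subtrees
            return [w for sub in parts for w in sub]

        print(reorder(tree, random.Random(0)))        # e.g. verb-final order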

  17. Cell sources for in vitro human liver cell culture models.

    Science.gov (United States)

    Zeilinger, Katrin; Freyer, Nora; Damm, Georg; Seehofer, Daniel; Knöspel, Fanny

    2016-09-01

    In vitro liver cell culture models are gaining increasing importance in pharmacological and toxicological research. The source of cells used is critical for the relevance and the predictive value of such models. Primary human hepatocytes (PHH) are currently considered to be the gold standard for hepatic in vitro culture models, since they directly reflect the specific metabolism and functionality of the human liver; however, the scarcity and difficult logistics of PHH have driven researchers to explore alternative cell sources, including liver cell lines and pluripotent stem cells. Liver cell lines generated from hepatomas or by genetic manipulation are widely used due to their good availability, but they are generally altered in certain metabolic functions. For the past few years, adult and pluripotent stem cells have been attracting increasing attention, due to their ability to proliferate and to differentiate into hepatocyte-like cells in vitro. However, controlling the differentiation of these cells is still a challenge. This review gives an overview of the major human cell sources under investigation for in vitro liver cell culture models, including primary human liver cells, liver cell lines, and stem cells. The promises and challenges of different cell types are discussed with a focus on the complex 2D and 3D culture approaches under investigation for improving liver cell functionality in vitro. Finally, the specific application options of individual cell sources in pharmacological research or disease modeling are described.

  18. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Y. Chen

    2001-12-19

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More importantly, no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporation into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach to source-term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long...

  19. OpenFLUID: an open-source software environment for modelling fluxes in landscapes

    Science.gov (United States)

    Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc

    2013-04-01

    Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes, including hydrological processes, biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines, developed by different teams. In order to support collaborative work, involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture allowing users to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multi-scale spatial heterogeneities and landscape object connectivity, iii) run and explore simulations in many ways: using the OpenFLUID software interfaces for users (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, which is based on an open source toolchain, including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological networks...

  20. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P. [Scandpower AB, Sundbyberg (Sweden)

    2012-09-15

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)
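
    At each network node the update is Bayes' rule over the pre-calculated source-term classes given a new plant observation. A single-node toy update is shown below (states and probabilities invented; the real BBN chains many such nodes over the accident progression):

        # prior over pre-calculated source-term classes
        priors = {"ST1_early_large": 0.05, "ST2_late_small": 0.25, "ST3_none": 0.70}
        # P(observation | class) for the signal "containment pressure high"
        like = {"ST1_early_large": 0.90, "ST2_late_small": 0.40, "ST3_none": 0.05}

        evidence = sum(priors[s] * like[s] for s in priors)
        posterior = {s: priors[s] * like[s] / evidence for s in priors}
        print({s: round(p, 3) for s, p in posterior.items()})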

  1. Methodological study and application of advanced receptor modeling to airborne particulate sources

    Science.gov (United States)

    Chueinta, Wanna

    Two aspects of air quality management, aerosol mass measurement and pollution source identification, were studied. A beta gauge was developed to determine the particulate mass collected on a filter. Two advanced receptor models were applied to resolve possible sources of pollutants on local and regional scales by use of positive matrix factorization (PMF) and the multilinear engine (ME), respectively. A simple, low-cost beta gauge was designed, constructed, and tested to determine if it provided the necessary performance and reliability in collected aerosol mass measurements. The beta gauge was calibrated and evaluated by experiments with different sized particles. The results showed that the unit provided satisfactory accuracy and precision with respect to the gravimetric method. PMF is a least-squares approach to factor analysis. In this study, PMF was applied to investigate the possible sources of airborne particulate matter (APM) collected at an urban residential area of Bangkok from June 1995 to May 1996 and at a suburban residential area in Pathumthani from September 1993 to August 1994. The data, consisting of fine and coarse fractions, were analyzed separately. The analysis used the robust analysis mode and rotations to produce six source factors for both the fine and coarse fractions at the urban site and five factors for the fine and coarse fractions at the suburban site. Examination of the influence of wind direction showed the correspondence of some specific factors, such as sea salt and vehicle sources, with known area sources. ME is a new algorithm for solving a broad range of multilinear problems. A model was developed for the analysis of spatial patterns and possible sources affecting haze and its visual effects in the southwestern United States. The data from the project Measurement of Haze and Visual Effects (MOHAVE) collected during the late winter and mid-summer of 1992 at the monitoring sites in four states, i.e., California, Arizona, Nevada and Utah...
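
    PMF factors a nonnegative samples-by-species matrix into source contributions and source profiles under per-element uncertainty weighting. As a rough stand-in for that core (without the uncertainty weights and robust mode), plain multiplicative-update nonnegative matrix factorization on synthetic data:

        import numpy as np

        rng = np.random.default_rng(5)
        n_samples, n_species, n_factors = 200, 12, 3
        X = rng.random((n_samples, n_factors)) @ rng.random((n_factors, n_species))

        G = rng.random((n_samples, n_factors))      # source contributions
        F = rng.random((n_factors, n_species))      # source profiles
        for _ in range(500):                        # Lee-Seung updates
            G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
            F *= (G.T @ X) / (G.T @ G @ F + 1e-12)

        print("relative fit error:",
              np.linalg.norm(X - G @ F) / np.linalg.norm(X))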

  2. Model-based approach to account for the variation of primary VOC emissions over time in the identification of indoor VOC sources

    DEFF Research Database (Denmark)

    Han, KwangHoon; Zhang, Jensen S.; Wargocki, Pawel

    2012-01-01

    The study objectives were to improve the understanding of the long-term variation of VOC emission chromatograms of building materials and to develop a method to account for this variation in the identification of individual sources of VOC emissions. This is of importance for the application of the source identification method, since materials age over time in real indoor environments. The method is based on mixed air sample measurements containing pollutants from multiple aged materials and the emission signatures of individual new materials determined by PTR-MS. Three emission decay source models were employed [...] exhaust air was sampled by PTR-MS to construct a temporal profile of the emission signature unique to each product type. A similar process was used to measure mixture emissions from multiple materials, in order to apply and validate the developed method for source identification enhancement...

  3. Subtraction of point sources from interferometric radio images through an algebraic forward modeling scheme

    CERN Document Server

    Bernardi, G; Ord, S M; Greenhill, L J; Pindor, B; Wayth, R B; Wyithe, J S B

    2010-01-01

    We present a method for subtracting point sources from interferometric radio images via forward modeling of the instrument response, involving an algebraic nonlinear minimization. The method is applied to simulated maps of the Murchison Wide-field Array but is generally useful in cases where only image data are available. After source subtraction, the residual maps show no statistical difference from the expected thermal noise distribution at all angular scales, indicating high effectiveness of the subtraction. Simulations indicate that the errors in recovering the source parameters decrease with increasing signal-to-noise ratio, which is consistent with the theoretical measurement errors. In applying the technique to simulated snapshot observations with the Murchison Wide-field Array, we found that all 101 sources present in the simulation were recovered, with an average position error of 10 arcsec and an average flux density error of 0.15%. This led to a dynamic range increase of approximately 3 orders of magnitude...
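
    In miniature, the approach fits a parametric forward model of each source to the image by nonlinear least squares and subtracts the best fit. In the sketch below a single Gaussian stands in for the full interferometric instrument response, and all numbers are invented:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(6)
        y, x = np.mgrid[0:64, 0:64]
        beam = 3.0                                   # beam width (pixels)

        def model(p):                                # p = (amp, x0, y0)
            return p[0] * np.exp(-((x - p[1])**2 + (y - p[2])**2)
                                 / (2 * beam**2))

        image = model([5.0, 30.3, 41.7]) + 0.1 * rng.normal(size=(64, 64))

        fit = least_squares(lambda p: (model(p) - image).ravel(),
                            x0=[3.0, 28.0, 40.0])    # rough initial guess
        residual = image - model(fit.x)
        print("fit:", fit.x.round(2), "| residual rms:", residual.std().round(3))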

  4. Accretion Disk Model of Short-Timescale Intermittent Activity in Young Radio Sources

    CERN Document Server

    Czerny, Bozena; Janiuk, Agnieszka; Nikiel-Wroczynski, Blazej; Stawarz, Lukasz

    2009-01-01

    We associate the existence of short-lived compact radio sources with the intermittent activity of the central engine caused by a radiation pressure instability within an accretion disk. Such objects may constitute a numerous sub-class of Gigahertz Peaked Spectrum sources, in accordance with population studies of radio-loud active galaxies, as well as detailed investigations of their radio morphologies. We perform the model computations assuming a viscosity parametrization proportional to the geometric mean of the total and gas pressures. The implied timescales are consistent with the observed ages of the sources. The duration of an active phase for a moderate accretion rate is short enough (< 10^3-10^4 years) that the ejecta are confined within the host galaxy, and thus these sources cannot evolve into large-size radio galaxies unless they are close to the Eddington limit.

  5. Contaminant point source localization error estimates as functions of data quantity and model quality

    Science.gov (United States)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  6. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Directory of Open Access Journals (Sweden)

    Dragutin KERMEK

    2016-04-01

    Full Text Available In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student assignments and projects that involve a lot of source code files these tools are not so effective. Also, issues may occur when source code is given to students in class so they can copy it. In such cases these tools do not provide satisfying results and reports. In this study, we present an improved process model for plagiarism detection when multiple student files exist and allowed source code is present. In the research in this paper we use the Sherlock detection tool, although the presented process model can be combined with any plagiarism detection engine. The proposed model is tested on assignments in three courses in two subsequent academic years.

  7. Second Order Fluid Glow Discharge Model Sustained by Different Source Terms

    Institute of Scientific and Technical Information of China (English)

    D. GUENDOUZ; A. HAMID; A. HENNAD

    2011-01-01

    The behavior of charged particles in a DC low-pressure glow discharge is studied. The electric properties of an argon glow discharge between two plane electrodes, maintained either by a constant source term with uniform electron and ion generation or by secondary electron emission at the cathode, are determined. A fluid model is used to solve self-consistently the first three moments of the Boltzmann equation coupled with the Poisson equation. The stationary spatial distributions of the electron and ion densities, the electric potential, the electric field, and the electron energy, in a two-dimensional (2D) configuration, are presented.

  8. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  9. Significant impacts of irrigation water sources and methods on modeling irrigation effects in the ACME Land Model

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Guoyong; Leung, Lai-Yung; Huang, Maoyi

    2017-07-01

    An irrigation module that considers both irrigation water sources and irrigation methods has been incorporated into the ACME Land Model (ALM). Global numerical experiments were conducted to evaluate the impacts of irrigation water sources and irrigation methods on the simulated irrigation effects. All simulations shared the same irrigation soil moisture target constrained by a global census dataset of irrigation amounts. Irrigation has large impacts on terrestrial water balances especially in regions with extensive irrigation. Such effects depend on the irrigation water sources: surface-water-fed irrigation leads to decreases in runoff and water table depth, while groundwater-fed irrigation increases water table depth, with positive or negative effects on runoff depending on the pumping intensity. Irrigation effects also depend significantly on the irrigation methods. Flood irrigation applies water in large volumes within short durations, resulting in much larger impacts on runoff and water table depth than drip and sprinkler irrigations. Differentiating the irrigation water sources and methods is important not only for representing the distinct pathways of how irrigation influences the terrestrial water balances, but also for estimating irrigation water use efficiency. Specifically, groundwater pumping has lower irrigation water use efficiency due to enhanced recharge rates. Different irrigation methods also affect water use efficiency, with drip irrigation the most efficient followed by sprinkler and flood irrigation. Our results highlight the importance of explicitly accounting for irrigation sources and irrigation methods, which are the least understood and constrained aspects in modeling irrigation water demand, water scarcity and irrigation effects in Earth System Models.

  10. Computer Modeling and Simulation Evaluation of High Power LED Sources for Secondary Optical Design

    Institute of Scientific and Technical Information of China (English)

    SU Hong-dong; WANG Ya-jun; DONG Ji-yang; CHEN Zhong

    2007-01-01

    A novel computer modeling method for high-power light-emitting diodes (LEDs) is proposed and demonstrated. It covers the geometrical structure and optical properties of a high-power LED, as well as the definition of the LED dies with their spatial and angular distributions. The merits and drawbacks of traditional modeling methods, when applied to high-power LEDs in secondary optical design, are discussed. Two commercial high-power LEDs are simulated using the proposed computer modeling method. A correlation coefficient is proposed to compare and analyze the simulation results against the manufacturing specifications. The source model is validated by correlation coefficients above 99% for different surface incident angle intervals.

  11. Acoustic radiation field of the truncated parametric source generated by a piston radiator: model and experiment

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xiaoliang; ZHU Zhemin; DU Gonghuan; TANG Haiqing; LI Shui; MIAO Rongxing

    2001-01-01

    A theoretical model is presented to describe the parametric acoustic field generated by a piston radiator. In the model, the high-frequency primary wave interaction region that is truncated by a low-pass acoustic filter can be viewed as a cylindrical source within the Rayleigh distance of the piston. When the radius of the piston is much smaller than the length of the parametric region, this model reduces to Berktay's end-fire line array model. Comparisons between numerical calculations and experimental measurements show that the model describes the generated parametric sound field well, especially near the axis.

  12. Electroencephalography (EEG) Forward Modeling via H(div) Finite Element Sources with Focal Interpolation

    CERN Document Server

    Pursiainen, Sampsa; Wolters, Carsten H

    2016-01-01

    The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. When conducting an EEG evaluation, placing the source currents in the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used, while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position-based optimization (PBO) and the mean position/orientation (MPO) method....

  13. A Model for the Origin of High Density in Loop-top X-ray Sources

    CERN Document Server

    Longcope, D W

    2011-01-01

    Super-hot loop-top sources, detected in some large solar flares, are compact sources of HXR emission with spectra matching thermal electron populations exceeding 30 megakelvins. High observed emission measure, as well as inference of electron thermalization within the small source region, both provide evidence of high densities at the loop top, typically more than an order of magnitude above ambient. Where some investigators have suggested such density enhancement results from a rapid enhancement in the magnetic field strength, we propose an alternative model, based on Petschek reconnection, whereby loop-top plasma is heated and compressed by slow magnetosonic shocks generated self-consistently through flux retraction following reconnection. Under steady conditions such shocks can enhance density by no more than a factor of four. These steady shock relations (Rankine-Hugoniot relations) turn out to be inapplicable to Petschek's model owing to transient effects of thermal conduction. The actual density enhancement...
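
    The factor-of-four ceiling quoted above follows directly from the Rankine-Hugoniot density jump for a monatomic (gamma = 5/3) gas; a few lines suffice to verify it:

```python
# Rankine-Hugoniot density jump across a steady shock of Mach number M:
# rho2/rho1 = (g + 1) M^2 / ((g - 1) M^2 + 2), saturating at (g + 1)/(g - 1).
gamma = 5.0 / 3.0
for M in (2.0, 5.0, 10.0, 100.0):
    ratio = (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
    print(f"Mach {M:6.1f}: rho2/rho1 = {ratio:.3f}")
# -> the compression ratio approaches (gamma + 1)/(gamma - 1) = 4
```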

  14. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, M. C.; Wang, L. M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels increases. The effect of changing the room's material properties was studied in relation to turning the source around 180 deg and on the range of acoustic parameters from the four- and 13-beam sources. As the room becomes increasingly diffuse, the importance of the modeled directivity decreases when considering reverberation time. However, for the three other parameters evaluated (sound-pressure level, clarity index, and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity when using computer models.

  15. Beam-based model of broad-band impedance of the Diamond Light Source

    Science.gov (United States)

    Smaluk, Victor; Martin, Ian; Fielder, Richard; Bartolini, Riccardo

    2015-06-01

    In an electron storage ring, the interaction between a single-bunch beam and the vacuum chamber impedance affects the beam parameters, which can be measured rather precisely, so beam-based numerical models of the longitudinal and transverse impedances can be developed. At the Diamond Light Source (DLS), a set of measured data has been used to obtain the model parameters, including the current-dependent shift of the betatron tunes and synchronous phase, chromatic damping rates, and bunch lengthening. A MATLAB code for multiparticle tracking has been developed. The tracking results and analytical estimations are quite consistent with the measured data. Since Diamond has the shortest natural bunch length among all light sources in standard operation, these studies of collective effects with short bunches are relevant to many facilities, including the next generation of light sources.

  16. An eighth-scale speech source for subjective assessments in acoustic models

    Science.gov (United States)

    Orlowski, R. J.

    1981-08-01

    The design of a source is described which is suitable for making speech recordings in eighth-scale acoustic models of auditoria. An attempt was made to match the directionality of the source to that of the human voice using data reported in the literature. The design required a narrow aperture, which was provided by mounting an inverted conical horn over the diaphragm of a high-frequency loudspeaker. Resonance problems were encountered with the use of a horn, and a description is given of the electronic techniques adopted to minimize the effect of these resonances. Subjective and objective assessments of the completed speech source have proved satisfactory. It has been used in a modelling exercise concerned with the acoustic design of a theatre with a thrust-type stage.

  17. Dynamic corner frequency in source spectral model for stochastic synthesis of ground motion

    Institute of Scientific and Technical Information of China (English)

    Xiaodan Sun; Xiaxin Tao; Guoxin Wang; Taojun Liu

    2009-01-01

    The static and dynamic corner frequencies used in stochastic synthesis of ground motion from finite-fault modeling are introduced, and conceptual disadvantages of the two are discussed in this paper. Furthermore, the non-uniform radiation of seismic waves on the fault plane, as well as the trend that larger rupture areas produce lower corner frequencies, can be described by the source spectral model developed by the authors. A new dynamic corner frequency can be derived directly from this model. The dependence of ground motion on the subfault size can be eliminated if this source spectral model is adopted in the synthesis. Finally, the approach presented is validated by comparing synthesized and observed ground motions at six rock stations during the 1994 Northridge earthquake.
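
    For concreteness, the sketch below places a dynamic corner frequency inside an omega-squared source spectrum using the widely cited Motazedian-Atkinson (2005) parameterization, in which the corner frequency falls as the cumulative number of ruptured subfaults grows. The paper above develops its own spectral model, so this functional form and all numerical values are illustrative assumptions only.

```python
# Dynamic corner frequency in finite-fault stochastic synthesis (sketch).
import numpy as np

beta = 3.7          # shear-wave velocity [km/s]
dsigma = 50.0       # stress drop [bar]
M0 = 1.0e26         # total seismic moment [dyne cm]
N = 100             # number of subfaults

def fc_dynamic(n_ruptured):
    """Corner frequency [Hz] once n_ruptured subfaults have failed."""
    m0_avg = M0 / N
    return 4.9e6 * beta * (dsigma / (n_ruptured * m0_avg)) ** (1.0 / 3.0)

def accel_spectrum(f, n_ruptured):
    """Omega-squared subfault acceleration spectrum with dynamic corner."""
    fc = fc_dynamic(n_ruptured)
    return (2 * np.pi * f) ** 2 * (M0 / N) / (1.0 + (f / fc) ** 2)

for n in (1, 10, 100):
    print(f"{n:3d} subfaults ruptured: fc = {fc_dynamic(n):.3f} Hz")
# the corner frequency migrates downward as the ruptured area grows, which
# is what removes the dependence of the synthesis on the chosen subfault size
```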

  18. Source-Flux-Fate Modelling of Priority Pollutants in Stormwater Systems

    DEFF Research Database (Denmark)

    Vezzaro, Luca

    Given the significant level of uncertainty affecting stormwater quality models, the identification of sources of uncertainty (based on Global Sensitivity Analysis, GSA) and the quantification of model prediction bounds (based on pseudo-Bayesian methods such as the Generalized Likelihood Uncertainty Estimation, GLUE) are presented as crucial elements in the modelling of stormwater PP. Special focus is on assessing the use of combined informal likelihood measures assigning equal weights to different model outputs (flow and quality measurements). Management of the spatially heterogeneous sources of stormwater PP requires a detailed catchment characterization, based on land use and on information stored in Geographical Information Systems (GIS). The analysis carried out in the thesis, which compares characterization approaches with different levels of detail, suggests that this approach allows...
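
    As an illustration of the pseudo-Bayesian machinery mentioned above, the sketch below runs a GLUE analysis with a combined informal likelihood assigning equal weights to two model outputs (flow and pollutant concentration). The stormwater "model", its parameter ranges, and the observations are hypothetical placeholders.

```python
# GLUE sketch: Monte Carlo sampling, combined informal likelihood over two
# outputs with equal weights, behavioural threshold, and prediction bounds.
import numpy as np

rng = np.random.default_rng(1)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood in GLUE."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def model(theta, t):
    """Placeholder stormwater model returning (flow, concentration)."""
    k, c0 = theta
    return np.exp(-k * t), c0 * np.exp(-0.5 * k * t)

t = np.linspace(0.0, 10.0, 50)
obs_flow, obs_conc = model((0.3, 2.0), t)         # synthetic "observations"

behavioural, weights = [], []
for _ in range(5000):
    theta = rng.uniform([0.01, 0.1], [1.0, 5.0])  # uniform priors
    sim_flow, sim_conc = model(theta, t)
    L = 0.5 * nse(sim_flow, obs_flow) + 0.5 * nse(sim_conc, obs_conc)
    if L > 0.5:                                   # behavioural threshold
        behavioural.append(sim_conc)
        weights.append(L)

preds = np.array(behavioural)
lo, hi = np.percentile(preds, [5, 95], axis=0)    # 90% prediction bounds
print(f"{len(preds)} behavioural sets (mean likelihood {np.mean(weights):.2f}); "
      f"mean bound width {np.mean(hi - lo):.3f}")
```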

  19. Modeling and measuring the transport and scattering of energetic debris in an extreme ultraviolet plasma source

    Science.gov (United States)

    Sporre, John R.; Elg, Daniel T.; Kalathiparambil, Kishor K.; Ruzic, David N.

    2016-01-01

    A theoretical model for describing the propagation and scattering of energetic species in an extreme ultraviolet (EUV) light lithography source is presented. An EUV-emitting XTREME XTS 13-35 Z-pinch plasma source is modeled, with a focus on the effect of chamber pressure and buffer gas mass on energetic ion and neutral debris transport. The interactions of the energetic debris species, generated by the EUV-emitting plasma, with the buffer gas and chamber walls are treated as scattering events in the model, and the trajectories of the individual atomic species involved are traced using a Monte Carlo algorithm. This study aims to establish the means by which debris is transported to the intermediate focus, with the intent of verifying the various mitigation techniques currently employed to increase EUV lithography efficiency. The modeling is compared with an experimental investigation.
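
    A stripped-down version of such a Monte Carlo transport scheme is sketched below: debris atoms take exponentially distributed free flights between collisions with the buffer gas, losing energy according to the mean hard-sphere transfer fraction. The cross-section, masses, energies, and geometry are illustrative assumptions, not the XTS 13-35 configuration.

```python
# Monte Carlo sketch of energetic-debris transport through a buffer gas.
import numpy as np

rng = np.random.default_rng(0)
kB = 1.380649e-23
sigma = 1e-19                   # collision cross-section [m^2] (placeholder)
m_debris, m_gas = 120.0, 4.0    # relative masses (heavy debris atom vs helium)

def reaches_target(E0, pressure_pa, T=300.0, E_stop=1.0, d_target=0.5):
    """Track one debris atom; True if it reaches the target plane [m]."""
    n_gas = pressure_pa / (kB * T)        # buffer-gas number density
    mfp = 1.0 / (n_gas * sigma)           # mean free path
    x, E, mu = 0.0, E0, 1.0               # position, energy [eV], dir cosine
    while E > E_stop and 0.0 <= x < d_target:
        x += mu * rng.exponential(mfp)    # free flight to next collision
        mu = rng.uniform(-1.0, 1.0)       # crude isotropic re-emission
        # mean fractional energy retained after a hard-sphere collision:
        E *= 1.0 - 2.0 * m_debris * m_gas / (m_debris + m_gas) ** 2
    return x >= d_target

for p in (0.1, 1.0, 10.0):                # chamber pressure [Pa]
    frac = np.mean([reaches_target(5e3, p) for _ in range(2000)])
    print(f"{p:5.1f} Pa: fraction reaching intermediate focus = {frac:.3f}")
```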

  20. Application of crowd-sourced data to multi-scale evolutionary exposure and vulnerability models

    Science.gov (United States)

    Pittore, Massimiliano

    2016-04-01

    Seismic exposure, defined as the assets (population, buildings, infrastructure) exposed to earthquake hazard and susceptible to damage, is a critical, but often neglected, component of seismic risk assessment. This partly stems from the burden associated with compiling a useful and reliable model over wide spatial areas. While detailed engineering data still have to be collected in order to constrain exposure and vulnerability models, the availability of increasingly large crowd-sourced datasets (e.g., OpenStreetMap) opens up the exciting possibility of generating incrementally evolving models. Integrating crowd-sourced and authoritative data using statistical learning methodologies can reduce model uncertainties and also provide additional drive and motivation for volunteered geoinformation collection. A case study in Central Asia will be presented and discussed.

  1. Source identification of benzene emissions in Texas City using an adjoint neighborhood scale transport model

    Science.gov (United States)

    Guven, B.; Olaguer, E. P.; Herndon, S. C.; Kolb, C. E.; Cuclis, A.

    2012-12-01

    During the "Formaldehyde and Olefins from Large Industrial Sources" (FLAIR) study in 2009, the Aerodyne Research Inc. (ARI) mobile laboratory performed real-time in situ measurements of VOCs, NOx and HCHO in Texas City, TX on May 7, 2009 from 11 am to 3 pm. This high resolution dataset collected in a predominantly industrial area provides an ideal test bed for advanced source attribution. Our goal was to identify and quantify emission sources within the largest facility in Texas City most likely responsible for measured benzene concentrations. For this purpose, fine horizontal resolution (200 m x 200 m) 4D variational (4Dvar) inverse modeling was performed by running the HARC air quality transport model in adjoint mode based on ambient concentrations measured by the mobile laboratory. The simulations were conducted with a horizontal domain size of 4 km x 4 km for a four-hour period (11 am to 3 pm). Potential emission unit locations within the facility were specified using a high spatial resolution digital model of the largest industrial complex in the area. The HARC model was used to infer benzene emission rates from all potential source locations that would account for the benzene concentrations measured by the Aerodyne mobile laboratory in the vicinity of the facility. A Positive Matrix Factorization receptor model was also applied to the concentrations of other compounds measured by the mobile lab to support the source attribution by the inverse model. Although previous studies attributed measured benzene concentrations during the same time period to a cooling tower unit at the industrial complex, this study found that some of the flare units in the facility were also associated with the elevated benzene concentrations. The emissions of some of these flare units were found to be greater than reported in emission inventories, by up to two orders of magnitude.

  2. Dosimetric comparison between model 9011 and 6711 sources in prostate implants.

    Science.gov (United States)

    Zhang, Hualin; Beyer, David

    2013-01-01

    The purpose of this work is to evaluate the model 9011 iodine-125 ((125)I) source in prostate implants by comparing the dosimetric coverage provided by 6711 vs 9011 source implants. Postimplant dosimetry was performed in 18 consecutively implanted patients with prostate cancer. Two were implanted with the 9011 source and 16 with the 6711 source. For purposes of comparison, each implant was then recalculated assuming use of the other source. The same commercially available planning system was used and the specific source data for both 6711 and 9011 products were entered. The results of these calculations are compared side by side in terms of the isodose values covering 100% (D100) and 90% (D90) of the prostate volume, and the percentages of the volumes of prostate, bladder, rectum, and urethra covered by 200% (V200), 150% (V150), 100% (V100), 50% (V50), and 20% (V20) of the prescribed dose. The 6711 source data overestimate coverage by 6.4% (ranging from 4.9% to 6.9%; median 6.6%) at D100 and by 6.6% (ranging from 6.2% to 6.8%; median 6.6%) at D90 compared with actual 9011 data. Greater discrepancies of up to 67% are seen at higher dose levels: the average reduction for V100 is 2.7% (ranging from 0.6% to 7.7%; median 2.3%), for V150 is 14.6% (ranging from 6.1% to 20.5%; median 15.3%), and for V200 is 14.9% (ranging from 4.8% to 19.1%; median 16%); similar reductions are seen in bladder, rectal, and urethral coverage. This work demonstrates a clear difference in dosimetric behavior between the 9011 and 6711 sources. Using the 6711 source data for 9011 source implants would create a pronounced error in dose calculation. This study provides evidence that the 9011 source can provide the same dosimetric quality as the 6711 source, if properly used; however, the 6711 source data should not be considered as a surrogate for the 9011 source implants.

  3. Dosimetric comparison between model 9011 and 6711 sources in prostate implants

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hualin, E-mail: zhang248@iupui.edu [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States); Arizona Oncology Services, Phoenix, AZ (United States); Beyer, David [Arizona Oncology Services, Phoenix, AZ (United States)

    2013-07-01

    The purpose of this work is to evaluate the model 9011 iodine-125 ((125)I) source in prostate implants by comparing the dosimetric coverage provided by 6711 vs 9011 source implants. Postimplant dosimetry was performed in 18 consecutively implanted patients with prostate cancer. Two were implanted with the 9011 source and 16 with the 6711 source. For purposes of comparison, each implant was then recalculated assuming use of the other source. The same commercially available planning system was used and the specific source data for both 6711 and 9011 products were entered. The results of these calculations are compared side by side in terms of the isodose values covering 100% (D100) and 90% (D90) of the prostate volume, and the percentages of the volumes of prostate, bladder, rectum, and urethra covered by 200% (V200), 150% (V150), 100% (V100), 50% (V50), and 20% (V20) of the prescribed dose. The 6711 source data overestimate coverage by 6.4% (ranging from 4.9% to 6.9%; median 6.6%) at D100 and by 6.6% (ranging from 6.2% to 6.8%; median 6.6%) at D90 compared with actual 9011 data. Greater discrepancies of up to 67% are seen at higher dose levels: the average reduction for V100 is 2.7% (ranging from 0.6% to 7.7%; median 2.3%), for V150 is 14.6% (ranging from 6.1% to 20.5%; median 15.3%), and for V200 is 14.9% (ranging from 4.8% to 19.1%; median 16%); similar reductions are seen in bladder, rectal, and urethral coverage. This work demonstrates a clear difference in dosimetric behavior between the 9011 and 6711 sources. Using the 6711 source data for 9011 source implants would create a pronounced error in dose calculation. This study provides evidence that the 9011 source can provide the same dosimetric quality as the 6711 source, if properly used; however, the 6711 source data should not be considered as a surrogate for the 9011 source implants.

  4. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    Directory of Open Access Journals (Sweden)

    Stålring Jonna C

    2011-07-01

    Full Text Available Background: Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure-activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary as well as public data sets is growing rapidly. Hence, there is a demand for computationally efficient machine learning algorithms that are easily available to researchers without extensive machine learning knowledge. As Open Source solutions support the scientific principles of transparency and reproducibility, they are increasingly acknowledged by regulatory authorities. Thus, an Open Source, state-of-the-art, high performance machine learning platform, interfacing multiple customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results: This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient, data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. Conclusions: AZOrange is a step towards meeting the need for an Open Source high performance machine learning platform, supporting the...

  5. Room acoustics computer modelling: Study of the effect of source directivity on auralizations

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2006-01-01

    Auralizations are very useful in the design of performing arts spaces, where auralization is the process of rendering audible the sound field in a space, in such a way as to simulate the binaural listening experience at a given position in the modelled space. One of the fundamental modeling inputs ... was that subjects rated the auralizations made with an increasing number of channels as sounding more realistic, indicating that when more accurate source directivity information is used, a more realistic auralization is possible....

  6. Sources of uncertainties in modelling black carbon at the global scale

    OpenAIRE

    2010-01-01

    Our understanding of the global black carbon (BC) cycle is essentially qualitative due to uncertainties in our knowledge of its properties. This work investigates two sources of uncertainty in modelling black carbon: those due to the use of different schemes for BC ageing and its removal rate in the global Transport-Chemistry model TM5, and those due to the uncertainties in the definition and quantification of the observations, which propagate through to both the emission inventories, and the...

  7. Earthquake source model using strong motion displacement as response of finite elastic media

    Indian Academy of Sciences (India)

    R N Iyengar; Shailesh K R Agrawal

    2001-03-01

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, viewed as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the source location and forces generated during an earthquake as an inverse problem in structural dynamics. Based on this analogy, a simple model for the basic earthquake source is proposed. The unknown source is assumed to be a sequence of impulses acting at locations yet to be found. These unknown impulses and their locations are found using the normal mode expansion along with a minimization of the mean square error. The medium is assumed to be finite, elastic, homogeneous, layered and horizontal, with a specific set of boundary conditions. Detailed results are obtained for the Uttarkashi earthquake. The impulse locations exhibit a linear structure closely associated with the causative fault. The results obtained are shown to be in good agreement with reported values. The proposed engineering model is then used to simulate the acceleration time histories at a few recording stations. The earthquake source, in terms of a sequence of impulses acting at different locations, is applied on a 2D finite elastic medium and acceleration time histories are found using finite element methods. The synthetic accelerations obtained closely match the recorded accelerations.
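
    In its simplest form, the inverse step described above reduces to linear least squares: for each candidate location, the impulse amplitude minimizing the mean-square error between the modal-sum response and the record is found, and the location with the smallest residual is kept. The sketch below uses toy mode shapes rather than a layered elastic medium.

```python
# Least-squares identification of an impulse source from a modal response.
import numpy as np

rng = np.random.default_rng(3)
nt, n_modes, n_sites = 400, 8, 50
t = np.linspace(0.0, 20.0, nt)
omegas = np.linspace(1.0, 8.0, n_modes)          # modal frequencies
phi = rng.normal(size=(n_sites, n_modes))        # toy mode shapes

def greens(site):
    """Receiver response to a unit impulse at `site` (normal-mode sum)."""
    return sum(phi[site, k] * np.sin(omegas[k] * t) / omegas[k]
               for k in range(n_modes))

true_site, true_amp = 17, 2.5
d = true_amp * greens(true_site) + 0.05 * rng.normal(size=nt)  # "record"

def residual(site):
    """Sum-of-squares misfit after the best-fit amplitude at `site`."""
    return np.linalg.lstsq(greens(site)[:, None], d, rcond=None)[1][0]

best = min(range(n_sites), key=residual)
amp = np.linalg.lstsq(greens(best)[:, None], d, rcond=None)[0][0]
print(f"recovered site {best} (true {true_site}), amplitude {amp:.2f}")
```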

  8. Thermal imager sources of non-uniformities: modeling of static and dynamic contributions during operations

    Science.gov (United States)

    Sozzi, B.; Olivieri, M.; Mariani, P.; Giunti, C.; Zatti, S.; Porta, A.

    2014-05-01

    Due to the rapid growth of cooled-detector sensitivity in recent years, temperature differences of 10-20 mK between adjacent objects can theoretically be discerned in the image if the calibration algorithm (NUC) is capable of taking into account and compensating every spatial noise source. To predict how robust the NUC algorithm is in all working conditions, modeling the flux impinging on the detector becomes essential to control and improve the quality of a properly calibrated image in all scene/ambient conditions, including every source of spurious signal. The available literature deals only with non-uniformities caused by pixel-to-pixel differences of detector parameters and by the difference between the reflection of the detector cold parts and the housing at the operating temperature. These models do not explain the effects on the NUC results due to vignetting, dynamic sources outside and inside the FOV, or reflected contributions from hot spots inside the housing (for example, a thermal reference off the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are considered separately and represented by two independent transfer functions; 2) on every pixel of the array, the amounts of photonic signal coming from the different spurious sources are summed to evaluate the effect on residual spatial noise under dynamic operating conditions. The article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
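
    The practical consequence of such a model is easy to demonstrate numerically: a standard two-point NUC calibrated against uniform references removes pixel gain and offset dispersion exactly, yet a scene-independent stray-flux term (here a vignetted housing contribution absent during calibration) survives as residual spatial noise. All numbers below are arbitrary illustrations.

```python
# Two-point NUC vs a stray in-housing flux term (toy demonstration).
import numpy as np

rng = np.random.default_rng(7)
n = 64
gain = 1.0 + 0.02 * rng.normal(size=(n, n))      # pixel gain dispersion
offset = 5.0 * rng.normal(size=(n, n))           # pixel offset dispersion
vignette = np.fromfunction(                      # radial vignetting profile
    lambda i, j: 1.0 - 0.1 * (((i - n/2)**2 + (j - n/2)**2) / (n/2)**2),
    (n, n))

def detector(flux):
    """Linear pixel response with fixed-pattern gain/offset dispersion."""
    return gain * flux + offset

# two-point calibration with uniform references (no stray flux present)
y1, y2 = detector(100.0), detector(200.0)
g_est = (y2 - y1) / 100.0
o_est = y1 - g_est * 100.0

def corrected(flux):
    return (detector(flux) - o_est) / g_est

stray = 8.0 * vignette                 # scene-independent spurious flux
residual = corrected(150.0 + stray) - (150.0 + stray.mean())
print(f"residual spatial noise (std): {residual.std():.3f} flux units")
```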

  9. OpenMx: An Open Source Extended Structural Equation Modeling Framework

    Science.gov (United States)

    Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John

    2011-01-01

    OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…

  11. Three-dimensional inverse modelling of magnetic anomaly sources based on a genetic algorithm

    Science.gov (United States)

    Montesinos, Fuensanta G.; Blanco-Montenegro, Isabel; Arnoso, José

    2016-04-01

    We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
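
    The evolutionary loop described above can be compressed to a few lines; the sketch below evolves binary voxel models under selection, single-point crossover, and bit-flip mutation, with a roughness penalty standing in for the smoothing operator. The forward operator is a toy linear kernel, not a magnetic dipole integrator.

```python
# Compact genetic-algorithm inversion sketch for a voxelized source body.
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_obs, pop_size = 200, 60, 80
G = rng.normal(size=(n_obs, n_vox))              # toy linear forward kernel
true = (rng.random(n_vox) < 0.15).astype(float)  # synthetic magnetized body
d_obs = G @ true + 0.1 * rng.normal(size=n_obs)

def fitness(m):
    misfit = np.sum((G @ m - d_obs) ** 2)
    roughness = np.sum(np.abs(np.diff(m)))       # favour compact solutions
    return -(misfit + 0.5 * roughness)

pop = (rng.random((pop_size, n_vox)) < 0.15).astype(float)
for gen in range(300):
    f = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(f)[-pop_size // 2:]]          # selection
    kids = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_vox)                       # crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_vox) < 0.01                    # mutation
        child[flip] = 1.0 - child[flip]
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"voxels recovered correctly: {np.mean(best == true):.2%}")
```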

  12. Sources of uncertainties in modelling black carbon at the global scale

    NARCIS (Netherlands)

    Vignati, E.; Karl, M.; Krol, M.C.; Wilson, J.; Stier, P.; Cavalli, F.

    2010-01-01

    Our understanding of the global black carbon (BC) cycle is essentially qualitative due to uncertainties in our knowledge of its properties. This work investigates two sources of uncertainty in modelling black carbon: those due to the use of different schemes for BC ageing and its removal rate in the global Transport-Chemistry model TM5, and those due to uncertainties in the definition and quantification of the observations.

  13. Mathematical Model and Simulation of Electrical Arc Welding as a Moving Source in Protective Gas Welding

    Directory of Open Access Journals (Sweden)

    Lenuta Suciu

    2006-10-01

    Full Text Available This work presents the mathematical model of electrical arc welding and the simulation of the electrical arc as a moving source with the help of the Ansys software, passing through the three stages of simulation: pre-processing, processing (solution), and post-processing.

  15. Modelling the Impact of Ground Planes on Antenna Radiation Using the Method of Auxiliary Sources

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    The Method of Auxiliary Sources is employed to model the impact of finite ground planes on the radiation from antennas. In many cases the computational cost of available commercial tools restricts the simulations to include only a small ground plane or, by use of the image principle, an infinite ground plane.
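
    The essence of the Method of Auxiliary Sources is easy to illustrate in an electrostatic analogue: fictitious point sources are placed on an auxiliary surface inside the body, and their amplitudes are found by least-squares collocation of the boundary condition on the physical surface. The geometry and units below are arbitrary, and a real antenna model would use dynamic (wave) sources instead.

```python
# Electrostatic analogue of the Method of Auxiliary Sources in 2-D.
import numpy as np

n_aux, n_col = 24, 96
r_body, r_aux = 1.0, 0.7                 # boundary and auxiliary radii
th_a = np.linspace(0, 2*np.pi, n_aux, endpoint=False)
th_c = np.linspace(0, 2*np.pi, n_col, endpoint=False)
aux = r_aux * np.stack([np.cos(th_a), np.sin(th_a)], axis=1)
col = r_body * np.stack([np.cos(th_c), np.sin(th_c)], axis=1)

def potential(points, sources, q):
    """2-D free-space potential -ln(r)/(2*pi), summed over point sources."""
    r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=2)
    return (-np.log(r) / (2 * np.pi)) @ q

# incident field from an external unit source; enforce total potential = 0
ext = np.array([[3.0, 0.0]])
v_inc = potential(col, ext, np.array([1.0]))
A = -np.log(np.linalg.norm(col[:, None, :] - aux[None, :, :], axis=2)) / (2*np.pi)
q_aux, *_ = np.linalg.lstsq(A, -v_inc, rcond=None)   # amplitudes by collocation

print(f"max boundary residual: {np.abs(A @ q_aux + v_inc).max():.2e}")
```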

  16. The AAM-API: An Open Source Active Appearance Model Implementation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2003-01-01

    This paper presents a public domain implementation of the Active Appearance Model framework and gives examples of using it for segmentation and analysis of medical images. The software is open source, designed with efficiency in mind, and has been thoroughly tested and evaluated in several medical applications.

  17. Identifying the origin of waterbird carcasses in Lake Michigan using a neural network source tracking model

    Science.gov (United States)

    Kenow, Kevin P.; Ge, Zhongfu; Fara, Luke J.; Houdek, Steven C.; Lubinski, Brian R.

    2016-01-01

    Avian botulism type E is responsible for extensive waterbird mortality on the Great Lakes, yet the actual site of toxin exposure remains unclear. Beached carcasses are often used to describe the spatial aspects of botulism mortality outbreaks, but lack specificity of offshore toxin source locations. We detail methodology for developing a neural network model used for predicting waterbird carcass motions in response to wind, wave, and current forcing, in lieu of a complex analytical relationship. This empirically trained model uses current velocity, wind velocity, significant wave height, and wave peak period in Lake Michigan simulated by the Great Lakes Coastal Forecasting System. A detailed procedure is further developed to use the model for back-tracing waterbird carcasses found on beaches in various parts of Lake Michigan, which was validated using drift data for radiomarked common loon (Gavia immer) carcasses deployed at a variety of locations in northern Lake Michigan during September and October of 2013. The back-tracing model was further used on 22 non-radiomarked common loon carcasses found along the shoreline of northern Lake Michigan in October and November of 2012. The model-estimated origins of those cases pointed to some common source locations offshore that coincide with concentrations of common loons observed during aerial surveys. The neural network source tracking model provides a promising approach for identifying locations of botulinum neurotoxin type E intoxication and, in turn, contributes to developing an understanding of the dynamics of toxin production and possible trophic transfer pathways.
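
    In outline, the back-tracing works as sketched below: a regressor is trained to map the forcing variables (currents, wind, waves) to carcass drift velocity, and a beached carcass is then stepped backwards in time through the forcing fields. The training data and the forcing lookup here are synthetic placeholders for the GLCFS fields used in the study.

```python
# Learned drift model plus reverse time stepping (synthetic sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# forcing: [u_current, v_current, u_wind, v_wind, sig. wave height, peak period]
X = rng.normal(size=(2000, 6))
true_w = np.array([[0.9, 0.0, 0.03, 0.0, 0.05, 0.0],
                   [0.0, 0.9, 0.0, 0.03, 0.0, 0.05]])
y = X @ true_w.T + 0.01 * rng.normal(size=(2000, 2))   # drift velocity [m/s]

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

def back_trace(pos, forcing_at, hours):
    """Step a recovery location backwards through the learned drift field."""
    dt = 3600.0
    for h in range(hours):
        v = net.predict(forcing_at(pos, h)[None, :])[0]
        pos = pos - v * dt              # reverse the predicted drift
    return pos

forcing_at = lambda pos, h: rng.normal(size=6)   # placeholder forcing lookup
origin = back_trace(np.array([0.0, 0.0]), forcing_at, 48)
print("estimated offshore origin (toy coordinates):", origin)
```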

  18. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    Introduction: Electro-encephalography (EEG) holds great promise for functional brain imaging, due to its high temporal resolution, low-cost equipment and the possibility of performing the experiments under much more realistic conditions as compared to functional magnetic resonance imaging and positron emission tomography. Today's EEG brain imaging methods operate with the assumption that the forward model is known when the source estimation is performed. Many sources of uncertainty are involved in the formulation of the forward model, like tissue segmentation, tissue conductivities, and electrode locations. In this contribution we investigate how forward model uncertainty influences source localization. Methods: The analysis was based on 3-sphere models, where a high-resolution reference head model, denoted the 'true forward model', was compared with lower resolution forward models...

  19. Comparison of lead isotopes with source apportionment models, including SOM, for air particulates.

    Science.gov (United States)

    Gulson, Brian; Korsch, Michael; Dickson, Bruce; Cohen, David; Mizon, Karen; Davis, J Michael

    2007-08-01

    We have measured high-precision lead isotopes in PM(2.5) particulates from a highly trafficked site (Mascot) and a rural site (Richmond) in the Sydney Basin, New South Wales, Australia, to compare with isotopic data from total suspended particulates (TSP) from other sites in the Sydney Basin and to evaluate relationships with source fingerprints obtained from multi-element PM(2.5) data. The isotopic data for the period 1998 to 2004 show seasonal peaks and troughs that are more pronounced at the rural site for the PM(2.5) samples but are consistent with the TSP. The Self Organising Map (SOM) method has been applied to the multi-element PM(2.5) data to evaluate its use in obtaining fingerprints for comparison with standard statistical procedures (the ANSTO model). As seasonal effects are also significant for the multi-element data, the SOM modelling is reported as site- and season-dependent. At the Mascot site, the ANSTO model exhibits decreasing (206)Pb/(204)Pb ratios with increasing contributions of fingerprints for "secondary smoke" (industry), "soil", "smoke" and "seaspray". Similar patterns were shown by SOM winter fingerprints for both sites. At the rural site, there are large isotopic variations, but for the majority of samples these are not associated with increased contributions from the main sources in the ANSTO model. For two winter sampling times, there are increased contributions from "secondary industry", "smoke", "soil" and seaspray, with one time having a source or sources of Pb similar to that of Mascot. The only positive relationship between increasing (206)Pb/(204)Pb ratio and source contributions is found at the rural site using the SOM summer fingerprints, both of which show a significant contribution from sulphur. Several of the fingerprints using either model have significant contributions from black carbon (BC) and/or sulphur (S) that probably derive from diesel fuels and industrial sources. Increased contributions from sources with the SOM summer...

  20. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.

  1. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    Science.gov (United States)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior is solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through the acoustic modes of the enclosure changes the source strengths and hence the driving voltage signals applied to the secondary loudspeakers. The practical significance of this model is to provide better insight into the performance of sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones is placed within a fraction of a wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance, depending on the electromechanical properties of the loudspeakers.
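
    The coupling the article quantifies can be illustrated with a rigid-wall modal expansion of a rectangular enclosure: the pressure transfer between any two points is a sum over cosine modes, and the off-diagonal terms of the resulting source-to-source matrix measure how strongly one loudspeaker loads another. Dimensions, damping, and positions below are illustrative.

```python
# Modal pressure-transfer matrix between two sources in a rigid-wall box.
import numpy as np

Lx, Ly, Lz, c = 4.0, 3.0, 2.5, 343.0
modes = [(l, m, n) for l in range(4) for m in range(4) for n in range(4)]

def psi(r, mode):
    """Rigid-wall cosine mode shape evaluated at point r."""
    l, m, n = mode
    return (np.cos(l*np.pi*r[0]/Lx) * np.cos(m*np.pi*r[1]/Ly)
            * np.cos(n*np.pi*r[2]/Lz))

def transfer(ra, rb, f, zeta=0.02):
    """Pressure transfer between points ra and rb at frequency f."""
    w = 2*np.pi*f
    total = 0.0 + 0.0j
    for mode in modes:
        wn = c*np.pi*np.sqrt((mode[0]/Lx)**2 + (mode[1]/Ly)**2
                             + (mode[2]/Lz)**2)
        total += psi(ra, mode)*psi(rb, mode) / (wn**2 - w**2 + 2j*zeta*wn*w)
    return total

srcs = [np.array([0.5, 0.5, 0.5]), np.array([3.5, 2.5, 0.5])]
Z = np.array([[transfer(a, b, f=60.0) for b in srcs] for a in srcs])
print(f"relative mutual coupling at 60 Hz: {abs(Z[0, 1]) / abs(Z[0, 0]):.2f}")
```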

  2. Constraints on coronal turbulence models from source sizes of noise storms at 327 MHz

    CERN Document Server

    Subramanian, Prasad

    2010-01-01

    We seek to reconcile observations of small source sizes in the solar corona at 327 MHz with predictions of scattering models that incorporate refractive index effects, inner scale effects and a spherically diverging wavefront. We use an empirical prescription for the turbulence amplitude $C_{N}^{2}(R)$ based on VLBI observations by Spangler and coworkers of compact radio sources against the solar wind for heliocentric distances $R \approx$ 10–50 $R_{\odot}$. We use the Coles & Harmon model for the inner scale $l_{i}(R)$, that is presumed to arise from cyclotron damping. In view of the prevalent uncertainty in the power law index that characterizes solar wind turbulence at various heliocentric distances, we retain this index as a free parameter. We find that the inclusion of spherical divergence effects suppresses the predicted source size substantially. We also find that inner scale effects significantly reduce the predicted source size. An important general finding for solar sources is that the calculat...

  3. Model of municipal solid waste source separation activity: a case study of Beijing.

    Science.gov (United States)

    Yang, Lei; Li, Zhen-Shan; Fu, Hui-Zhen

    2011-02-01

    One major challenge faced by Beijing is dealing with the enormous amount of municipal solid waste (MSW) generated, which contains a high percentage of food waste. Source separation is considered an effective means of reducing waste and enhancing recycling. However, few studies have focused on quantification of the mechanism of source separation activity. Therefore, this study was conducted to establish a mathematical model of source separation activity (MSSA) that correlates the source separation ratio with the following parameters: separation facilities, awareness, separation transportation, participation atmosphere, environmental profit, sense of honor, and economic profit. The MSSA consisted of two equations, one related to the behavior generation stage and one related to the behavior stability stage. The source separation ratios of the residential community, office building, and primary and middle school were calculated using the MSSA. Data for analysis were obtained from a 1-yr investigation and a questionnaire conducted at 128 MSW clusters around Beijing. The results revealed that office buildings had an initial separation ratio of 80% and a stable separation ratio of 65.86%, whereas residential communities and primary and middle schools did not have a stable separation ratio. The MSSA curve took on two shapes. In addition, internal motivations and the separation transportation ratio were found to be key parameters of the MSSA. This model can be utilized for other cities and countries.

  4. Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models

    Science.gov (United States)

    Al Hassan, Mohammad; Novack, Steven; Ring, Robert

    2016-01-01

    Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which it is used. In addition, some qualification of applicability of the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide one example for assigning environmental factor uncertainty when translating between operating environments for microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.

  5. Source apportionment of PAHs using Unmix model for Yantai costal surface sediments, China.

    Science.gov (United States)

    Lang, Yin-Hai; Yang, Wei

    2014-01-01

    Sixteen polycyclic aromatic hydrocarbons (PAHs) were measured in 20 surface sediments from the Yantai offshore area. The total PAH concentrations varied from 450.0 to 4,299.0 ng/g, with a mean of 2,492.9 ng/g. The high molecular weight (HMW) PAHs were most abundant, with fractions ranging from 54.9% to 81.6% across all sampling stations, indicating that pyrogenic sources were the predominant contribution to PAH pollution. The source contributions of PAHs were estimated with the EPA Unmix 6.0 receptor model. The data were well reproduced, with a high correlation coefficient between predicted and measured PAH concentrations (R(2) = 0.99). A mixed source of coal combustion and traffic pollution contributed 38.9% of the measured PAHs, followed by diesel emission (38.8%) and a mixed source of biomass combustion and gasoline engine emissions (22.3%). The current findings further validate that the Unmix model can be applied to apportion the sources of PAHs in sediments.
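
    Unmix itself is EPA receptor-modeling software, so as a stand-in the sketch below factorizes a stations-by-species PAH matrix with non-negative matrix factorization, which shares the basic receptor-model structure (data = contributions x profiles, all entries non-negative). The data are synthetic.

```python
# Receptor-model sketch via NMF (illustrative stand-in for Unmix).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
profiles = rng.random((3, 16))              # 3 sources x 16 PAH species
contrib = rng.random((20, 3))               # 20 stations x 3 sources
X = contrib @ profiles + 0.01 * rng.random((20, 16))

model = NMF(n_components=3, init="nndsvda", max_iter=1000)
W = model.fit_transform(X)                  # estimated source contributions
H = model.components_                       # estimated source profiles
share = W.sum(axis=0) / W.sum()             # bulk share of each source
print("estimated bulk source shares:", np.round(share, 3))
```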

  6. Determination of Original Infection Source of H7N9 Avian Influenza by Dynamical Model

    Science.gov (United States)

    Zhang, Juan; Jin, Zhen; Sun, Gui-Quan; Sun, Xiang-Dong; Wang, You-Ming; Huang, Baoxu

    2014-05-01

    H7N9, a virus newly emerging in China, circulates among poultry and humans. Although H7N9 has not caused massive outbreaks, its recurrence in the second half of 2013 makes it essential to control its spread. It is believed that the most effective control measure is to locate the original infection source and cut off the source of infection from humans. However, the original infection source and the internal transmission mechanism of the new virus are not totally clear. In order to determine the original infection source of H7N9, we establish a dynamical model with migratory bird, resident bird, domestic poultry and human populations, and treat migratory birds, resident birds and domestic poultry as the original infection source, respectively, to fit the true dynamics during the 2013 pandemic. By comparing the data-fitting results and the corresponding Akaike Information Criterion (AIC) values, we conclude that migratory birds are most likely the original infection source. In addition, we obtain the basic reproduction number in poultry and carry out a sensitivity analysis of some parameters.
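
    The selection logic is straightforward to emulate: fit compartment-model variants that seed infection in different bird populations and rank them by AIC. The three-compartment structure and all rates below are invented for illustration and are not the paper's fitted model.

```python
# Toy compartment model with alternative infection seeding, ranked by AIC.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta_bp, beta_ph, seed):
    mb, pb, hb = y                      # infected migratory birds, poultry, humans
    dmb = seed - 0.1 * mb               # external seeding in migratory birds
    dpb = beta_bp * mb + 0.2 * pb - 0.3 * pb
    dhb = beta_ph * pb - 0.2 * hb
    return [dmb, dpb, dhb]

t_obs = np.linspace(0, 60, 30)
rng = np.random.default_rng(5)
cases = 5 * np.exp(0.05 * t_obs) + rng.normal(0, 1, t_obs.size)  # fake data

def aic(theta):
    sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0],
                    t_eval=t_obs, args=tuple(theta))
    rss = np.sum((sol.y[2] - cases) ** 2)
    n, k = t_obs.size, len(theta)
    return n * np.log(rss / n) + 2 * k   # lower AIC = preferred variant

print("AIC, migratory-bird seeding:", round(aic([0.5, 0.1, 1.0]), 1))
print("AIC, no external seeding:  ", round(aic([0.5, 0.1, 0.0]), 1))
```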

  7. Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K. [Lloyd' s Register Consulting AB, Sundbyberg (Sweden)

    2013-10-15

    The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., the amount, composition, timing, and release path of released radionuclides). The output is a set of possible source terms with associated probabilities. One major issue addressed is the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast-running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method whereby experts' beliefs can be included in a systematic way when defining the conditional probability tables (CPTs) of the BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN.
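
    A miniature version of the BBN idea can be expressed with the pgmpy library (chosen here as an assumed stand-in; the project used its own tooling): observed plant states propagate through conditional probability tables to a posterior over pre-calculated source-term categories. Nodes, states, and numbers are invented for illustration.

```python
# Toy BBN: plant observables -> posterior over source-term categories.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("CoreDamage", "Release"),
                         ("ContainmentOK", "Release")])
model.add_cpds(
    TabularCPD("CoreDamage", 2, [[0.9], [0.1]]),
    TabularCPD("ContainmentOK", 2, [[0.05], [0.95]]),
    TabularCPD("Release", 3,                 # three source-term categories
               # columns: (CD=0,C=0) (CD=0,C=1) (CD=1,C=0) (CD=1,C=1)
               [[0.980, 0.990, 0.10, 0.60],  # ST1: negligible release
                [0.015, 0.009, 0.30, 0.35],  # ST2: filtered release
                [0.005, 0.001, 0.60, 0.05]], # ST3: large early release
               evidence=["CoreDamage", "ContainmentOK"],
               evidence_card=[2, 2]))

posterior = VariableElimination(model).query(
    ["Release"], evidence={"CoreDamage": 1, "ContainmentOK": 0})
print(posterior)
```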

  8. Noise optimization of the source follower of a CMOS pixel using BSIM3 noise model

    Science.gov (United States)

    Mahato, Swaraj; Meynants, Guy; Raskin, Gert; De Ridder, J.; Van Winckel, H.

    2016-07-01

    CMOS imagers are becoming increasingly popular in astronomy. A very low noise level is required to observe extremely faint targets and to obtain high-precision flux measurements. Although CMOS technology offers many advantages over CCDs, a major bottleneck is still the read noise. To move from an industrial CMOS sensor to one suitable for scientific applications, an improved design that optimizes the noise level is essential. Here, we study the 1/f and thermal noise performance of the source follower (SF) of a CMOS pixel in detail. We identify the relevant design parameters and analytically study their impact on the noise level using the BSIM3v3 noise model with an enhanced model of the gate capacitance. Our detailed analysis shows that minimizing the 1/f noise is not simply a matter of choosing the minimum channel length for the source follower, in contrast to the classical approach. We derive the optimal gate dimensions (width and length) of the source follower that minimize the 1/f noise, and validate our results using numerical simulations. When thermal (white) noise is considered along with the 1/f noise, the total input noise of the source follower depends on the capacitance ratio CG/CFD and the drain current (Id), where CG is the total gate capacitance of the source follower and CFD is the total floating diffusion capacitance at the input of the source follower. We demonstrate that the optimum gate capacitance (CG) depends on the chosen bias current but ranges from CFD/3 to CFD to achieve the minimum total noise of the source follower. Numerical calculations and circuit simulations with 180 nm CMOS technology are performed to validate our results.
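
    The capacitance-matching result quoted above (optimum CG between CFD/3 and CFD) can be reproduced with a crude numeric sweep: the input-referred charge noise is the gate-referred voltage noise divided by the signal attenuation CFD/(CFD+CG), with simple scaling assumptions for how the 1/f and thermal contributions depend on CG. All constants are arbitrary scale factors, not BSIM3 parameters.

```python
# Input-referred charge noise vs source-follower gate capacitance (sketch).
import numpy as np

CFD = 1.0
CG = np.linspace(0.05, 3.0, 600)

# gate-referred voltage noise powers (arbitrary units):
flicker = 1.0 / CG            # S_1/f ~ 1/(W*L*Cox) ~ 1/CG
white = 1.0 / np.sqrt(CG)     # S_th ~ 1/gm, gm ~ sqrt(W/L) at fixed Id

def q_noise(v_noise):
    """Charge noise at the floating diffusion, referred through the
    source-follower input attenuation CFD / (CFD + CG)."""
    return (CFD + CG) ** 2 * v_noise

print("1/f-limited optimum  CG/CFD:", round(CG[np.argmin(q_noise(flicker))], 2))
print("white-limited optimum CG/CFD:", round(CG[np.argmin(q_noise(white))], 2))
# -> approximately 1.0 and 0.33, i.e. the CFD/3 ... CFD range quoted above
```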

  9. Modeling tsunamis from earthquake sources near Gorringe Bank southwest of Portugal

    Science.gov (United States)

    Gjevik, B.; Pedersen, G.; Dybesland, E.; Harbitz, C. B.; Miranda, P. M. A.; Baptista, M. A.; Mendes-Victor, L.; Heinrich, P.; Roche, R.; Guesmia, M.

    1997-12-01

    The Azores-Gibraltar fracture zone, with the huge bathymetric reliefs in the area southwest of Portugal, is believed to have been the source of large historic tsunami events. This report describes simulations of tsunami generation and propagation from sources near the Gorringe Bank. The well-documented 1969 tsunami event is examined both with a ray-tracing technique and with finite difference models based on various shallow water equations. Both methods show that the most likely source location is southeast of the Gorringe Bank, near the epicenter location determined from seismic data. The tsunami source is calculated by the formulas given by Okada [1985] for the surface deformation of an elastic half-space caused by faulting. Observed wave amplitudes and travel times show acceptable agreement with values computed from an initial wave field based on the Okada [1985] formulas for most stations along the coasts of Portugal and Spain. However, in order to explain a large primary wave with downward displacement observed on the coast of Morocco, an alternative source model with a larger area of downward displacement has been introduced. This also leads to a better overall fit with the observed travel times. Implications for disastrous events, such as the one in 1755, are also discussed. Linear hydrostatic shallow water models are used for most of the simulations, but the importance of nonlinearity and dispersion is examined with the Boussinesq equations. The sensitivity of the solution to changes in the location and strength of the source is discussed, and a series of grid refinement studies is performed in order to assess the accuracy of the simulations.
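
    In the linear hydrostatic limit used for most of these simulations, the propagation stage reduces to a staggered-grid shallow-water scheme; the sketch below marches a 1-D channel of uniform depth, with the Okada-style initial uplift replaced by a Gaussian placeholder.

```python
# 1-D linear shallow-water propagation on a staggered grid (sketch).
import numpy as np

g, H = 9.81, 4000.0                    # gravity, uniform depth [m]
nx, dx = 2000, 2000.0                  # grid size and spacing [m]
dt = 0.5 * dx / np.sqrt(g * H)         # CFL-limited time step

x = np.arange(nx) * dx
eta = 1.5 * np.exp(-((x - 2.0e6) / 5e4) ** 2)   # initial uplift (placeholder
                                                # for the Okada deformation)
u = np.zeros(nx + 1)                   # velocities on staggered cell faces

for step in range(3000):
    u[1:-1] -= g * dt / dx * np.diff(eta)       # momentum equation
    eta -= H * dt / dx * np.diff(u)             # continuity equation
```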

  10. Non-spherical source-surface model of the heliosphere: a scalar formulation

    Directory of Open Access Journals (Sweden)

    M. Schulz

    Full Text Available The source-surface method offers an alternative to full MHD simulation of the heliosphere. It entails specification of a surface from which the solar wind flows normally outward along straight lines. Compatibility with MHD results requires this (source) surface to be non-spherical in general and prolate (aligned with the solar dipole axis) in prototypical axisymmetric cases. Mid-latitude features on the source surface thus map to significantly lower latitudes in the heliosphere. The model is usually implemented by deriving the B field (in the region surrounded by the source surface) from a scalar potential formally expanded in spherical harmonics, with coefficients chosen so as to minimize the mean-square tangential component of B over this surface. In the simplified (scalar) version, the quantity minimized is instead the variance of the scalar potential over the source surface. The scalar formulation greatly reduces the time required to compute the required matrix elements, while imposing essentially the same physical boundary condition as the vector formulation (viz., that the coronal magnetic field be, as nearly as possible, normal to the source surface for continuity with the heliosphere). The source surface proposed for actual application is a surface of constant B·r^k, where r is the heliocentric distance and B is the scalar magnitude of the B field produced by currents inside the Sun. Comparison with MHD simulations suggests that k ≈ 1.4 is a good choice for the adjustable exponent. This value has been shown to map the neutral line on the source surface during Carrington Rotation 1869 (May-June 1993) to a range of latitudes that would have just grazed the position of Ulysses during that month, in which sector structure disappeared from Ulysses' magnetometer observations.

  11. Probabilistic conditional reasoning: Disentangling form and content with the dual-source model.

    Science.gov (United States)

    Singmann, Henrik; Klauer, Karl Christoph; Beller, Sieghard

    2016-08-01

    The present research examines descriptive models of probabilistic conditional reasoning, that is, of reasoning from uncertain conditionals whose contents reasoners have rich background knowledge about. According to our dual-source model, two types of information shape such reasoning: knowledge-based information elicited by the contents of the material, and content-independent information derived from the form of the inferences. Two experiments implemented manipulations that selectively influenced the model parameters for the knowledge-based information, the relative weight given to form-based versus knowledge-based information, and the parameters for the form-based information, validating the psychological interpretation of these parameters. We apply the model to classical suppression effects, dissecting them into effects on background knowledge and effects on form-based processes (Exp. 3), and we use it to reanalyse previous studies manipulating reasoning instructions. In a model-comparison exercise based on data from seven studies, the dual-source model outperformed three Bayesian competitor models. Overall, our results support the view that people make use of background knowledge in line with current Bayesian models, but they also suggest that the form of the conditional argument, irrespective of its content, plays a substantive, yet smaller, role.
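
    The model's central quantity can be written in one line: the endorsement of an inference is a weighted mixture of a form-based component and a knowledge-based probability. The numerical values below are hypothetical illustrations, not fitted parameters.

```python
# Dual-source mixture of form-based and knowledge-based information.
def dual_source(lam, tau_form, p_knowledge):
    """P(accept) = lam * form-based component
                 + (1 - lam) * knowledge-based probability."""
    return lam * tau_form + (1.0 - lam) * p_knowledge

# e.g. an inference with high perceived form validity but a weak
# knowledge-based link between antecedent and consequent:
print(dual_source(lam=0.6, tau_form=0.95, p_knowledge=0.4))  # -> 0.73
```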

  12. TG-43 U1 based dosimetric characterization of model 67-6520 Cs-137 brachytherapy source

    Energy Technology Data Exchange (ETDEWEB)

    Meigooni, Ali S.; Wright, Clarissa; Koona, Rafiq A.; Awan, Shahid B.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo [Department of Radiation Medicine, North Shore University Hospital, 300 Community Drive, Manhasset, New York 11030 and Department of Radiation Medicine, University of Kentucky Chandler Medical Center, Lexington, Kentucky 40536-0084 (United States); Department of Radiation Medicine, University of Kentucky Chandler Medical Center, Lexington, Kentucky 40536-0084 (United States); Department of Radiation Physics, ERESA, Hospital General Universitario, Avenida Tres Cruces, 2, E-46014 Valencia (Spain); Department of Oncology, Physics Section, ''La Fe'' University Hospital, Avenida Campanar 21, E-46009 Valencia (Spain); Department of Atomic, Molecular and Nuclear Physics, University of Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Spain and Instituto de Fisica Corpuscular (IFIC), C/ Dr. Moliner 50, E-46100 Burjassot (Spain)

    2009-10-15

    Purpose: Brachytherapy treatment has been a cornerstone for management of various cancer sites, particularly for the treatment of gynecological malignancies. In low dose rate brachytherapy treatments, {sup 137}Cs sources have been used for several decades. A new {sup 137}Cs source design has been introduced (model 67-6520, source B3-561) by Isotope Products Laboratories (IPL) for clinical application. The goal of the present work is to implement the TG-43 U1 protocol in the characterization of the aforementioned {sup 137}Cs source. Methods: The dosimetric characteristics of the IPL {sup 137}Cs source are measured using LiF thermoluminescent dosimeters in a Solid Water phantom material and calculated using Monte Carlo simulations with the GEANT4 code in Solid Water and liquid water. The dose rate constant, radial dose function, and two-dimensional anisotropy function of this source model were obtained following the TG-43 U1 recommendations. In addition, the primary and scatter dose separation (PSS) formalism that could be used in convolution/superposition methods to calculate dose distributions around brachytherapy sources in heterogeneous media was studied. Results: The measured and calculated dose rate constants of the IPL {sup 137}Cs source in Solid Water were found to be 0.930({+-}7.3%) and 0.928({+-}2.6%) cGy h{sup -1} U{sup -1}, respectively. The agreement between these two methods was within our experimental uncertainties. The Monte Carlo calculated value in liquid water of the dose rate constant was {Lambda}=0.948({+-}2.6%) cGy h{sup -1} U{sup -1}. Similarly, the agreement between measured and calculated radial dose functions and the anisotropy functions was found to be within {+-}5%. In addition, the tabulated data that are required to characterize the source using the PSS formalism were derived. Conclusions: In this article the complete dosimetry of the newly designed {sup 137}Cs IPL source following the AAPM TG-43 U1 dosimetric protocol and the PSS

  13. Modeling the Flow Regime Near the Source in Underwater Gas Releases

    Institute of Scientific and Technical Information of China (English)

    Lakshitha T. Premathilake; Poojitha D. Yapa; Indrajith D. Nissanka; Pubudu Kumarage

    2016-01-01

    Recent progress in calculating gas bubble sizes in a plume, based on phenomenological approaches using the release conditions, is a significant improvement that makes gas plume models self-reliant. Such calculations require details of conditions Near the Source of the Plume (NSP), i.e., the plume/jet velocity and radius near the source, which inspired the present work. Determining NSP conditions for gas plumes is far more complex than for oil plumes due to the substantial density difference between gas and water. To calculate NSP conditions, modeling the early stage of the plume is important. A novel method of modeling the early stage of an underwater gas release is presented here. The major impact of the present work is to define the correct NSP conditions for underwater gas releases, which is not possible with available methods as those techniques are not based on the physics of the flow region near the source of the plume/jet. We introduce super-Gaussian profiles to model the density and velocity variations of the early stages of the plume, coupled with the laws of fluid mechanics to define the profile parameters. This new approach models the velocity profile variation from near-uniform across the section at the release point to Gaussian some distance away. The comparisons show that experimental data agree well with the computations.
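
The super-Gaussian idea is easy to picture: a single extra exponent interpolates between a top-hat and a Gaussian profile. A minimal sketch, assuming the common form u(r) = U exp(-(r/b)^n); the paper's exact parameterization may differ.

```python
import numpy as np

# Illustrative super-Gaussian velocity profile. n >> 2 gives a near-uniform
# (top-hat) profile, as at the release point; n = 2 recovers the Gaussian
# profile reached some distance downstream.
def super_gaussian(r, U, b, n):
    return U * np.exp(-(np.abs(r) / b) ** n)

r = np.linspace(-2.0, 2.0, 9)
print(super_gaussian(r, U=1.0, b=1.0, n=8))   # near top-hat near the source
print(super_gaussian(r, U=1.0, b=1.0, n=2))   # Gaussian farther downstream
```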

  14. Modeling and interpreting speckle pattern formation in swept-source optical coherence tomography (Conference Presentation)

    Science.gov (United States)

    Demidov, Valentin; Vitkin, I. Alex; Doronin, Alexander; Meglinski, Igor

    2017-03-01

    We report on the development of a unified Monte-Carlo-based computational model for exploring speckle pattern formation in swept-source optical coherence tomography (OCT). OCT is a well-established optical imaging modality capable of acquiring cross-sectional images of turbid media, including biological tissues, utilizing back-scattered low-coherence light. The obtained OCT images include characteristic features known as speckles. Currently, there is growing interest in OCT speckle patterns due to their potential application for quantitative analysis of a medium's optical properties. Here we consider the mechanisms of OCT speckle pattern formation for swept-source OCT approaches and introduce further developments of a Monte-Carlo-based model for simulation of OCT signals and images. The model takes into account the polarization and coherence properties of light, mutual interference of back-scattered waves, and their interference with the reference waves. We present a corresponding detailed description of the algorithm for modeling these light-medium interactions. The developed model is employed for generation of swept-source OCT images, analysis of OCT speckle formation, and interpretation of the experimental results. The obtained simulation results are compared with selected analytical solutions and experimental studies utilizing various sizes/concentrations of scattering microspheres.

  15. Modeling the flow regime near the source in underwater gas releases

    Science.gov (United States)

    Premathilake, Lakshitha T.; Yapa, Poojitha D.; Nissanka, Indrajith D.; Kumarage, Pubudu

    2016-12-01

    Recent progress in calculating gas bubble sizes in a plume, based on phenomenological approaches using the release conditions, is a significant improvement that makes gas plume models self-reliant. Such calculations require details of conditions Near the Source of the Plume (NSP), i.e., the plume/jet velocity and radius near the source, which inspired the present work. Determining NSP conditions for gas plumes is far more complex than for oil plumes due to the substantial density difference between gas and water. To calculate NSP conditions, modeling the early stage of the plume is important. A novel method of modeling the early stage of an underwater gas release is presented here. The major impact of the present work is to define the correct NSP conditions for underwater gas releases, which is not possible with available methods as those techniques are not based on the physics of the flow region near the source of the plume/jet. We introduce super-Gaussian profiles to model the density and velocity variations of the early stages of the plume, coupled with the laws of fluid mechanics to define the profile parameters. This new approach models the velocity profile variation from near-uniform across the section at the release point to Gaussian some distance away. The comparisons show that experimental data agree well with the computations.

  16. A Triple-energy-source Model for Superluminous Supernova iPTF13ehe

    Science.gov (United States)

    Wang, S. Q.; Liu, L. D.; Dai, Z. G.; Wang, L. J.; Wu, X. F.

    2016-09-01

    Almost all superluminous supernovae (SLSNe) whose peak magnitudes are ≲ -21 mag can be explained by the 56Ni-powered model, the magnetar-powered (highly magnetized pulsar) model, or the ejecta-circumstellar medium (CSM) interaction model. Recently, iPTF13ehe challenged these energy-source models, because the spectral analysis shows that ∼2.5 M⊙ of 56Ni have been synthesized, yet this is inadequate to power the peak bolometric emission of iPTF13ehe, while the rebrightening of the late-time light curve (LC) and the Hα emission lines indicate that the ejecta-CSM interaction must play a key role in powering the late-time LC. Here we propose a triple-energy-source model, in which a magnetar together with some amount (≲2.5 M⊙) of 56Ni may power the early LC of iPTF13ehe, while the late-time rebrightening can be quantitatively explained by an ejecta-CSM interaction. Furthermore, we suggest that iPTF13ehe is a genuine core-collapse supernova rather than a pulsational pair-instability supernova candidate. Further studies on similar SLSNe in the future would eventually shed light on their explosion and energy-source mechanisms.

  17. The S-Web Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, Spiro K.; Karpen, Judith T.; DeVore, C. Richard

    2012-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind has large angular width, up to 60 degrees, suggesting that its source extends far from the open-closed boundary. We describe a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices (the S-Web) and quasi-separatrix layers in the heliosphere. We discuss the dynamics of the S-Web model and its implications for present observations and for the upcoming observations from Solar Orbiter and Solar Probe Plus.

  18. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    Science.gov (United States)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

    Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape-calculus-based treatment of the sensitivities of the discontinuous source terms and a marching-cubes algorithm to track iso-surfaces of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  19. A data parsimonious model for capturing snapshots of groundwater pollution sources.

    Science.gov (United States)

    Chaubey, Jyoti; Kashyap, Deepak

    2017-02-01

    Presented herein is a data parsimonious model for identification of regional and local groundwater pollution sources at a reference time employing corresponding fields of head, concentration and its time derivative. The regional source flux, assumed to be uniformly distributed, is viewed as the causative factor for the widely prevalent background concentration. The localized concentration-excesses are attributed to flux from local sources distributed around the respective centroids. The groundwater pollution is parameterized by flux from regional and local sources, and distribution parameters of the latter. These parameters are estimated by minimizing the sum of squares of differences between the observed and simulated concentration fields. The concentration field is simulated by a numerical solution of the transient solute transport equation. The equation is solved assuming the temporal derivative term to be known a priori and merging it with the sink term. This strategy circumvents the requirement of dynamic concentration data. The head field is generated using discrete point head data employing a specially devised interpolator that controls the numerical-differentiation errors and simultaneously ensures micro-level mass balance. This measure eliminates the requirement of flow modeling without compromising the sanctity of the head field. The model, after due verification, has been illustrated employing available and simulated data from an area lying between the rivers Yamuna and Krishni in India.

  20. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results

    Science.gov (United States)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas

    2017-01-01

    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.

  1. A data parsimonious model for capturing snapshots of groundwater pollution sources

    Science.gov (United States)

    Chaubey, Jyoti; Kashyap, Deepak

    2017-02-01

    Presented herein is a data parsimonious model for identification of regional and local groundwater pollution sources at a reference time employing corresponding fields of head, concentration and its time derivative. The regional source flux, assumed to be uniformly distributed, is viewed as the causative factor for the widely prevalent background concentration. The localized concentration-excesses are attributed to flux from local sources distributed around the respective centroids. The groundwater pollution is parameterized by flux from regional and local sources, and distribution parameters of the latter. These parameters are estimated by minimizing the sum of squares of differences between the observed and simulated concentration fields. The concentration field is simulated by a numerical solution of the transient solute transport equation. The equation is solved assuming the temporal derivative term to be known a priori and merging it with the sink term. This strategy circumvents the requirement of dynamic concentration data. The head field is generated using discrete point head data employing a specially devised interpolator that controls the numerical-differentiation errors and simultaneously ensures micro-level mass balance. This measure eliminates the requirement of flow modeling without compromising the sanctity of the head field. The model, after due verification, has been illustrated employing available and simulated data from an area lying between the rivers Yamuna and Krishni in India.

  2. Finite line-source model for borehole heat exchangers. Effect of vertical temperature variations

    Energy Technology Data Exchange (ETDEWEB)

    Bandos, Tatyana V.; Fernandez, Esther; Santander, Juan Luis G.; Isidro, Jose Maria; Perez, Jezabel; Cordoba, Pedro J. Fernandez de [Instituto Universitario de Matematica Pura y Aplicada, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Montero, Alvaro; Urchueguia, Javier F. [Instituto de Ingenieria Energetica, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2009-06-15

    A solution to the three-dimensional finite line-source (FLS) model for borehole heat exchangers (BHEs) that takes into account the prevailing geothermal gradient and allows arbitrary ground surface temperature changes is presented. Analytical expressions for the average ground temperature are derived by integrating the exact solution over the line-source depth. A self-consistent procedure to evaluate the in situ thermal response test (TRT) data is outlined. The effective thermal conductivity and the effective borehole thermal resistance can be determined by fitting the TRT data to the time-series expansion obtained for the average temperature. (author)
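
For orientation, the baseline FLS solution (before the geothermal-gradient and surface-temperature extensions this paper adds) can be evaluated numerically with the usual mirror-image formulation. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def fls_temperature(r, z, t, q, k, alpha, H):
    """Baseline finite line-source temperature rise (Zeng-type form with a
    mirror-image sink above the ground surface); the cited paper adds the
    geothermal gradient and time-varying surface temperature on top of this.
    q: heat rate per unit length [W/m], k: conductivity [W/m/K],
    alpha: diffusivity [m^2/s], H: borehole depth [m]."""
    def integrand(h):
        d1 = np.sqrt(r**2 + (z - h)**2)
        d2 = np.sqrt(r**2 + (z + h)**2)   # image sink enforces T' = 0 at z = 0
        s = 2.0 * np.sqrt(alpha * t)
        return erfc(d1 / s) / d1 - erfc(d2 / s) / d2
    val, _ = quad(integrand, 0.0, H)
    return q / (4.0 * np.pi * k) * val

# Temperature rise at r = 0.06 m, mid-depth, after 30 days (made-up values):
print(fls_temperature(r=0.06, z=50.0, t=30 * 86400, q=50.0, k=2.5,
                      alpha=1e-6, H=100.0))
```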

  3. Evaluation of HPGe detector efficiency for point sources using virtual point detector model

    Energy Technology Data Exchange (ETDEWEB)

    Mohammadi, M.A. [Department of Physics, Faculty of Science, University of Isfahan, Isfahan 81747-73441 (Iran, Islamic Republic of); Abdi, M.R., E-mail: r.abdi@phys.ui.ac.i [Department of Physics, Faculty of Science, University of Isfahan, Isfahan 81747-73441 (Iran, Islamic Republic of); Kamali, M., E-mail: m.kamali@chem.ui.ac.i [Department of Nuclear Engineering, Faculty of Advanced Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of); Chemical Processes Research Department, Engineering Research Center, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of); Mostajaboddavati, M.; Zare, M.R. [Department of Physics, Faculty of Science, University of Isfahan, Isfahan 81747-73441 (Iran, Islamic Republic of)

    2011-02-15

    The concept of a virtual point detector (VPD) has been developed and validated in the past for Ge(Li) and HPGe detectors. In the present research, a new semi-empirical equation involving photon energy and source-virtual point detector distance for the efficiency of point sources measured by HPGe detectors is introduced, which is based on the VPD model. The efficiencies calculated by this equation for both coaxial and off-axis geometries are in good agreement with experimental data. The estimated uncertainties are less than 4%.

  4. Kinetic modeling of particle dynamics in H{sup −} negative ion sources (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Hatayama, A., E-mail: akh@ppl.appi.keio.ac.jp; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T. [Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522 (Japan); Miyamoto, K. [Naruto University of Education, 748 Nakashima, Takashima, Naruto-cho, Naruto-shi, Tokushima 772-8502 (Japan); Fukano, A. [Monozukuri Department, Tokyo Metropolitan College of Industrial Technology, Shinagawa, Tokyo 140-0011 (Japan); Mizuno, T. [Department of Management Science, College of Engineering, Tamagawa University, Machida, Tokyo 194-8610 (Japan)

    2014-02-15

    Progress in the kinetic modeling of particle dynamics in H{sup −} negative ion source plasmas and comparisons with experiments are reviewed and discussed with some new results. The main focus is placed on the following two topics, which are important for the research and development of large negative ion sources and high-power H{sup −} ion beams: (i) effects of non-equilibrium features of the EEDF (electron energy distribution function) on H{sup −} production, and (ii) extraction physics of H{sup −} ions and beam optics.

  5. Sources and Sinks: A Stochastic Model of Evolution in Heterogeneous Environments

    Science.gov (United States)

    Hermsen, Rutger; Hwa, Terence

    2010-12-01

    We study evolution driven by spatial heterogeneity in a stochastic model of source-sink ecologies. A sink is a habitat where mortality exceeds reproduction so that a local population persists only due to immigration from a source. Immigrants can, however, adapt to conditions in the sink by mutation. To characterize the adaptation rate, we derive expressions for the first arrival time of adapted mutants. The joint effects of migration, mutation, birth, and death result in two distinct parameter regimes. These results may pertain to the rapid evolution of drug-resistant pathogens and insects.

  6. A Mathematical Calculation Model Using Biomarkers to Quantitatively Determine the Relative Source Proportion of Mixed Oils

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    It is difficult to identify the source(s) of mixed oils from multiple source rocks, and in particular the relative contribution of each source rock. Artificial mixing experiments using typical crude oils and ratios of different biomarkers show that the relative contribution changes non-linearly when two oils with different concentrations of biomarkers mix with each other. This may lead to an incorrect conclusion if ratios of biomarkers and a simple binary linear equation are used to calculate the contribution proportion of each end-member to the mixed oil. The changes of biomarker ratios with the mixing proportion of end-member oils in the trinal (three-member) mixing model are more complex than in the binary mixing model. When four or more oils mix, the contribution proportion of each end-member oil to the mixed oil cannot be calculated using biomarker ratios and a simple formula. Artificial mixing experiments on typical oils reveal that the absolute concentrations of biomarkers in the mixed oil change linearly with the mixing proportion of each end-member, and mathematical inference verifies such linear changes. Some mathematical calculation methods that use the absolute concentrations or ratios of biomarkers to quantitatively determine the proportion of each end-member in the mixed oils are deduced from the results of the artificial experiments and by theoretical inference. The ratio of two biomarker compounds changes as a hyperbola with the mixing proportion in the binary mixing model, as a hyperboloid in the trinal mixing model, and as a hypersurface when more than three end-members mix. The mixing proportion of each end-member can be quantitatively determined with these mathematical models, using the absolute concentrations and the ratios of biomarkers. The mathematical calculation model is more economical, convenient, accurate and reliable than conventional artificial mixing methods.
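
The key contrast, linear mixing of absolute concentrations versus hyperbolic mixing of their ratios, is easy to reproduce numerically. A sketch with made-up end-member concentrations:

```python
import numpy as np

# Hypothetical end-member oils: absolute concentrations (ug/g) of two
# biomarkers A and B. Values are illustrative, not from the paper.
oil1 = {"A": 120.0, "B": 30.0}
oil2 = {"A": 20.0,  "B": 90.0}

f = np.linspace(0.0, 1.0, 5)                    # mixing fraction of oil1
A_mix = f * oil1["A"] + (1 - f) * oil2["A"]     # linear in f
B_mix = f * oil1["B"] + (1 - f) * oil2["B"]     # linear in f
ratio = A_mix / B_mix                           # hyperbolic in f

# Inverting the linear relation recovers each end-member's contribution
# from a measured concentration; the ratio cannot be inverted this simply
# because it is non-linear in f.
f_recovered = (A_mix - oil2["A"]) / (oil1["A"] - oil2["A"])
print(ratio)          # non-linearly spaced values
print(f_recovered)    # [0.  0.25 0.5  0.75 1. ]
```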

  7. Model thermal response to minor radiative energy sources and sinks in the middle atmosphere

    Science.gov (United States)

    Fomichev, V. I.; Fu, C.; de Grandpré, J.; Beagley, S. R.; Ogibalov, V. P.; McConnell, J. C.

    2004-10-01

    This paper presents the thermal response of the Canadian middle atmosphere model (CMAM) to minor radiative energy sources and sinks. These include chemical heating, infrared (IR) H2O cooling, the sphericity effect in solar heating, and solar heating in the near-IR CO2 bands. All of these energy sources/sinks can be considered minor either in terms of their magnitude or in terms of the limited height region where they are of importance, or both. To examine the thermal response of the middle atmosphere, a version of the CMAM with an interactive gas-phase chemistry scheme has been used in a series of multiyear experiments for conditions of perpetual July. Each of the analyzed mechanisms may provide a noticeable contribution to the model energy balance that results in a statistically significant model response. The various forcing terms due to minor energy sources/sinks have different spatial and temporal distributions. Their magnitudes vary from tenths of a K d⁻¹ for the sphericity effect up to ∼10 K d⁻¹ for chemical heating, producing corresponding thermal responses of a few to about 20 K in the middle atmosphere. The model thermal response depends on the magnitude of the applied forcing but is not always local and can spread beyond the regions where the forcing terms are initially applied. On a globally averaged basis the local strength of the model response is nearly proportional to the magnitude of the small forcing terms but shows nonlinearity when forcing due to chemical heating exceeds ∼1 K d⁻¹ in the mesosphere. Accounting for the combined effects of the minor energy sources and sinks leads to better agreement between the model temperature field and observations.

  8. A predictive model for microbial counts on beaches where intertidal sand is the primary source.

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K; Solo-Gabriele, Helena M; Wang, John D; Fleming, Lora E

    2015-05-15

    Human health protection at recreational beaches requires accurate and timely information on microbiological conditions to issue advisories. The objective of this study was to develop a new numerical mass balance model for enterococci levels on nonpoint-source beaches. The significant advantage of this model is its easy implementation, and it provides a detailed description of the cross-shore distribution of enterococci that is useful for beach management purposes. The performance of the balance model was evaluated by comparing predicted exceedances of a beach advisory threshold value to field data and to a traditional regression model. Both the balance model and the regression equation predicted approximately 70% of the advisories correctly at the knee depth and over 90% at the waist depth. The balance model has the advantage over the regression equation in its ability to simulate spatiotemporal variations of microbial levels, and it is recommended for making more informed management decisions.
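
A single-cell caricature of such a mass balance shows the structure: a sand-derived loading term balanced against inactivation and cross-shore exchange. All parameter values below are hypothetical placeholders, not the paper's calibrated model.

```python
import numpy as np

# One-cell mass-balance sketch for enterococci in knee-depth water:
#   dC/dt = load - (k_decay + k_exchange) * C
def simulate(days=5, dt=0.01, load=5e3, k_decay=1.0, k_exchange=0.5, C0=0.0):
    """load: CFU/100mL per day supplied from intertidal sand (assumed),
    k_decay: solar inactivation rate [1/day] (assumed),
    k_exchange: dilution by cross-shore exchange [1/day] (assumed)."""
    n = int(days / dt)
    C = np.empty(n)
    C[0] = C0
    for i in range(1, n):
        C[i] = C[i - 1] + dt * (load - (k_decay + k_exchange) * C[i - 1])
    return C

C = simulate()
print(C[-1], "CFU/100mL vs. an advisory threshold of e.g. 104 CFU/100mL")
```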

  9. Modeling mass transport in aquifers: The distributed-source problem. Research report, July 1988-June 1990

    Energy Technology Data Exchange (ETDEWEB)

    Serrano, S.E.

    1990-08-01

    A new methodology is presented to model the time and space evolution of groundwater variables in a system of aquifers when certain components of the model, such as the geohydrologic information, the boundary conditions, or the magnitude and variability of the sources or physical parameters, are uncertain and defined in stochastic terms. This facilitates a more realistic statistical representation of groundwater flow and groundwater pollution forecasting for either the saturated or the unsaturated zone. The method is based on applications of modern mathematics to the solution of the resulting stochastic transport equations. The procedure exhibits considerable advantages over existing stochastic modeling techniques.

  10. Optimal Homotopy Asymptotic Solution for Exothermic Reactions Model with Constant Heat Source in a Porous Medium

    Directory of Open Access Journals (Sweden)

    Fazle Mabood

    2015-01-01

    Full Text Available Heat flow profiles are required for heat transfer simulation in each type of thermal insulation. Exothermic reaction models in a porous medium can cast the problems in the form of nonlinear ordinary differential equations. In this research, the driving-force model due to temperature gradients is considered. The governing equation of the model is reduced to an energy balance equation that provides the temperature profile in the conduction state with a constant heat source at steady state. The proposed optimal homotopy asymptotic method (OHAM) is used to compute the solutions of the exothermic reaction equation.
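
At steady state with a constant source, the energy balance in a slab reduces to k T'' + Q = 0, whose solution is parabolic. A numerical sketch with illustrative k and Q (the paper solves the problem with OHAM rather than a boundary-value solver), checked against the analytic profile:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Steady conduction with a constant heat source in a slab 0 <= x <= 1:
#   k * T'' + Q = 0,  T(0) = T(1) = 0.
# Analytic solution T(x) = Q/(2k) * x * (1 - x) serves as a check.
k, Q = 1.0, 10.0   # illustrative values

def ode(x, y):
    return np.vstack([y[1], np.full_like(x, -Q / k)])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)))
print(sol.sol(0.5)[0], Q / (2 * k) * 0.5 * 0.5)   # both ≈ 1.25 at x = 0.5
```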

  11. A Unified Impedance Model of Grid-Connected Voltage-Source Converters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Harnefors, Lennart; Blaabjerg, Frede

    2017-01-01

    This paper proposes a unified impedance model of grid-connected voltage-source converters for analyzing the dynamic influences of the Phase-Locked Loop (PLL) and current control. The mathematical relations between the impedance models in the different domains are first explicitly revealed by means of complex transfer functions and complex space vectors. A stationary (αβ-) frame impedance model is then proposed, which not only predicts the stability impact of the PLL, but also reveals its frequency coupling effect explicitly. Furthermore, the impedance shaping effect of the PLL on the current control...

  12. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias;

    2016-01-01

    Binaural cues for apparent source width (ASW) include interaural time differences (ITDs), interaural level differences (ILDs) and the interaural coherence (IC). To quantify their contribution to ASW, a functional model of ASW perception was exploited using the TWO!EARS auditory-front-end (AFE) toolbox. The model determines the left- and right-most boundaries of a sound source using a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model's performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeaker-based experiments. A robust model prediction of ASW was achieved using a cross...

  13. DEVELOPMENT OF A HEALTH SERVICE ADMINISTRATION APPLICATION MODEL FOR PUSKESMAS USING OPEN-SOURCE-BASED CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Honni

    2013-11-01

    This research develops a health service administration application model for Puskesmas (Indonesian community health centers) that utilizes cloud computing technology and a development architecture that is both modular and dynamic. The application model combines the benefits of open-source applications with a flexible design system. It also supports mobile devices to improve the quality of patient care. The web-based network structure allows both online operation and interconnection between institutions, and can be accessed anytime, anywhere, through mobile devices. The application model is also adapted to the business processes and administrative processes that exist in Puskesmas throughout Indonesia. Each model is also expected to be integrated to optimize efficiency and has been adapted to the service systems of the Dinas Kesehatan (regional health offices) and the Ministry of Health.

  14. Optimum load distribution between heat sources based on the Cournot model

    Science.gov (United States)

    Penkovskii, A. V.; Stennikov, V. A.; Khamisov, O. V.

    2015-08-01

    One of the widespread models of heat supply to consumers, represented in the "Single buyer" format, is considered. The proposed methodological basis for its description and investigation draws on principles of game theory, basic propositions of microeconomics, and models and methods of the theory of hydraulic circuits. The original mathematical model of a heat supply system operating under the "Single buyer" organizational structure yields a solution satisfying the market Nash equilibrium. The distinctive feature of the developed mathematical model is that, along with the problems traditionally solved within bilateral relations between heat energy sources and the heat consumer, it considers a network component with the inherent physicotechnical properties of the heat network and the business factors connected with the costs of producing and transporting heat energy. This approach makes it possible to determine the optimum loads of the heat energy sources. These loads meet the given heat energy demand of consumers subject to maximum profit for the heat energy sources and the formation of minimum heat network costs over a specified time. The practical search for the market equilibrium is illustrated by the example of a heat supply system with two heat energy sources operating on integrated heat networks. The mathematical approach to the solution search is represented graphically and illustrates computations based on a stepwise iteration procedure for optimizing the loads of the heat energy sources (Cournot's groping procedure), with the corresponding computation of the heat energy price for consumers.
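
The groping (tâtonnement) procedure amounts to alternating best responses until the loads stop changing. A stripped-down duopoly sketch with linear demand and made-up costs, omitting the heat-network component the paper adds:

```python
# Cournot "groping" iteration for two heat sources, a minimal sketch.
# Inverse demand p(Q) = a - b*Q and constant marginal costs c1, c2 are
# illustrative; the paper additionally includes heat-network costs.
a, b = 100.0, 1.0
c1, c2 = 10.0, 20.0

def best_response(c_own, q_other):
    # argmax over q of (a - b*(q + q_other) - c_own) * q
    return max(0.0, (a - c_own - b * q_other) / (2.0 * b))

q1 = q2 = 0.0
for _ in range(100):                 # stepwise iteration (Cournot groping)
    q1, q2 = best_response(c1, q2), best_response(c2, q1)

print(q1, q2, a - b * (q1 + q2))     # Nash loads and consumer heat price
# Analytic equilibrium for a check: q1 = (a - 2*c1 + c2)/(3b) = 33.33...,
# q2 = (a - 2*c2 + c1)/(3b) = 23.33..., price = 43.33...
```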

  15. Modelled isotopic fractionation and transient diffusive release of methane from potential subsurface sources on Mars

    Science.gov (United States)

    Stevens, Adam H.; Patel, Manish R.; Lewis, Stephen R.

    2017-01-01

    We calculate transport timescales of martian methane and investigate the effect of potential release mechanisms into the atmosphere using a numerical model that includes both Fickian and Knudsen diffusion. The incorporation of Knudsen diffusion, which improves on a Fickian description of transport given the low permeability of the martian regolith, means that transport timescales from sources collocated with a putative martian water table are very long, up to several million martian years. Transport timescales also mean that any temporally varying source process, even in the shallow subsurface, would not result in a significant, observable variation in atmospheric methane concentration since changes resulting from small variations in flux would be rapidly obscured by atmospheric transport. This means that a short-lived 'plume' of methane, as detected by Mumma et al. (2009) and Webster et al. (2014), cannot be reconciled with diffusive transport from any reasonable depth and instead must invoke alternative processes such as fracturing or convective plumes. It is shown that transport through the martian regolith will cause a significant change in the isotopic composition of the gas, meaning that methane release from depth will produce an isotopic signature in the atmosphere that could be significantly different than the source composition. The deeper the source, the greater the change, and the change in methane composition in both δ13C and δD approaches -1000 ‰ for sources at a depth greater than around 1 km. This means that signatures of specific sources, in particular the methane produced by biogenesis that is generally depleted in 13CH4 and CH3D, could be obscured. We find that an abiogenic source of methane could therefore display an isotopic fractionation consistent with that expected for biogenic source processes if the source was at sufficient depth. The only unambiguous inference that can be made from measurements of methane isotopes alone is a measured
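
Merging the two diffusion regimes is commonly done with the Bosanquet interpolation, 1/D_eff = 1/D_Fick + 1/D_Knudsen; whether the paper uses exactly this form is not stated in the abstract. A back-of-envelope sketch with assumed regolith parameters:

```python
import numpy as np

# Combined Fickian + Knudsen diffusivity via the Bosanquet interpolation.
# Pore radius, temperature and Fickian diffusivity are illustrative
# assumptions, not values from the paper.
R = 8.314          # J/mol/K
M = 0.016          # kg/mol, CH4
T = 210.0          # K, martian subsurface (assumed)
r_pore = 1e-8      # m, effective pore radius (assumed)
D_fick = 1e-5      # m^2/s, Fickian diffusivity in pore gas (assumed)

D_knudsen = (2.0 * r_pore / 3.0) * np.sqrt(8.0 * R * T / (np.pi * M))
D_eff = 1.0 / (1.0 / D_fick + 1.0 / D_knudsen)

L = 1000.0                            # m, source depth
t = L**2 / D_eff                      # crude diffusive timescale [s]
sec_per_mars_year = 668.6 * 88775.0   # sols per year * seconds per sol
print(D_eff, t / sec_per_mars_year)
# Porosity, tortuosity and adsorption (neglected in this sketch) lengthen
# the real timescale by orders of magnitude.
```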

  16. An open source hydroeconomic model for California's water supply system: PyVIN

    Science.gov (United States)

    Dogan, M. S.; White, E.; Herman, J. D.; Hart, Q.; Merz, J.; Medellin-Azuara, J.; Lund, J. R.

    2016-12-01

    Models help operators and decision makers explore and compare different management and policy alternatives, better allocate scarce resources, and predict the future behavior of existing or proposed water systems. Hydroeconomic models are useful tools to increase the benefits or decrease the costs of managing water. Bringing hydrology and economics together, these models provide a framework for different disciplines that share similar objectives. This work proposes a new model to evaluate operation and adaptation strategies under existing and future hydrologic conditions for California's interconnected water system. The model combines the network structure of CALVIN, a statewide optimization model for California's water infrastructure, with an open source solver written in the Python programming language. With the flexibility of the model, reservoir operations, including water supply and hydropower, groundwater pumping, and Delta water operations and requirements can now be better represented. Given time series of hydrologic inputs to the model, typical outputs include urban, agricultural and wildlife refuge water deliveries and shortage costs, conjunctive use of surface and groundwater systems, and insights into policy and management decisions, such as capacity expansion and groundwater management policies. Water market operations are also represented in the model, allocating water from lower-valued to higher-valued uses. PyVIN serves as a cross-platform, extensible model to evaluate system-wide water operations. PyVIN separates data from the model structure, enabling the model to be easily applied to other parts of the world where water is a scarce resource.
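
At its core, a CALVIN-style hydroeconomic model is a minimum-cost network-flow program. A toy allocation in that spirit, with hypothetical demands and shortage penalties (not PyVIN's actual data or API):

```python
from scipy.optimize import linprog

# One reservoir supplies an agricultural and an urban demand; shortage in
# each sector carries a made-up economic penalty per unit of water.
# Decision variables: [ag_delivery, urban_delivery, ag_short, urban_short]
supply = 100.0
ag_demand, urban_demand = 80.0, 50.0
ag_penalty, urban_penalty = 1.0, 3.0   # $/unit shortage (illustrative)

c = [0.0, 0.0, ag_penalty, urban_penalty]       # minimize shortage cost
A_ub = [[1.0, 1.0, 0.0, 0.0]]                   # deliveries <= supply
b_ub = [supply]
A_eq = [[1.0, 0.0, 1.0, 0.0],                   # delivery + shortage = demand
        [0.0, 1.0, 0.0, 1.0]]
b_eq = [ag_demand, urban_demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x)   # higher-valued urban use is met first: [50. 50. 30.  0.]
```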

  17. Evaluating environmental modeling and sampling data with biomarker data to identify sources and routes of exposure

    Science.gov (United States)

    Shin, Hyeong-Moo; McKone, Thomas E.; Bennett, Deborah H.

    2013-04-01

    Exposure to environmental chemicals results from multiple sources, environmental media, and exposure routes. Ideally, modeled exposures should be compared to biomonitoring data. This study compares the magnitude and variation of modeled polycyclic aromatic hydrocarbon (PAH) exposures resulting from emissions to outdoor and indoor air with estimated exposure inferred from biomarker levels. Outdoor emissions result in both inhalation and food-based exposures. We modeled PAH intake doses using U.S. EPA's 2002 National Air Toxics Assessment (NATA) county-level emissions data for outdoor inhalation, the CalTOX model for food ingestion (based on NATA emissions), and indoor air concentrations from field studies for indoor inhalation. We then compared the modeled intake with the measured urine levels of hydroxy-PAH metabolites from the 2001-2002 National Health and Nutrition Examination Survey (NHANES) as quantifiable human intake of the PAH parent compounds. Lognormal probability plots of modeled intakes and estimated intakes inferred from biomarkers suggest that a primary route of exposure to naphthalene, fluorene, and phenanthrene for the U.S. population is likely inhalation from indoor sources. For benzo(a)pyrene, the predominant exposure route is likely food ingestion resulting from multi-pathway transport and bioaccumulation due to outdoor emissions. Multiple routes of exposure are important for pyrene. We also considered the sensitivity of the predicted exposure to the proportion of the total naphthalene production volume emitted to the indoor environment. The comparison of PAH biomarkers with exposure variability estimated from models and sample data for various exposure pathways supports the conclusion that both indoor and outdoor models are needed to capture the sources and routes of exposure to environmental contaminants.

  18. Development of a plume-in-grid model for industrial point and volume sources: application to power plant and refinery sources in the Paris region

    Science.gov (United States)

    Kim, Y.; Seigneur, C.; Duclaux, O.

    2014-04-01

    Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosol (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. The investigation of the criteria for injecting

  19. Using Bayesian hierarchical models to better understand nitrate sources and sinks in agricultural watersheds.

    Science.gov (United States)

    Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan

    2016-11-15

    Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations: a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope

  20. Extracting Data from Disparate Sources for Agent-Based Disease Spread Models

    Directory of Open Access Journals (Sweden)

    M. Laskowski

    2012-01-01

    Full Text Available This paper presents a review and evaluation of real data sources relative to their role and applicability in an agent-based model (ABM) simulating respiratory infection spread across a large geographic area. The ABM is a spatial-temporal model inclusive of behavior and interaction patterns between individual agents. The agent behaviours in the model (movements and interactions) are fed by census/demographic data, integrated with real data from a telecommunication service provider (cellular records), traffic survey data, as well as person-to-person contact data obtained via a custom 3G smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion and the role of data in calibrating and validating ABMs. The data become real-world inputs into susceptible-exposed-infected-recovered (SEIR) disease spread models and their variants, thereby building credible and nonintrusive models to qualitatively model public health interventions at the population level.
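
For reference, the deterministic SEIR counterpart of such an ABM can be written in a few lines; the ABM's contribution is to replace the homogeneous-mixing term beta*S*I/N with the data-driven contact patterns described above. Parameter values here are illustrative only.

```python
from scipy.integrate import solve_ivp

N = 1e6
beta, sigma, gamma = 0.4, 1 / 3.0, 1 / 5.0   # transmission, 1/latency, 1/infectious

def seir(t, y):
    S, E, I, R = y
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

sol = solve_ivp(seir, (0, 200), [N - 10, 0, 10, 0], max_step=1.0)
print(f"peak infectious: {sol.y[2].max():.0f}, "
      f"final recovered: {sol.y[3][-1]:.0f}")
```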

  1. Scanning Health Information Sources: Applying and Extending the Comprehensive Model of Information Seeking.

    Science.gov (United States)

    Ruppel, Erin K

    2016-01-01

    Information scanning, or attention to information via incidental or routine exposure or browsing, is relatively less understood than information seeking. To (a) provide a more theoretical understanding of information scanning and (b) extend existing information seeking theory to information scanning, the current study used data from the National Cancer Institute's Health Information National Trends Survey to examine cancer information scanning using the comprehensive model of information seeking (CMIS). Consistent with the CMIS, health-related factors were associated with the information-carrier factor of trust, and health-related factors and trust were associated with attention to information sources. Some of these associations differed between entertainment-oriented sources, information-oriented sources, and the Internet. The current findings provide a clearer picture of information scanning and suggest future avenues of research and practice using the CMIS.

  2. Eastern oyster (Crassostrea virginica) δ15N as a bioindicator of nitrogen sources: Observations and modeling

    Science.gov (United States)

    Fertig, B.; Carruthers, T.J.B.; Dennison, W.C.; Fertig, E.J.; Altabet, M.A.

    2013-01-01

    Stable nitrogen isotopes (δ15N) in bioindicators are increasingly employed to identify nitrogen sources in many ecosystems and biological characteristics of the eastern oyster (Crassostrea virginica) make it an appropriate species for this purpose. To assess nitrogen isotopic fractionation associated with assimilation and baseline variations in oyster mantle, gill, and muscle tissue δ15N, manipulative fieldwork in Chesapeake Bay and corresponding modeling exercises were conducted. This study (1) determined that five individuals represented an optimal sample size; (2) verified that δ15N in oysters from two locations converged after shared deployment to a new location reflecting a change in nitrogen sources; (3) identified required exposure time and temporal integration (four months for muscle, two to three months for gill and mantle); and (4) demonstrated seasonal δ15N increases in seston (summer) and oysters (winter). As bioindicators, oysters can be deployed for spatial interpolation of nitrogen sources, even in areas lacking extant populations. PMID:20381097

  3. Modelling of a laser-pumped light source for endoscopic surgery

    Science.gov (United States)

    Nadeau, Valerie J.; Elson, Daniel S.; Hanna, George B.; Neil, Mark A. A.

    2008-09-01

    A white light source, based on illumination of a yellow phosphor with a fibre-coupled blue-violet diode laser, has been designed and built for use in endoscopic surgery. This narrow light probe can be integrated into a standard laparoscope or inserted into the patient separately via a needle. We present a Monte Carlo model of light scattering and phosphorescence within the phosphor/silicone matrix at the probe tip, and measurements of the colour, intensity, and uniformity of the illumination. Images obtained under illumination with this light source are also presented, demonstrating the improvement in illumination quality over existing endoscopic light sources. This new approach to endoscopic lighting has the advantages of compact design, improved ergonomics, and more uniform illumination in comparison with current technologies.

  4. Electromagnetic, complex image model of a large area RF resonant antenna as inductive plasma source

    Science.gov (United States)

    Guittienne, Ph; Jacquier, R.; Howling, A. A.; Furno, I.

    2017-03-01

    A large area antenna generates a plasma by both inductive and capacitive coupling; it is an electromagnetically coupled plasma source. In this work, experiments on a large area planar RF antenna source are interpreted in terms of a multi-conductor transmission line coupled to the plasma. This electromagnetic treatment includes mutual inductive coupling using the complex image method, and capacitive matrix coupling between all elements of the resonant network and the plasma. The model reproduces antenna input impedance measurements, with and without plasma, on a 1.2 × 1.2 m² antenna used for large area plasma processing. Analytic expressions are given, and results are obtained by computation of the matrix solution. This method could be used to design planar inductive sources in general, by applying the termination impedances appropriate to each antenna type.

  5. A New Lattice Bhatnagar-Gross-Krook Model for the Convection-Diffusion Equation with a Source Term

    Institute of Scientific and Technical Information of China (English)

    DENG Bin; SHI Bao-Chang; WANG Guang-Chao

    2005-01-01

    A new lattice Bhatnagar-Gross-Krook (LBGK) model for the convection-diffusion equation with a source term is proposed. Unlike previously proposed models, the present model does not require any additional assumption on the source term. Numerical results are found to be in excellent agreement with the analytical solutions. It is also found that the numerical accuracy of the model is much better than that of the existing models.
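
A conventional LBGK treatment of the convection-diffusion equation with a source term, the textbook w_i*S forcing that the proposed model improves upon, looks as follows on a D1Q2 lattice. This sketch shows the baseline scheme, not the paper's new model.

```python
import numpy as np

# D1Q2 lattice-BGK solver for the 1-D convection-diffusion equation
#   d(phi)/dt + u d(phi)/dx = D d2(phi)/dx2 + S
# with a simple w_i*S forcing term; periodic boundaries.
nx, steps = 200, 1000
u, tau = 0.05, 0.8              # advection velocity, relaxation time
D = tau - 0.5                   # lattice diffusivity (cs^2 = 1 for D1Q2)
w, e = np.array([0.5, 0.5]), np.array([1, -1])

phi = np.zeros(nx)
S = np.zeros(nx); S[nx // 2] = 1e-3           # constant point source
g = w[:, None] * phi[None, :]                 # start at equilibrium

for _ in range(steps):
    geq = w[:, None] * phi[None, :] * (1.0 + e[:, None] * u)
    g += -(g - geq) / tau + w[:, None] * S[None, :]   # collide + source
    g[0] = np.roll(g[0], 1)                           # stream right
    g[1] = np.roll(g[1], -1)                          # stream left
    phi = g.sum(axis=0)

print(phi.max(), phi.argmax())   # plume drifts downstream of the source
```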

  6. Comprehensive model-based prediction of micropollutants from diffuse sources in the Swiss river network

    Science.gov (United States)

    Strahm, Ivo; Munz, Nicole; Braun, Christian; Gälli, René; Leu, Christian; Stamm, Christian

    2014-05-01

    Water quality in the Swiss river network is affected by many micropollutants from a variety of diffuse sources. This study compares, for the first time and in a comprehensive manner, the diffuse sources and the substance groups that contribute the most to water contamination in Swiss streams, and highlights the major regions for water pollution. For this, a simple but comprehensive model was developed to estimate emissions from diffuse sources for the entire Swiss river network of 65 000 km. Based on emission factors, the model calculates catchment-specific losses to streams for more than 15 diffuse sources (such as crop land, grassland, vineyards, fruit orchards, roads, railways, facades, roofs, green space in urban areas, landfills, etc.) and more than 130 different substances from 5 substance groups (pesticides, biocides, heavy metals, human drugs, animal drugs). For more than 180 000 stream sections, estimates of mean annual pollutant loads and mean annual concentration levels were modeled. These data were validated against a set of monitoring data and evaluated based on annual average environmental quality standards (AA-EQS). Model validation showed that the estimated mean annual concentration levels are within the range of measured data, so the simulations were considered adequately robust for identifying the major sources of diffuse pollution. The analysis showed that widespread pollution of streams can be expected in Switzerland. Along more than 18 000 km of the river network, at least one simulated substance has a concentration exceeding its AA-EQS; in individual stream sections it can be more than 50 different substances. Moreover, the simulations showed that in two-thirds of small streams (Strahler order 1 and 2) at least one AA-EQS is always exceeded. The highest numbers of substances exceeding the AA-EQS occur in areas with large fractions of arable cropping, vineyards and fruit orchards. Urban areas are also of concern even without considering

  7. A comparison of PCA and PMF models for source identification of fugitive methane emissions

    Science.gov (United States)

    Assan, Sabina; Baudic, Alexia; Bsaibes, Sandy; Gros, Valerie; Ciais, Philippe; Staufer, Johannes; Robinson, Rod; Vogel, Felix

    2017-04-01

    Methane (CH4) is a greenhouse gas with a global warming potential 28-32 times that of carbon dioxide (CO2) over a 100-year period, and even greater on shorter timescales [Etminan et al., 2016; Allen, 2014]. Thus, despite its relatively short lifetime and smaller emission quantities compared to CO2, CH4 emissions contribute approximately 20% of today's anthropogenic greenhouse gas warming [Kirschke et al., 2013]. Major anthropogenic sources include livestock (enteric fermentation), oil and gas production and distribution, landfills, and wastewater emissions [EPA, 2011]. Especially in densely populated areas, multiple CH4 sources can be found in close vicinity. Thus, when measuring CH4 emissions at local scales it is necessary to distinguish between different CH4 source categories to effectively quantify the contribution of each sector and aid the implementation of greenhouse gas reduction strategies. To this end, source apportionment models can be used to aid the interpretation of spatial and temporal patterns in order to identify and characterise emission sources. The focus of this study is to evaluate two common linear receptor models, namely Principal Component Analysis (PCA) and Positive Matrix Factorisation (PMF), for CH4 source apportionment. The statistical models I will present combine continuous in-situ CH4, C2H6 and δ13CH4 measurements from a Cavity Ring Down Spectroscopy (CRDS) instrument [Assan et al., 2016] with volatile organic compound (VOC) observations performed using Gas Chromatography (GC) in order to explain the underlying variance of the data. The strengths and weaknesses of both models are identified for data collected in multi-source environments in the vicinity of four different types of sites: an agricultural farm with cattle, a natural gas compressor station, a wastewater treatment plant, and a peri-urban location in the Ile de France region impacted by various sources. To conclude, receptor model results to separate statistically the
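
PMF is closely related to non-negative matrix factorization (NMF), with the addition of uncertainty weighting on the residuals. A sketch contrasting PCA and plain NMF on a synthetic two-source mixture; the source profiles below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

# Synthetic two-source mixture: each row is an observation of
# [CH4, C2H6, VOC] enhancements. PMF proper additionally weights
# residuals by measurement uncertainty, which plain NMF does not.
rng = np.random.default_rng(0)
profiles = np.array([[1.0, 0.08, 0.01],    # "gas leak"-like source (assumed)
                     [1.0, 0.00, 0.30]])   # "farm/traffic"-like source (assumed)
activity = rng.uniform(0, 1, size=(300, 2))
X = activity @ profiles + rng.normal(0, 0.005, size=(300, 3))

pca_scores = PCA(n_components=2).fit_transform(X)   # orthogonal components

nmf = NMF(n_components=2, init="nndsvda", max_iter=1000)
W = nmf.fit_transform(np.clip(X, 0, None))          # NMF needs non-negative data
print(nmf.components_)   # recovered source profiles (up to scaling/order)
```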

  8. a Model Analysis of the Spatial Distribution and Temporal Trends of Nitrous Oxide Sources and Sinks

    Science.gov (United States)

    Nevison, Cynthia Dale

    1994-01-01

    Nitrous oxide (N2O), an atmospheric trace gas that contributes to both greenhouse warming and stratospheric ozone depletion, is increasing at an annual rate of about 0.25%/yr. By use of a global model of the changing terrestrial nitrogen cycle, the timing and magnitude of this increase are shown to be consistent with enhanced microbial N2O production due to fertilizer, land clearing, livestock manure, and human sewage. Fertilizer appears to be a particularly important source. Increasing emissions from additional anthropogenic N2O sources, including fossil fuel combustion and nylon production, are also shown to coincide with and contribute to N2O's annual atmospheric increase. Collectively, these industrial, combustion-related, and enhanced microbial N2O emissions add up to a total anthropogenic source of about 5 Tg N/yr. Natural N2O emissions from microbial activity in soils and oceans and from natural fires are estimated to produce an annual source of about 11 Tg N/yr, of which the oceans contribute a substantially larger fraction than reported in most current budgets. In contrast to anthropogenic emissions, which are increasing rapidly, natural emissions are predicted to remain relatively constant from 1860 to 2050, although this prediction ignores possible enhancements in microbial N2O production due to global warming. Also in contrast to anthropogenic emissions, which are heavily dominated by the northern hemisphere, the natural source is fairly evenly distributed over the Earth. The predicted magnitude of the natural source is checked against an estimate of the N2O stratospheric sink, while the predicted present-day distribution of natural and anthropogenic sources is tested in a 3-dimensional transport model run. This run reproduces the observed 1 ppb interhemispheric gradient (higher in the north), and suggests that larger gradients may exist over strong continental source regions. Substantial increases in most anthropogenic N2O sources are

  9. Alternative 3D Modeling Approaches Based on Complex Multi-Source Geological Data Interpretation

    Institute of Scientific and Technical Information of China (English)

    李明超; 韩彦青; 缪正建; 高伟

    2014-01-01

    Due to the complex nature of multi-source geological data, it is difficult to rebuild every geological structure through a single 3D modeling method. The multi-source data interpretation method put forward in this analysis is based on a database-driven pattern and focuses on the discrete and irregular features of geological data. The geological data from a variety of sources covering a range of accuracy, resolution, quantity and quality are classified and integrated according to their reliability and consistency for 3D modeling. A new interpolation-approximation fitting construction algorithm for geological surfaces with the non-uniform rational B-spline (NURBS) technique is then presented. The NURBS technique can retain the balance among the requirements for accuracy, surface continuity and data storage of geological structures. Finally, four alternative 3D modeling approaches are demonstrated with reference to some examples, which are selected according to the data quantity and accuracy specification. The proposed approaches offer flexible modeling patterns for different practical engineering demands.

  10. Combining data sources to characterise climatic variability for hydrological modelling in high mountain catchments

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Bardossy, Andras; O'Donnell, Greg; Forsythe, Nathan

    2016-04-01

    Robust hydrological modelling of high mountain catchments to support water resources management depends critically on the accuracy of climatic input data. However, the hydroclimatological complexity and sparse measurement networks typically characteristic of these environments present significant challenges for determining the structure of spatial and temporal variability in key climatic variables. Focusing on the Upper Indus Basin (UIB), this research explores how different data sources can be combined in order to characterise climatic patterns and related uncertainties at the scales required in hydrological modelling. Analysis of local observations with respect to underlying climatic processes and variability is extended relative to previous studies in this region, which forms a basis for evaluating the domains of applicability and potential insights associated with selected remote sensing and reanalysis products. As part of this, the information content of recent high resolution simulations for understanding climatic patterns is assessed, with particular reference to the High Asia Refined Analysis (HAR). A strategy for integrating these different data sources to obtain plausible realisations of the distributed climatic fields needed for hydrological modelling is developed on the basis of this analysis, which provides a platform for exploring uncertainties arising from potential biases and other sources of error. The interaction between uncertainties in climatic input data and alternative approaches to process parameterisation in hydrological and cryospheric modelling is explored.

  11. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.

    Science.gov (United States)

    Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel

    2004-06-21

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.

  12. Three-dimensional neutron source models for toroidal fusion energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Slaybaugh, R.N. [Fusion Technology Institute, University of Wisconsin, 1500 Engineering Dr., Madison, WI 53706 (United States)], E-mail: slaybaugh@wisc.edu; Wilson, P.P.H. [Fusion Technology Institute, University of Wisconsin, 1500 Engineering Dr., Madison, WI 53706 (United States)], E-mail: wilsonp@engr.wisc.edu; El-Guebaly, L.A.; Marriott, E.P. [Fusion Technology Institute, University of Wisconsin, 1500 Engineering Dr., Madison, WI 53706 (United States)

    2009-06-15

    Developments in computer architecture and neutronics code capabilities have enabled high-resolution analysis of complex 3D geometries. Thus, accurately modeling 3D source distributions has become important for nuclear analyses. In this work two methods are described which generate and sample such 3D sources based directly on the plasma parameters of a fusion device and which facilitate the ability to update the neutron source following changes to the plasma physics configuration. The cylindrical mesh method is for toroidally symmetric machines and utilizes data in a standard file format which represents the poloidal magnetic flux on an R-Z grid. The conformal hexahedral mesh method takes plasma physics data generated in an idealized toroidal coordinate system and uses a Jacobian transformation and a functional expansion to generate the source. This work describes each methodology and associated test cases. The cylindrical mesh method was applied to ARIES-RS and the conformal hexahedral mesh method was applied to a uniform torus and ARIES-CS. The results of the test cases indicate that these improved source definitions can have important effects on pertinent engineering parameters, such as neutron wall loading, and should therefore be used for high-resolution nuclear analyses of all toroidal devices.
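
    As a sketch of how such a mesh-based source can be sampled, the snippet below builds birth probabilities on a cylindrical R-Z grid as reaction rate times toroidal cell volume and draws the toroidal angle uniformly; the grid and the Gaussian stand-in for the reaction rate are illustrative, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        R = np.linspace(4.0, 8.0, 41)          # m, hypothetical radial grid
        Z = np.linspace(-3.0, 3.0, 61)         # m, hypothetical vertical grid
        Rc, Zc = np.meshgrid(0.5 * (R[:-1] + R[1:]), 0.5 * (Z[:-1] + Z[1:]))

        rate = np.exp(-((Rc - 6.0) ** 2 + (Zc / 1.5) ** 2))  # stand-in reaction rate
        prob = rate * 2 * np.pi * Rc * np.diff(R)[0] * np.diff(Z)[0]  # x cell volume
        prob = (prob / prob.sum()).ravel()

        idx = rng.choice(prob.size, size=5, p=prob)   # sample 5 birth cells
        phi = rng.uniform(0, 2 * np.pi, 5)            # toroidal symmetry
        x, y = Rc.ravel()[idx] * np.cos(phi), Rc.ravel()[idx] * np.sin(phi)
        print(np.c_[x, y, Zc.ravel()[idx]])           # neutron birth positions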

  13. Source and destination memory in face-to-face interaction: A multinomial modeling approach.

    Science.gov (United States)

    Fischer, Nele M; Schult, Janette C; Steffens, Melanie C

    2015-06-01

    Arguing that people are often in doubt concerning to whom they have presented what information, Gopie and MacLeod (2009) introduced a new memory component, destination memory: remembering the destination of output information (i.e., "Who did you tell this to?"). They investigated source (i.e., "Who told you that?") versus destination memory in computer-based imagined interactions. The present study investigated destination memory in real interaction situations. In 2 experiments with mixed-gender (N = 53) versus same-gender (N = 89) groups, source and destination memory were manipulated by creating a setup similar to speed dating. In dyads, participants completed phrase fragments with personal information, taking turns. At recognition, participants decided whether fragments were new or old and, if old, whether they were listened to or spoken and which depicted person was the source or the destination of the information. A multinomial model was used for analyses. Source memory significantly exceeded destination memory, whereas information itself was better remembered in the destination than in the source condition. These findings corroborate the trade-off hypothesis: Context is better remembered in input than in output events, but information itself is better remembered in output than in input events. We discuss the implications of these findings for real-world conversation situations.

  14. Source apportionment based on an atmospheric dispersion model and multiple linear regression analysis

    Science.gov (United States)

    Fushimi, Akihiro; Kawashima, Hiroto; Kajihara, Hideo

    Understanding the contribution of each emission source of air pollutants to ambient concentrations is important to establish effective measures for risk reduction. We have developed a source apportionment method based on an atmospheric dispersion model and multiple linear regression analysis (MLR) in conjunction with ambient concentrations simultaneously measured at points in a grid network. We used a Gaussian plume dispersion model developed by the US Environmental Protection Agency called the Industrial Source Complex model (ISC) in the method. Our method does not require emission amounts or source profiles. The method was applied to the case of benzene in the vicinity of the Keiyo Central Coastal Industrial Complex (KCCIC), one of the biggest industrial complexes in Japan. Benzene concentrations were simultaneously measured from December 2001 to July 2002 at sites in a grid network established in the KCCIC and the surrounding residential area. The method was used to estimate benzene emissions from the factories in the KCCIC and from automobiles along a section of a road, and then the annual average contribution of the KCCIC to the ambient concentrations was estimated based on the estimated emissions. The estimated contributions of the KCCIC were 65% inside the complex, 49% at 0.5-km sites, 35% at 1.5-km sites, 20% at 3.3-km sites, and 9% at a 5.6-km site. The estimated concentrations agreed well with the measured values. The estimated emissions from the factories and the road were slightly larger than those reported in the first Pollutant Release and Transfer Register (PRTR). These results support the reliability of our method. This method can be applied to other chemicals or regions to achieve reasonable source apportionments.
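
    A minimal sketch of the regression step of such a method, assuming the dispersion factors relating unit emissions to receptor concentrations have already been computed with a Gaussian plume model such as ISC; all numbers are illustrative.

        import numpy as np

        # d[i, j]: concentration at receptor i per unit emission of source group j,
        # as produced by the dispersion model (hypothetical values, s/m^3)
        d = np.array([[2.1e-6, 0.4e-6],
                      [1.3e-6, 0.9e-6],
                      [0.5e-6, 1.6e-6]])
        c_measured = np.array([3.2, 2.9, 2.4])   # benzene at receptors (ug/m^3)

        # Least-squares estimate of the emission rates q of each source group
        q, *_ = np.linalg.lstsq(d, c_measured, rcond=None)
        contrib = d * q                          # per-source contribution per receptor
        share = contrib / contrib.sum(axis=1, keepdims=True)
        print("estimated emissions:", q)
        print("source shares per receptor:\n", share.round(2))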

  15. Two-Component Jet Models of Gamma-Ray Burst Sources

    CERN Document Server

    Peng, Fang; Konigl, Arieh; Granot, Jonathan

    2004-01-01

    Recent observational and theoretical studies have raised the possibility that the collimated outflows in gamma-ray burst (GRB) sources have two distinct components: a narrow (opening half-angle $\\theta_{\\rm n}$), highly relativistic (initial Lorentz factor $\\eta_{\\rm n} \\gtrsim 10^2$) outflow, from which the $\\gamma$-ray emission originates, and a wider ($\\theta_{\\rm w} \\lesssim 3 \\theta_{\\rm n}$), moderately relativistic ($\\eta_{\\rm w}\\sim 10$) surrounding flow. Using a simple synchrotron emission model, we calculate the R-band afterglow lightcurves expected in this scenario and derive algebraic expressions for the flux ratios of the emission from the two jet components at the main transition times in the lightcurve. We apply this model to GRB sources, for explaining the structure of afterglows and source energetics, as well as to X-ray flash sources, which we interpret as GRB jets viewed at an angle $\\theta_{\\rm obs} > \\theta_{\\rm n}$. Finally, we argue that a neutron-rich hydromagnetic outflow may naturally g...

  16. Investigation of solar wind source regions using Ulysses composition data and a PFSS model

    Science.gov (United States)

    Peleikis, Thies; Kruse, Martin; Berger, Lars; Drews, Christian; Wimmer-Schweingruber, Robert F.

    2016-03-01

    In this work we study the source regions for different solar wind types. While it is well known that the fast solar wind originates from inside Coronal Holes, the source regions for the slow solar wind are still under debate. For our study we use Ulysses compositional and plasma measurements and map them back to the solar corona. Here we use a potential field source surface model to model the coronal magnetic field. On the source surface we assign individual open field lines to the ballistic foot points of Ulysses. We do not only consider the photospheric origin of these field lines, but rather attempt to trace them across several height levels through the corona. We calculate the proximity of the field lines to the coronal hole border for every height level. The results are height profiles of these field lines. By applying velocity and charge state ratio filters to the height profiles, we can demonstrate that slow wind is produced close to the coronal hole border. In particular, we find that not only the proximity to the border matters, but also that the bending of the field lines with respect to the coronal hole border plays a crucial role in determining the solar wind type.
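
    The ballistic back-mapping step mentioned above can be sketched in a few lines: assuming a constant measured solar wind speed, the travel time from the source surface to the spacecraft fixes the longitude shift due to solar rotation. The constants and example values are illustrative, and sign conventions vary between implementations.

        import numpy as np

        OMEGA_SUN = 2.662e-6     # sidereal solar rotation rate (rad/s)
        R_SUN = 6.957e8          # m
        AU = 1.496e11            # m

        def backmap_longitude(lon_sc_deg, r_sc_m, v_sw_ms, r_ss_m=2.5 * R_SUN):
            """Carrington longitude of the ballistic foot point on the source surface."""
            travel_time = (r_sc_m - r_ss_m) / v_sw_ms
            return (lon_sc_deg + np.degrees(OMEGA_SUN * travel_time)) % 360.0

        # Example: slow wind (400 km/s) observed at 1.4 AU
        print(backmap_longitude(120.0, 1.4 * AU, 400e3))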

  17. A Model for the Origin of High Density in Looptop X-Ray Sources

    Science.gov (United States)

    Longcope, D. W.; Guidoni, S. E.

    2011-10-01

    Super-hot (SH) looptop sources, detected in some large solar flares, are compact sources of HXR emission with spectra matching thermal electron populations exceeding 30 MK. High observed emission measure (EM) and inference of electron thermalization within the small source region both provide evidence of high densities at the looptop, typically more than an order of magnitude above ambient. Where some investigators have suggested such density enhancement results from a rapid enhancement in the magnetic field strength, we propose an alternative model, based on Petschek reconnection, whereby looptop plasma is heated and compressed by slow magnetosonic shocks generated self-consistently through flux retraction following reconnection. Under steady conditions such shocks can enhance density by no more than a factor of four. These steady shock relations (Rankine-Hugoniot relations) turn out to be inapplicable to Petschek's model owing to transient effects of thermal conduction. The actual density enhancement can in fact exceed a factor of 10 over the entire reconnection outflow. An ensemble of flux tubes retracting following reconnection at an ensemble of distinct sites will have a collective EM proportional to the rate of flux tube production. This rate, distinct from the local reconnection rate within a single tube, can be measured separately through flare ribbon motion. Typical flux transfer rates and loop parameters yield EMs comparable to those observed in SH sources.
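
    The steady-shock bound quoted above follows from the Rankine-Hugoniot density jump, which for a gamma = 5/3 plasma saturates at a factor of four; the short sketch below makes that limit explicit and shows why transient conduction effects are needed to explain enhancements exceeding a factor of 10.

        # Rankine-Hugoniot density jump across a steady shock of Mach number M
        def density_jump(mach, gamma=5.0 / 3.0):
            return (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)

        for m in (1.5, 3.0, 10.0, 100.0):
            print(f"M = {m:6.1f}  ->  rho2/rho1 = {density_jump(m):.3f}")
        # The ratio approaches (gamma + 1)/(gamma - 1) = 4 as M -> infinity.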

  18. Characterization methods and modelling of ultracapacitors for use as peak power sources

    Energy Technology Data Exchange (ETDEWEB)

    Lajnef, W.; Vinassa, J.-M.; Briat, O.; Azzopardi, S.; Woirgard, E. [Laboratoire IXL CNRS UMR 5818 - ENSEIRB, Universite Bordeaux 1, 351 Cours de la Liberation, 33405 Talence Cedex (France)

    2007-06-01

    This paper proposes both a methodology to characterize ultracapacitors and a model of their electrical behaviour. Current levels, frequency intervals, and voltage ranges are adapted to ultracapacitor testing. The experimental data yield the performance of the ultracapacitors in terms of energy and power densities, quantify the dependence of capacitance on voltage, and support modelling of the dynamic behaviour of the device. An electric model is then proposed that takes into account the ultracapacitors' characteristics and their future use as a peak power source for hybrid and electric vehicles. Next, the parameter identification procedure is explained. Finally, the model validation, in both the frequency and time domains, proves the validity of this methodology and the performance of the proposed model. (author)
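
    As an illustration of one common behavioural model (not necessarily the authors' exact circuit), the sketch below integrates a constant-current charge of a cell with a series resistance and a voltage-dependent main capacitance C(v) = C0 + k*v; all parameter values are illustrative.

        import numpy as np

        R_S = 0.5e-3    # series resistance (ohm)
        C0 = 2600.0     # capacitance at 0 V (F)
        K = 500.0       # voltage dependence of the capacitance (F/V)
        I = 100.0       # constant charge current (A)

        dt, t_end = 0.01, 60.0
        v = 0.0
        for _ in np.arange(0.0, t_end, dt):
            v += I / (C0 + K * v) * dt        # dv/dt = i / C(v)
        v_terminal = v + R_S * I              # add the resistive drop
        print(f"cell voltage after {t_end:.0f} s at {I:.0f} A: {v_terminal:.2f} V")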

  19. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    Science.gov (United States)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify the experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively intensive but precise to fast and frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given the time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons, as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provide empirical support for the model.

  20. QSAR modeling: a new open source computational package to generate and validate QSAR models

    Directory of Open Access Journals (Sweden)

    João Paulo A. Martins

    2013-01-01

    QSAR modeling is a novel computer program developed to generate and validate QSAR or QSPR (quantitative structure-activity or structure-property relationship) models. With QSAR modeling, users can build partial least squares (PLS) regression models, perform variable selection with the ordered predictors selection (OPS) algorithm, and validate models by using y-randomization and leave-N-out cross validation. An additional new feature is outlier detection carried out by simultaneous comparison of sample leverage with the respective Studentized residuals. The program was developed using Java version 6, and runs on any operating system that supports Java Runtime Environment version 6. The use of the program is illustrated. This program is available for download at lqta.iqm.unicamp.br.
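
    The validation ideas named above can be sketched with scikit-learn's PLS as a stand-in (the original program is written in Java): a sound model should retain its cross-validated Q2 on the real response but lose it after y-randomization. The data here are synthetic.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 12))                  # hypothetical descriptors
        y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.3, size=40)

        pls = PLSRegression(n_components=3)
        q2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()

        # y-randomization: predictivity should collapse on shuffled responses
        q2_rand = np.mean([cross_val_score(pls, X, rng.permutation(y), cv=5,
                                           scoring="r2").mean() for _ in range(10)])
        print(f"Q2 = {q2:.2f}, Q2 after y-randomization = {q2_rand:.2f}")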

  1. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    Directory of Open Access Journals (Sweden)

    N. Evangeliou

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations refer to a data rescue attempt that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than previously published values. Of the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order
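
    The Bayesian inversion concept can be sketched as the posterior mode of a Gaussian linear inverse problem, assuming a precomputed source-receptor matrix that maps emissions per release segment onto the observations; the matrices and numbers below are illustrative stand-ins, not the study's actual setup.

        import numpy as np

        rng = np.random.default_rng(1)
        M = rng.uniform(0, 1, size=(50, 8))      # hypothetical source-receptor matrix
        x_prior = np.full(8, 10.0)               # first-guess emissions (PBq)
        x_true = x_prior * rng.uniform(0.5, 1.5, 8)
        y = M @ x_true + rng.normal(0, 0.5, 50)  # synthetic observations

        sigma_obs, sigma_prior = 0.5, 5.0
        # Posterior mode: minimise ||Mx - y||^2/s_o^2 + ||x - x_prior||^2/s_p^2
        A = M.T @ M / sigma_obs**2 + np.eye(8) / sigma_prior**2
        b = M.T @ y / sigma_obs**2 + x_prior / sigma_prior**2
        x_post = np.linalg.solve(A, b)
        print("posterior emissions:", x_post.round(2))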

  2. Development of Source-Receptor matrix over South Korea in support of GAINS-Korea model

    Science.gov (United States)

    Choi, K. C.; Woo, J. H.; Kim, H. K.; Lee, Y. M.; Kim, Y.; Heyes, C.; Lee, J. B.; Song, C. K.; Han, J.

    2014-12-01

    A comprehensive and combined analysis of air pollution and climate change could reveal important synergies of emission control measures, which could be of high policy relevance. IIASA's GAINS model (Greenhouse gas - Air pollution Interactions and Synergies) has been developed as a tool to identify emission control strategies that achieve given targets on air quality and greenhouse gas emissions at least cost. The GAINS-Korea model, which is being jointly developed by Konkuk University and IIASA, should play an important role in understanding the impact of air quality improvements across the regions of Korea. Source-receptor (S-R) relationships are a useful methodology in air pollution studies for determining the areas of origin of chemical compounds at a receptor point, and thus for targeting actions to reduce pollution. The GAINS model can assess the impact of emission reductions at sources on air quality in receptor regions based on an S-R matrix derived from a chemical transport model. In order to develop the S-R matrix for GAINS-Korea, the CAMx model with the PSAT/OSAT tools was applied in this study. The coarse domain covers East Asia, and a nested domain over the Korean peninsula was used as the main research area. To evaluate the S-R relationships, the modeling domain is divided into sixteen regions over South Korea, plus three countries outside South Korea (China, North Korea and Japan) for estimating transboundary contributions. The results of our analysis will be presented at the conference.

  3. Evaluation of source water protection strategies: a fuzzy-based model.

    Science.gov (United States)

    Islam, Nilufar; Sadiq, Rehan; Rodriguez, Manuel J; Francisque, Alex

    2013-05-30

    Source water protection (SWP) is an important step in the implementation of a multi-barrier approach that ensures the delivery of safe drinking water. Available decision-making models for SWP primarily use complex mathematical formulations that require large data sets to perform the analysis, which limits their use. Moreover, most of them cannot handle interconnection and redundancy among the parameters, or missing information. A fuzzy-based model is proposed in this study to overcome the above limitations. This model can estimate the reduction in pollutant loads achieved by selected SWP strategies (e.g., storm water management ponds, vegetated filter strips). The proposed model employs an export coefficient approach and accounts for the number of animals to estimate the pollutant loads generated by different land usages (e.g., agriculture, forests, highways, livestock, and pasture land). A water quality index is used for the assessment of water quality once these pollutant loads are discharged into the receiving waters. To demonstrate the application of the proposed model, a case study of Page Creek was performed in the Clayburn watershed (British Columbia, Canada). The results show that increasing urban development and poorly managed agricultural areas have the most adverse effects on source water quality. The proposed model can help decision makers to make informed decisions related to land use and resource allocation.
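
    The export coefficient step alone can be sketched as follows (the fuzzy aggregation layer is omitted); the coefficients, areas and livestock numbers are illustrative, not values from the paper.

        # Annual pollutant load = sum over land uses of (export coefficient x area),
        # plus a per-animal term for livestock
        export_coeff = {            # kg of pollutant per ha per year (illustrative)
            "agriculture": 1.20,
            "forest": 0.10,
            "highway": 0.90,
            "pasture": 0.50,
        }
        area_ha = {"agriculture": 850, "forest": 2300, "highway": 60, "pasture": 400}
        per_animal = 0.25           # kg/yr per head of livestock (illustrative)
        n_livestock = 1200

        load = sum(export_coeff[u] * area_ha[u] for u in export_coeff)
        load += per_animal * n_livestock
        print(f"annual pollutant load: {load:.0f} kg/yr")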

  4. Model of the heat source of the Cerro Prieto magma-hydrothermal system, Baja California, Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Elders, W.A.; Bird, D.K.; Williams, A.E.; Schiffman, P.; Cox, B.

    1982-08-10

    Earlier studies at Cerro Prieto by UCR led to the development of a qualitative model for fluid flow in the geothermal system before it was drilled and perturbed by production. Current efforts are directed towards numerical modelling of heat and mass transfer in the system in this undisturbed state. A two-dimensional model assumes that the heat source was a single basalt/gabbro intrusion which provided heat to the system as it cooled. After compiling various information on the physical properties of the reservoir, the enthalpy contained in two 1-cm-thick sections across the reservoir, orthogonal to each other, was calculated. Next, various shapes, sizes and depths of the intrusion were considered as initial and boundary conditions for the calculation of heat transfer. The family of numerical models which so far gives the best matches to the conditions observed in the field today has in common a funnel-shaped intrusion with a top 4 km wide emplaced at a depth of 5 km some 30,000 to 50,000 years ago, providing heat to the geothermal system. Numerical modelling is still in progress. Although none of the models computed so far may be a perfect match for the thermal history of the reservoir, they all indicate that the intrusive heat source is young, close and large.

  5. Open Knee: Open Source Modeling & Simulation to Enable Scientific Discovery and Clinical Care in Knee Biomechanics

    Science.gov (United States)

    Erdemir, Ahmet

    2016-01-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore the mechanical function of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor-intensive reproduction of model development steps can be avoided. Interested parties can immediately utilize readily available models for scientific discovery and for clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes detailed anatomical representation of the joint's major tissue structures, their nonlinear mechanical properties and interactions. Three use cases illustrate the customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators is also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next-generation knee models is noted. These are anticipated to incorporate individualized anatomy and tissue properties supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from varying age

  6. Mass balance source apportionment modeling of indoor air pollution exposures during the Ethiopian coffee ceremony.

    Science.gov (United States)

    Keil, Chris; Coleman, Quincy; Brown, Alex; Kassa, Hailu

    2014-01-01

    Mass balance modeling was used to apportion previously measured carbon monoxide and respirable particle exposures of women preparing coffee during Ethiopian coffee ceremonies. The coffee ceremony generates smoke indoors from the use of charcoal and incense. This creates inhalation exposures, particularly for the women preparing the coffee. Understanding the health risks associated with this practice will be improved with knowledge of the relative contribution to combustion byproduct exposures from the different sources. Source fingerprints were developed in the laboratory for carbon monoxide and respirable particle emissions from charcoal and incense. A mass balance model determined that the majority of the carbon monoxide exposures were from charcoal use and that the respirable particle exposures were approximately half from incense and half from charcoal. Efforts to decrease health risks from these exposures must be directed by Ethiopian cultural stakeholders who understand the exposure conditions, the health risks, and the societal context.
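
    The apportionment step can be sketched as a small linear system: with laboratory fingerprints for the CO and respirable-particle emissions of each fuel, the measured exposure vector is decomposed source by source. The fingerprint and exposure values below are illustrative.

        import numpy as np

        # Columns: charcoal, incense; rows: CO (ppm), respirable PM (mg/m^3)
        fingerprints = np.array([[30.0, 2.0],
                                 [0.8, 1.0]])
        measured = np.array([25.0, 1.4])      # time-weighted exposures

        weights = np.linalg.solve(fingerprints, measured)   # source activity weights
        contrib = fingerprints * weights                    # per-pollutant, per-source
        print("share of CO from charcoal:", round(contrib[0, 0] / measured[0], 2))
        print("share of PM from incense:", round(contrib[1, 1] / measured[1], 2))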

  7. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
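
    A minimal sketch of the "first order release with transport" idea: the release rate equals the leach-rate constant times the remaining inventory, with radioactive decay acting in parallel. Parameter values are illustrative.

        import numpy as np

        lam_leach = 0.05                 # user-specified leach rate (1/yr)
        lam_decay = np.log(2) / 30.0     # decay constant, e.g. a ~30 yr half-life
        inventory0 = 1.0e12              # initial activity in the contamination (Bq)

        t = np.linspace(0, 100, 101)     # yr
        inventory = inventory0 * np.exp(-(lam_leach + lam_decay) * t)
        release_rate = lam_leach * inventory        # Bq/yr leaving the source
        print(f"release rate at t = 10 yr: {release_rate[10]:.3e} Bq/yr")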

  8. Source Apportionment of Fine Particulate Matter in North China Plain based on Air Quality Modeling

    Science.gov (United States)

    Xing, J.; Wu, W.; Chang, X.; Wang, S.; Hao, J.

    2016-12-01

    Most Chinese cities in the North China Plain are suffering from serious air pollution. To develop regional air pollution control policies, we need to identify the major source contributions to such pollution and to design control policies that are accurate, efficient and effective. This study used an air quality model with several advanced techniques, including ISAM and ERSM, to assess the source contributions from individual pollutants (incl. SO2, NOx, VOC, NH3, primary PM), sectors (incl. power plants, industry, transportation and domestic), and regions (Beijing, Hebei, Tianjin and surrounding provinces). The modeling period covers two months in 2012, January and July, representing winter and summer, respectively. The non-linear relationship between air pollutant emissions and air quality will be addressed, and the integrated control of multi-pollutants and multi-regions in China will be suggested.

  9. A stochastic inventory management model for a dual sourcing supply chain with disruptions

    Science.gov (United States)

    Iakovou, Eleftherios; Vlachos, Dimitrios; Xanthopoulos, Anastasios

    2010-03-01

    As companies continue to globalise their operations and outsource a significant portion of their value chain activities, they often end up relying heavily on order replenishments from distant suppliers. The explosion in long-distance sourcing is exposing supply chains and shareholder value to ever-increasing operational and disruption risks. It is well established, both in academia and in real-world business environments, that resource flexibility is an effective method for hedging against supply chain disruption risks. In this contextual framework, we propose a single-period stochastic inventory decision-making model that can be employed to capture the trade-off between inventory policies and disruption risks for an unreliable dual-sourcing supply network, for both the capacitated and uncapacitated cases. Through the developed model, we obtain some important managerial insights and evaluate the merit of contingency strategies in managing uncertain supply chains.
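
    The flavour of such a model can be sketched as a single-period order split between a cheaper but disruption-prone supplier and a dearer reliable one, evaluated by Monte Carlo; this is a toy stand-in for the paper's model, and all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        price, c_offshore, c_local, salvage = 10.0, 4.0, 6.0, 1.0
        p_disruption = 0.15                    # offshore order lost entirely

        def expected_profit(q_off, q_loc, n=20000):
            demand = rng.normal(100, 25, n).clip(min=0)
            delivered = q_off * (rng.random(n) > p_disruption) + q_loc
            sales = np.minimum(demand, delivered)
            leftover = delivered - sales
            return (price * sales + salvage * leftover
                    - c_offshore * q_off - c_local * q_loc).mean()

        # Crude grid search over the two order quantities
        best = max(((q1, q2) for q1 in range(0, 161, 10) for q2 in range(0, 161, 10)),
                   key=lambda q: expected_profit(*q))
        print("best (offshore, local) order split:", best)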

  10. A transient MHD model applicable for the source of solar cosmic ray acceleration

    Science.gov (United States)

    Dryer, M.; Wu, S. T.

    1981-01-01

    A two-dimensional, time-dependent magnetohydrodynamic model is used to describe the possible mechanisms for the source of solar cosmic ray acceleration following a solar flare. The hypothesis is based on the propagation of fast mode MHD shocks following a sudden release of energy. In this presentation, the effects of initial magnetic topology and strength on the formation of MHD shocks have been studied. The plasma beta (thermal pressure/magnetic pressure) is considered as a measure of the initial, relative strength of the field. During dynamic mass motion, the Alfven Mach number is the more appropriate measure of the magnetic field's ability to control the outward motion. It is suggested that this model (computed self-consistently) provides the shock waves and the disturbed mass motion behind it as likely sources for solar cosmic ray acceleration.

  11. Modelling geosmin concentrations in three sources of raw water in Quebec, Canada.

    Science.gov (United States)

    Parinet, Julien; Rodriguez, Manuel J; Sérodes, Jean-Baptiste

    2013-01-01

    The presence of off-flavour compounds such as geosmin, often found in raw water, significantly reduces the organoleptic quality of distributed water and diverts the consumer from its use. To adapt water treatment processes to eliminate these compounds, it is necessary to be able to identify them quickly. Routine analysis could be considered a solution, but it is expensive, and the delays associated with obtaining analytical results are often long, a serious disadvantage. The development of decision-making tools such as predictive models seems to be an economic and feasible way to counterbalance the limitations of analytical methods. Among these tools, multi-linear regression and principal component regression are easy to implement. However, due to certain disadvantages inherent in these methods (multicollinearity or non-linearity of the processes), the use of emergent models involving artificial neural networks such as the multi-layer perceptron could prove to be an interesting alternative. In a previous paper (Parinet et al., Water Res 44: 5847-5856, 2010), the possible parameters that affect the variability of taste and odour compounds were investigated using principal component analysis. In the present study, we expand the research by comparing the performance of three tools using different modelling scenarios (multi-linear regression, principal component regression and multi-layer perceptron) to model geosmin in drinking water sources using 38 microbiological and physicochemical parameters. Three very different sources of water, in terms of quality, were selected for the study. These sources supply drinking water to the Québec City area (Canada) and its vicinity, and were monitored three times per month over a 1-year period. Seven different modelling methods were tested for predicting geosmin in these sources. The comparison of the seven different models showed that simple models based on multi-linear regression provide sufficient
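
    The model comparison described above can be sketched with scikit-learn stand-ins for multi-linear regression, principal component regression and a multi-layer perceptron; the data here are synthetic (38 predictors, as in the study), so the scores only illustrate the workflow.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(120, 38))              # 38 water-quality parameters
        y = 5 + X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.normal(0, 0.5, 120)

        models = {
            "MLR": LinearRegression(),
            "PCR": make_pipeline(StandardScaler(), PCA(10), LinearRegression()),
            "MLP": make_pipeline(StandardScaler(),
                                 MLPRegressor((16,), max_iter=5000, random_state=0)),
        }
        for name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: cross-validated R2 = {r2:.2f}")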

  12. A systematic literature review of open source software quality assessment models.

    Science.gov (United States)

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models have been proposed and are available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so they can be acceptable to practitioners, there is a need for clear discrimination of the existing models based on their specific properties. Based on this, the aim of this study is to perform a systematic literature review to investigate the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies in this regard. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected. To select these models we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study shows that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that the majority (47%) of the existing models do not specify any domain of application. In conclusion, our study will be a valuable contribution to the community and will help quality assessment model developers in formulating newer models, and also practitioners (software evaluators) in selecting suitable OSS in the midst of alternatives.

  13. Modelling and control of cholera on networks with a common water source.

    Science.gov (United States)

    Shuai, Zhisheng; van den Driessche, P

    2015-01-01

    A mathematical model is formulated for the transmission and spread of cholera in a heterogeneous host population that consists of several patches of homogeneous host populations sharing a common water source. The basic reproduction number ℛ0 is derived and shown to determine whether or not cholera dies out. Explicit formulas are derived for target/type reproduction numbers that measure the control strategies required to eradicate cholera from all patches.
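
    Numerically, a basic reproduction number of this kind is typically obtained as the spectral radius of the next-generation matrix F V^{-1} built from the infection compartments; the matrices in the sketch below are illustrative stand-ins, not the paper's model.

        import numpy as np

        F = np.array([[0.30, 0.00, 0.40],   # rates of new infections, including
                      [0.00, 0.25, 0.35],   # the shared-water transmission route
                      [0.20, 0.20, 0.00]])
        V = np.diag([0.20, 0.20, 0.10])     # recovery / pathogen decay rates

        R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
        print(f"R0 = {R0:.2f} -> cholera {'persists' if R0 > 1 else 'dies out'}")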

  14. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.

    Science.gov (United States)

    Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

    2017-07-20

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.
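
    A lightweight predictor in the spirit of such models can be sketched as a linear combination of the most recent daily profile and a slowly updated historical profile; the weights and synthetic profiles below are illustrative, not the published LINE-P coefficients.

        import numpy as np

        rng = np.random.default_rng(5)
        slots = 24                                   # hourly slots over one day
        profile = np.sin(np.linspace(0, np.pi, slots)).clip(min=0)  # solar shape

        def day(scale):                              # synthetic harvested energy
            return profile * scale + rng.normal(0, 0.02, slots)

        historical = profile                         # long-term average profile
        today, tomorrow = day(0.9), day(0.95)

        alpha = 0.7                                  # weight on the recent day
        prediction = alpha * today + (1 - alpha) * historical

        err = np.mean(np.abs(prediction - tomorrow)) / tomorrow.mean()
        print(f"relative mean absolute error: {100 * err:.1f}%")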

  15. Regensim – Matlab toolbox for renewable energy sources modelling and simulation

    Directory of Open Access Journals (Sweden)

    Cristian Dragoş Dumitru

    2011-12-01

    This paper deals with the implementation and development of a Matlab Simulink library named RegenSim, designed for modeling, simulation and analysis of real hybrid solar-wind-hydro systems connected to local grids. Blocks like wind generators, hydro generators, solar photovoltaic modules and accumulators are implemented. The main objective is the study of the hybrid power system behavior, which allows employing renewable, time-varying energy sources while providing a continuous supply.

  16. Sources of CP violation from E_6 inspired heterotic string model

    Energy Technology Data Exchange (ETDEWEB)

    Boussahel, M.; Mebarki, N. [Departement de physique Faculte des sciences Universite de M' sila 28000 (Algeria); Laboratoire de Physique Mathematique et Subatomique Mentouri University, Constantine (Algeria)

    2012-06-27

    Sources of weak CP violation from the SU_L(3) x SU_R(3) x SU_c(3) subgroup of the E_6 inspired heterotic string model are discussed. It is shown that the number of Cabibbo-Kobayashi-Maskawa-like matrices depends on the spontaneous breakdown of the E_6 gauge symmetry and/or supersymmetry.

  17. A probabilistic graphical model approach in 30 m land cover mapping with multiple data sources

    OpenAIRE

    Wang, Jie; Ji, Luyan; Huang, Xiaomeng; Fu, Haohuan; Xu, Shiming; Li, Congcong

    2016-01-01

    There is a trend to acquire high accuracy land-cover maps using multi-source classification methods, most of which are based on data fusion, especially pixel- or feature-level fusions. A probabilistic graphical model (PGM) approach is proposed in this research for 30 m resolution land-cover mapping with multi-temporal Landsat and MODerate Resolution Imaging Spectroradiometer (MODIS) data. Independent classifiers were applied to two single-date Landsat 8 scenes and the MODIS time-series data, ...

  18. Estimation of gaseous mercury emissions in Germany. Inverse modelling of source strengths at the contaminated industrial site BSL Werk Schkopau

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, O.; Ebinghaus, R.; Kock, H.H.; Richter-Politz, I.; Geilhufe, C.

    1998-12-31

    Anthropogenic emission sources of gaseous mercury at the contaminated industrial site BSL Werk Schkopau have been determined by measurements and numerical modelling applying a local dispersion model. The investigations are based on measurements from several field campaigns in the period between December 1993 and June 1994. The estimation of the source strengths was performed by inverse modelling, using the measurements as constraints for the dispersion model. Model experiments confirmed the applicability of the inverse modelling procedure for source strength estimation at BSL Werk Schkopau. At the factory premises investigated, the source strengths of four source areas, among them three closed chlor-alkali plants, one partly removed acetaldehyde factory and additionally one still-producing chlor-alkali factory, have been identified, with an approximate total gaseous mercury emission of less than 2.5 kg/day. (orig.)

  19. Open source large-scale high-resolution environmental modelling with GEMS

    Science.gov (United States)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunity to run environmental models for every location on Earth. Collecting the data necessary to do this and converting them into a useful format is very demanding, however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models at various scales directly in your browser and share the results with other researchers. GEMS is open-source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data include the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most at approximately 100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, time series data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models

  20. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves

    Science.gov (United States)

    Ripepe, M.; Barfucci, G.; de Angelis, S.; Delle Donne, D.; Lacanna, G.; Marchetti, E.

    2016-11-01

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere to form plumes rising several kilometers above eruptive vents, which can pose serious risks to human health and aviation even several thousand kilometers from the volcanic source. However, even the most sophisticated models of atmospheric and eruptive plume dynamics require input parameters such as the duration of the ejection phase and the total erupted mass to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models.

  1. Selective source blocking for Gamma Knife radiosurgery of trigeminal neuralgia based on analytical dose modelling

    Science.gov (United States)

    Li, Kaile; Ma, Lijun

    2004-08-01

    We have developed an automatic critical region shielding (ACRS) algorithm for Gamma Knife radiosurgery of trigeminal neuralgia. The algorithm selectively blocks 201 Gamma Knife sources to minimize the dose to the brainstem while irradiating the root entry area of the trigeminal nerve with 70-90 Gy. An independent dose model was developed to implement the algorithm. The accuracy of the dose model was tested and validated via comparison with the Leksell GammaPlan (LGP) calculations. Agreements of 3% or 3 mm in isodose distributions were found for both single-shot and multiple-shot treatment plans. After the optimized blocking patterns are obtained via the independent dose model, they are imported into the LGP for final dose calculations and treatment planning analyses. We found that the use of a moderate number of source plugs (30-50 plugs) significantly lowered (~40%) the dose to the brainstem for trigeminal neuralgia treatments. Considering the small effort involved in using these plugs, we recommend source blocking for all trigeminal neuralgia treatments with Gamma Knife radiosurgery.
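
    The idea of selective blocking can be sketched as a greedy choice: plug the sources with the worst brainstem-to-target dose ratio up to a plug budget. The per-source dose values below are random stand-ins for the beam dose kernels used in the actual planning system.

        import numpy as np

        rng = np.random.default_rng(11)
        n_sources = 201
        dose_target = rng.uniform(0.8, 1.2, n_sources)     # per-source target dose
        dose_brainstem = rng.lognormal(-3.0, 1.0, n_sources)

        ratio = dose_brainstem / dose_target
        blocked = np.argsort(ratio)[-40:]                  # plug the 40 worst sources
        keep = np.ones(n_sources, bool)
        keep[blocked] = False

        bs_red = 1 - dose_brainstem[keep].sum() / dose_brainstem.sum()
        tgt_kept = dose_target[keep].sum() / dose_target.sum()
        print(f"brainstem dose reduction: {100 * bs_red:.0f}%")
        print(f"target dose retained: {100 * tgt_kept:.0f}%")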

  2. Analytical modeling of Schottky tunneling source impact ionization MOSFET with reduced breakdown voltage

    Directory of Open Access Journals (Sweden)

    Sangeeta Singh

    2016-03-01

    In this paper, we have investigated a novel Schottky tunneling source impact ionization MOSFET (STS-IMOS) to lower the breakdown voltage of the conventional impact ionization MOS (IMOS) and developed an analytical model for the same. In STS-IMOS there is an accumulative effect of both impact ionization and source-induced barrier tunneling. The silicide source offers very low parasitic resistance, the outcome of which is an increase in the voltage drop across the intrinsic region for the same applied bias. This reduces the operating voltage and hence the device exhibits a significant reduction in both breakdown and threshold voltage. STS-IMOS shows high immunity against hot electron damage; as a result, device reliability increases markedly. The analytical model for the impact ionization current (I_ii) is developed based on the integration of the ionization integral (M). Similarly, to obtain the Schottky tunneling current (I_Tun) expression, the Wentzel-Kramers-Brillouin (WKB) approximation is employed. Analytical models for the threshold voltage and subthreshold slope are optimized against Schottky barrier height (ϕ_B) variation. The expression for the drain current is computed as a function of gate-to-drain bias via an integral expression. It is validated by comparison with technology computer-aided design (TCAD) simulation results as well. In essence, this analytical framework provides the physical background for a better understanding of STS-IMOS and its performance estimation.
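
    The two model ingredients named above can be sketched with toy profiles: the ionization integral M over the intrinsic region (breakdown when M reaches 1) and a WKB transmission through a triangular Schottky barrier; the profiles and parameter values are illustrative.

        import numpy as np
        from scipy.integrate import quad

        HBAR, Q, M0 = 1.055e-34, 1.602e-19, 9.11e-31

        def ionization_integral(alpha, length):
            """M = integral of alpha(x) dx; avalanche breakdown when M -> 1."""
            val, _ = quad(alpha, 0.0, length)
            return val

        def wkb_transmission(phi_b_ev, e_field, m_rel=0.2):
            """WKB tunnelling probability through a triangular barrier."""
            phi = phi_b_ev * Q
            m_eff = m_rel * M0
            return np.exp(-4.0 * np.sqrt(2.0 * m_eff) * phi**1.5
                          / (3.0 * HBAR * Q * e_field))

        alpha = lambda x: 1.0e6 * np.exp(-x / 50e-9)   # ionization coeff (1/m), toy
        print("M =", ionization_integral(alpha, 100e-9))
        print("T_WKB =", wkb_transmission(0.6, 1.0e8))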

  3. Modeling and analysis of a transcritical rankine power cycle with a low grade heat source

    DEFF Research Database (Denmark)

    Nguyen, Chan; Veje, Christian

    2011-01-01

    A transcritical carbon dioxide (CO2) Rankine power cycle has been analyzed based on the first and second laws of thermodynamics. Detailed simulations using distributed models for the heat exchangers have been performed in order to develop the performance characteristics in terms of, e.g., thermal efficiency, exergetic efficiency and specific net power output. A generic cycle configuration has been used for analysis of a geothermal energy heat source. This model has been validated against similar calculations using industrial waste heat as the energy source. Calculations are done with fixed conditions for the high side pressure. In addition, the results underline that the investment cost for additional heat exchange components such as an internal heat exchanger may be unprofitable in the case where the heat source is free.

  4. Identifying the Mysterious EGRET Sources Signatures of Polar Cap Pulsar Models

    CERN Document Server

    Baring, M G

    2001-01-01

    The advent of the next generation of gamma-ray experiments, led by GLAST, AGILE, INTEGRAL and a host of atmospheric Čerenkov telescopes coming on line in the next few years, will enable ground-breaking discoveries relating to the presently enigmatic set of EGRET/CGRO UID galactic sources that have yet to find definitive identifications. Pulsars are principal candidates for such sources, and many are expected to be detected by GLAST, some that are radio-selected, like most of the present EGRET/Comptel pulsars, and perhaps even more that are detected via independent pulsation searches. At this juncture, it is salient to outline the principal predictions of pulsar models that might aid identification of gamma-ray sources, and moreover propel subsequent interpretation of their properties. This review summarizes relevant characteristics of the polar cap model, emphasizing where possible distinctions from the competing outer gap model. Foremost among these considerations are the hard X-ray to gamma-ray spectral...

  5. Modeling Ozone in the Eastern United States Using a Fuel-Based Mobile Source Emissions Inventory

    Science.gov (United States)

    Mcdonald, B. C.; Ahmadov, R.; McKeen, S. A.; Kim, S. W.; Frost, G. J.; Trainer, M.

    2015-12-01

    A fuel-based mobile source emissions inventory of nitrogen oxides (NOx) and carbon monoxide (CO) is developed for the continental US. Emissions are mapped for the year 2013, including emissions from on-road gasoline and diesel vehicles, and off-road engines. We find that mobile source emissions of NOx in the National Emissions Inventory 2011 (NEI11) are 50-60% higher than results from this study; mobile sources contribute around half of total US anthropogenic NOx emissions. We model chemistry and transport of emissions from the NEI11 and our fuel-based inventory during the Southeast Nexus (SENEX) Study period in the summer of 2013, using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. In the Eastern US, there is a consistent over-prediction of tropospheric ozone (O3) levels when simulating emissions from the NEI11, with the largest biases located in the Southeastern US. Using our fuel-based inventory, we test O3 sensitivity to lower NOx emissions. We highlight results in the Southeast, a region with significant interactions between anthropogenic and biogenic emissions of ozone precursors. Model results of NOy, CO, and O3 are compared with aircraft measurements made during SENEX.

  6. Selective source blocking for Gamma Knife radiosurgery of trigeminal neuralgia based on analytical dose modelling

    Energy Technology Data Exchange (ETDEWEB)

    Li Kaile; Ma Lijun [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD 21210 (United States)

    2004-08-07

    We have developed an automatic critical region shielding (ACRS) algorithm for Gamma Knife radiosurgery of trigeminal neuralgia. The algorithm selectively blocks 201 Gamma Knife sources to minimize the dose to the brainstem while irradiating the root entry area of the trigeminal nerve with 70-90 Gy. An independent dose model was developed to implement the algorithm. The accuracy of the dose model was tested and validated via comparison with the Leksell GammaPlan (LGP) calculations. Agreements of 3% or 3 mm in isodose distributions were found for both single-shot and multiple-shot treatment plans. After the optimized blocking patterns are obtained via the independent dose model, they are imported into the LGP for final dose calculations and treatment planning analyses. We found that the use of a moderate number of source plugs (30-50 plugs) significantly lowered (~40%) the dose to the brainstem for trigeminal neuralgia treatments. Considering the small effort involved in using these plugs, we recommend source blocking for all trigeminal neuralgia treatments with Gamma Knife radiosurgery.

  7. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE

    CERN Document Server

    Al-Dweri, Feras M.O.; Lallena, Antonio M.; Vilches, Manuel

    2004-01-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife$^{\\circledR}$. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3$^{\\rm o}$ with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at $(x,y,z=236 {\\rm mm})$ show strong correlations between $\\rho=(x^2+y^2)^{1/2}$ and their polar angle $\\theta$, on one side, and between $\\tan^{-1}(y/x)$ and their azimuthal angle $\\phi$, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in excellent agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are ...

  8. A Review of Source Models of the 2015 Illapel, Chile Earthquake and Insights from Tsunami Data

    Science.gov (United States)

    Satake, Kenji; Heidarzadeh, Mohammad

    2017-01-01

    The 16 September 2015 Illapel, Chile, earthquake and associated tsunami have been studied by many researchers from various aspects. This paper reviews studies on the source model of the earthquake and examines tsunami data. The Illapel earthquake occurred in the source region of previous earthquakes in 1943 and 1880. The earthquake source was studied using various geophysical data, such as near-field seismograms, teleseismic waveform and backprojection, GPS and InSAR data, and tsunami waveforms. Most seismological analyses show a duration of 100 s with a peak at 50 s. The spatial distributions show some variety, but all have the largest slip, varying from 5 to 16 m, located at 31°S, 72°W, about 70 km NW of the epicenter. The shallow slip seems to extend to the trench axis. A deeper slip patch was proposed from high-frequency seismic data. A tsunami earthquake model with a total duration of 250 s and a third asperity south of the epicenter is also proposed, but we show that the tsunami data do not support this model.

  9. Gravitational wave source counts at high redshift and in models with extra dimensions

    CERN Document Server

    García-Bellido, Juan; Trashorras, Manuel

    2016-01-01

    Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high redshift regime, and we also allow for models with large or compactified extra dimensions, as in the Kaluza-Klein (KK) model. We find that in the high redshift regime one would potentially expect two windows where observations above the minimum signal-to-noise threshold can be made, assuming there are no higher order corrections in the redshift dependence of the signal-to-noise $S/N(z)$ for the expected prediction. Furthermore, we also considered the case of intermediate redshifts, where we express the source counts $\\frac{dN}{d(S/N)}$ in terms of the cosmological parameters, like the matter density $\\Omega_{m,0}$ in the cosmological constant model, and also the cosmographic parameters $(q_0,j_0,s_0)$ for a general ...

  10. Modeling effectiveness of gradual increases in source level to mitigate effects of sonar on marine mammals.

    Science.gov (United States)

    Von Benda-Beckmann, Alexander M; Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; Miller, Patrick J O; Tyack, Peter L; Ainslie, Michael A

    2014-02-01

    Ramp-up or soft-start procedures (i.e., a gradual increase in the source level) are used to mitigate the effect of sonar sound on marine mammals, although no one to date has tested whether ramp-up procedures are effective at reducing that effect. We investigated the effectiveness of ramp-up procedures in reducing the area within which changes in hearing thresholds can occur. We modeled the sound level that killer whales (Orcinus orca) were exposed to from a generic sonar operation preceded by different ramp-up schemes. In our model, ramp-up procedures reduced the risk of killer whales receiving sounds of sufficient intensity to affect their hearing. The effectiveness of the ramp-up procedure depended strongly on the assumed response threshold and differed with ramp-up duration, although extending the ramp-up beyond 5 min did not add much to its predicted mitigating effect. The main factors that limited the effectiveness of ramp-up in a typical antisubmarine warfare scenario were the high source level, the rapidly moving sonar source, and the long silences between consecutive sonar transmissions. Our exposure modeling approach can be used to evaluate and optimize mitigation procedures. © 2013 Society for Conservation Biology.
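
    The exposure-modeling idea reduces to accumulating received ping energy while the source approaches and its level ramps up. A minimal sketch assuming spherical spreading (20 log10 r) and 1-s pings; the scenario numbers (source level, speed, ramp rate) are invented for illustration, not those of the study:

    ```python
    import numpy as np

    def cumulative_sel(source_levels_db, ranges_m):
        """Cumulative sound exposure level (dB re 1 uPa^2 s) for a ping sequence,
        assuming spherical spreading (20*log10 r) and 1-s pings."""
        received = source_levels_db - 20.0 * np.log10(ranges_m)
        return 10.0 * np.log10(np.sum(10.0 ** (received / 10.0)))

    # Hypothetical scenario: source closes from 10 km at 5 m/s, one ping every 20 s.
    t = np.arange(0.0, 1200.0, 20.0)                # ping times, s
    ranges = np.maximum(10_000.0 - 5.0 * t, 100.0)  # range to the animal, m (floored at 100 m)

    full_power = np.full(t.size, 214.0)             # dB re 1 uPa at 1 m, illustrative
    ramp = np.minimum(184.0 + 0.1 * t, 214.0)       # 6 dB/min ramp-up over 5 min

    print("SEL without ramp-up: %.1f dB" % cumulative_sel(full_power, ranges))
    print("SEL with ramp-up:    %.1f dB" % cumulative_sel(ramp, ranges))
    ```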

  11. Climate modeling - a tool for the assessment of the paleodistribution of source and reservoir rocks

    Energy Technology Data Exchange (ETDEWEB)

    Roscher, M.; Schneider, J.W. [Technische Univ. Bergakademie Freiberg (Germany). Inst. fuer Geologie; Berner, U. [Bundesanstalt fuer Geowissenschaften und Rohstoffe, Hannover (Germany). Referat Organische Geochemie/Kohlenwasserstoff-Forschung

    2008-10-23

    In an on-going project of BGR and TU Bergakademie Freiberg, numeric paleo-climate modeling is used as a tool for the assessment of the paleo-distribution of organic-rich deposits as well as of reservoir rocks. This modeling approach is based on new ideas concerning the formation of the Pangea supercontinent. The new plate tectonic concept is supported by paleomagnetic data, as it fits the 95% confidence interval of published data. Six Permocarboniferous time slices (340, 320, 300, 290, 270, 255 Ma) were chosen within a first paleo-climate modeling approach as they represent the most important changes of the Late Paleozoic climate development. The digital maps have a resolution of 2.8° x 2.8° (T42), suitable for high-resolution climate modeling, using the PLASIM model. CO2 concentrations of the paleo-atmosphere and paleo-insolation values have been estimated by published methods. For the purpose of validation, quantitative model output had to be transformed into qualitative parameters in order to be able to compare digital data with qualitative data of geologic indicators. The model output of surface temperatures and precipitation was therefore converted into climate zones. The reconstructed occurrences of geological indicators like aeolian sands, evaporites, reefs, coals, oil source rocks, tillites, phosphorites and cherts were then compared to the computed paleo-climate zones. Examples from the Permian Pangea show a very good agreement between model results and geological indicators. From the modeling approach we are able to identify climatic processes which lead to the deposition of hydrocarbon source and reservoir rocks. The regional assessment of such atmospheric processes may be used for the identification of the paleo-distribution of organic-rich deposits or rock types suitable to form hydrocarbon reservoirs. (orig.)
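
    The conversion of quantitative model output (surface temperature, precipitation) into qualitative climate zones described above can be sketched with a simplified Köppen-style classification; the thresholds below are illustrative only, not those used in the project:

    ```python
    import numpy as np

    def climate_zone(t_ann_c, p_ann_mm):
        """Very simplified Koeppen-style classification from annual mean
        temperature (deg C) and annual precipitation (mm). Thresholds are
        illustrative only."""
        if p_ann_mm < 250.0:
            return "arid"       # compare against evaporite / aeolian sand indicators
        if t_ann_c > 18.0:
            return "tropical"   # compare against coal / reef indicators
        if t_ann_c < -3.0:
            return "polar"      # compare against tillite indicators
        return "temperate"

    # Map a small (lat x lon) grid of model output to zones for comparison with
    # the reconstructed occurrences of geological climate indicators.
    t2m = np.array([[26.0, 14.0], [2.0, -8.0]])
    prcp = np.array([[1800.0, 150.0], [600.0, 300.0]])
    zones = np.vectorize(climate_zone)(t2m, prcp)
    print(zones)
    ```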

  12. Three-dimensional forward modeling and inversion of borehole-to-surface electrical imaging with different power sources

    Science.gov (United States)

    Bai, Ze; Tan, Mao-Jin; Zhang, Fu-Lai

    2016-09-01

    Borehole-to-surface electrical imaging (BSEI) uses a line source and a point source to generate a stable electric field in the ground. In order to study the surface potential of anomalies, three-dimensional forward modeling of point and line sources was conducted using the finite-difference method and the incomplete Cholesky conjugate gradient (ICCG) method. Then, the damped least-squares method was used in the 3D inversion of the formation resistivity data. Several geological models were considered in the forward modeling and inversion. The forward modeling results suggest that the potentials generated by the two sources have different surface signatures. The inversion results suggest that a low-resistivity anomaly is outlined better than a high-resistivity anomaly. Moreover, when the point source is under the anomaly, the resistivity anomaly boundaries are better outlined than when using a line source.
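
    The forward-modeling step for the point source can be sketched as a finite-difference solve of -∇·(σ∇V) = I·δ(r - r_s). In the sketch below, SciPy's conjugate gradient with an incomplete-LU preconditioner stands in for the ICCG solver of the paper, on a uniform medium with Dirichlet boundaries; grid size and conductivity are illustrative:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Point-source DC forward problem: -div(sigma grad V) = I*delta(r - r_s),
    # homogeneous conductivity, V = 0 on the outer boundary (illustrative only).
    n, h, sigma, current = 24, 10.0, 0.01, 1.0   # nodes per axis, m, S/m, A

    def lap1d(n):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))

    eye = sp.identity(n)
    A = -(sigma / h**2) * (sp.kron(sp.kron(lap1d(n), eye), eye)
                           + sp.kron(sp.kron(eye, lap1d(n)), eye)
                           + sp.kron(sp.kron(eye, eye), lap1d(n)))  # SPD operator

    b = np.zeros(n**3)
    ix, iy, iz = n // 2, n // 2, n // 4            # buried point source
    b[(ix * n + iy) * n + iz] = current / h**3     # discrete delta function

    # Incomplete-LU preconditioner standing in for the incomplete Cholesky of ICCG.
    ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
    M = spla.LinearOperator(A.shape, ilu.solve)
    v, info = spla.cg(A, b, M=M)
    assert info == 0
    print("V at the surface node above the source: %.4f V" % v[(ix * n + iy) * n])
    ```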

  13. Modeling of Regionalized Emissions (MoRE) into Water Bodies: An Open-Source River Basin Management System

    National Research Council Canada - National Science Library

    Stephan Fuchs; Maria Kaiser; Lisa Kiemle; Steffen Kittlaus; Shari Rothvoß; Snezhina Toshovski; Adrian Wagner; Ramona Wander; Tatyana Weber; Sara Ziegler

    2017-01-01

    .... The river basin management system MoRE (Modeling of Regionalized Emissions) was developed as a flexible open-source instrument which is able to model pathway-specific emissions and river loads on a catchment scale...

  14. Using Dual Isotopes and a Bayesian Isotope Mixing Model to Evaluate Nitrate Sources of Surface Water in a Drinking Water Source Watershed, East China

    Directory of Open Access Journals (Sweden)

    Meng Wang

    2016-08-01

    Full Text Available A high concentration of nitrate (NO3−) in surface water threatens aquatic systems and human health. Revealing nitrate characteristics and identifying its sources are fundamental to making effective water management strategies. However, nitrate sources in multi-tributary and mixed land use watersheds remain unclear. In this study, based on 20 surface water sampling sites monitored for more than two years, from April 2012 to December 2014, water chemical and dual isotopic approaches (δ15N-NO3− and δ18O-NO3−) were integrated for the first time to evaluate nitrate characteristics and sources in the Huashan watershed, Jianghuai hilly region, China. Nitrate-nitrogen concentrations (ranging from 0.02 to 8.57 mg/L) were spatially heterogeneous and were influenced by hydrogeological and land use conditions. Proportional contributions of five potential nitrate sources (i.e., precipitation; manure and sewage, M & S; soil nitrogen, NS; nitrate fertilizer; nitrate derived from ammonia fertilizer and rainfall) were estimated by using a Bayesian isotope mixing model. The results showed that nitrate source contributions varied significantly among different rainfall conditions and land use types. For the whole watershed, M & S (manure and sewage) and NS (soil nitrogen) were the major nitrate sources in both wet and dry seasons (from 28% to 36% for manure and sewage and from 24% to 27% for soil nitrogen, respectively). Overall, combining a dual isotope method with a Bayesian isotope mixing model offered a useful and practical way to qualitatively analyze nitrate sources and transformations as well as quantitatively estimate the contributions of potential nitrate sources in drinking water source watersheds of the Jianghuai hilly region, eastern China.
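
    The core of a Bayesian isotope mixing model of this kind is a mass balance on each isotope, δ_mix = Σ_i f_i δ_i with Σ_i f_i = 1 and f_i ≥ 0, sampled under a likelihood for the measured values. A minimal random-walk Metropolis sketch with made-up source signatures (three sources instead of five, and a flat prior on the latent parameters rather than an explicit Dirichlet; this is not the model used in the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical source signatures: (d15N, d18O) means for three sources.
    sources = np.array([[9.0, 0.0],     # manure & sewage (M & S)
                        [5.0, 2.0],     # soil nitrogen (NS)
                        [-2.0, 55.0]])  # atmospheric deposition
    obs = np.array([6.5, 6.0])          # measured water sample (assumed)
    sd = np.array([1.0, 2.0])           # measurement/fractionation sd (assumed)

    def log_post(z):
        f = np.exp(z) / np.exp(z).sum()   # softmax -> fractions summing to 1
        mix = f @ sources                 # isotope mass balance
        return -0.5 * np.sum(((obs - mix) / sd) ** 2)

    # Random-walk Metropolis over the unconstrained parameters z.
    z, lp, samples = np.zeros(3), None, []
    lp = log_post(z)
    for it in range(20000):
        z_new = z + 0.3 * rng.standard_normal(3)
        lp_new = log_post(z_new)
        if np.log(rng.uniform()) < lp_new - lp:
            z, lp = z_new, lp_new
        if it > 5000:
            samples.append(np.exp(z) / np.exp(z).sum())

    print("posterior mean source fractions:", np.mean(samples, axis=0).round(2))
    ```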

  15. Integrating water quality modeling with ecological risk assessment for nonpoint source pollution control: A conceptual framework

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.D.; McCutcheon, S.C.; Rasmussen, T.C.; Nutter, W.L.; Carsel, R.F.

    1993-01-01

    The historical development of water quality protection goals and strategies in the United States is reviewed. The review leads to the identification and discussion of three components (i.e., management mechanism, environmental investigation approaches, and environmental assessment and criteria) for establishing a management framework for nonpoint source pollution control. Water quality modeling and ecological risk assessment are the two most important and promising approaches to the operation of the proposed management framework. A conceptual framework that shows the general integrative relationships between water quality modeling and ecological risk assessment is presented. (Copyright (c) 1993 IAWQ.)

  16. DISCRETE DYNAMIC MODEL OF BEVEL GEAR – VERIFICATION THE PROGRAM SOURCE CODE FOR NUMERICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Krzysztof TWARDOCH

    2014-06-01

    Full Text Available The article presents a new physical and mathematical model of a bevel gear for studying the influence of design parameters and operating factors on the dynamic state of the gear transmission. The process of verifying the correct operation of the authors' own calculation program, used to determine the solutions of the dynamic model of the bevel gear, is discussed. A block diagram of the computing algorithm used to create the numerical simulation program is presented. The program source code is written in MATLAB, an interactive environment for scientific and engineering calculations.
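
    A discrete dynamic gear model of this general kind couples two inertias through a mesh stiffness and damping acting along the line of action. A minimal sketch of that structure (Python/SciPy rather than the MATLAB program of the paper; all parameter values are invented for illustration):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative 2-DOF torsional model of a gear pair: pinion and wheel
    # coupled by mesh stiffness k and damping c through effective radii r1, r2.
    J1, J2 = 2e-3, 8e-3        # inertias, kg m^2 (assumed)
    r1, r2 = 0.04, 0.08        # effective base radii, m (assumed)
    k, c = 2e8, 500.0          # mesh stiffness N/m, mesh damping N s/m (assumed)
    T_in, T_out = 50.0, -100.0 # driving torque and load torque matched to the ratio

    def rhs(t, y):
        th1, w1, th2, w2 = y
        delta = r1 * th1 - r2 * th2      # dynamic transmission error
        ddelta = r1 * w1 - r2 * w2
        F = k * delta + c * ddelta       # mesh force along the line of action
        return [w1, (T_in - F * r1) / J1,
                w2, (T_out + F * r2) / J2]

    sol = solve_ivp(rhs, (0.0, 0.02), [0.0, 0.0, 0.0, 0.0],
                    method="Radau", max_step=1e-5)
    print("final speeds (rad/s): pinion %.1f, wheel %.1f"
          % (sol.y[1, -1], sol.y[3, -1]))
    ```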

  17. Experimental validation of a mass- efficiency model for an indium liquid-metal ion source

    CERN Document Server

    Tajmar, M

    2003-01-01

    A model is derived linking microdroplet emission of a liquid-metal ion source (LMIS) to the actual current-voltage characteristic and operating temperature. All parameters were experimentally investigated using an indium LMIS, confirming the relationships found. The model allows for the first time the optimisation of an LMIS for low droplet emission at high emission currents. This is very important for its application as a thruster, which has been developed at ARC Seibersdorf research. It can also be used to extrapolate droplet emission values along the current-voltage characteristic. (orig.)

  18. Laser scanner data processing and 3D modeling using a free and open source software

    Energy Technology Data Exchange (ETDEWEB)

    Gabriele, Fatuzzo [Dept. of Industrial and Mechanical Engineering, University of Catania (Italy); Michele, Mangiameli, E-mail: amichele.mangiameli@dica.unict.it; Giuseppe, Mussumeci; Salvatore, Zito [Dept. of Civil Engineering and Architecture, University of Catania (Italy)

    2015-03-10

    Laser scanning is a technology that allows the geometry of objects to be surveyed quickly, with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object to be detected, the radiation is reflected. The purpose is to build a three-dimensional digital model that allows the reality of the object to be reconstructed and studies regarding design, restoration and/or conservation to be conducted. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates with high density and accuracy and with radiometric RGB tones. In this case, the set of measured points is called a "point cloud" and allows the reconstruction of the Digital Surface Model. Post-processing is usually performed by closed source software, whose copyright restricts free use; free and open source software can improve performance considerably, since it can be freely used and offers the possibility to inspect and even customize the source code. The experience started at the Faculty of Engineering in Catania is aimed at evaluating a valuable free and open source tool, MeshLab (an Italian software package for data processing), against a reference closed source software for data processing, RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.

  19. Modeling Sources of Teaching Self-Efficacy for Science, Technology, Engineering, and Mathematics Graduate Teaching Assistants.

    Science.gov (United States)

    DeChenne, Sue Ellen; Koziol, Natalie; Needham, Mark; Enochs, Larry

    2015-01-01

    Graduate teaching assistants (GTAs) in science, technology, engineering, and mathematics (STEM) have a large impact on undergraduate instruction but are often poorly prepared to teach. Teaching self-efficacy, an instructor's belief in his or her ability to teach specific student populations a specific subject, is an important predictor of teaching skill and student achievement. A model of sources of teaching self-efficacy is developed from the GTA literature. This model indicates that teaching experience, departmental teaching climate (including peer and supervisor relationships), and GTA professional development (PD) can act as sources of teaching self-efficacy. The model is pilot tested with 128 GTAs from nine different STEM departments at a midsized research university. Structural equation modeling reveals that K-12 teaching experience, hours and perceived quality of GTA PD, and perception of the departmental facilitating environment are significant factors that explain 32% of the variance in the teaching self-efficacy of STEM GTAs. This model highlights the important contributions of the departmental environment and GTA PD in the development of teaching self-efficacy for STEM GTAs.

  20. Modeling regional mobile source emissions in a geographic information system framework

    Energy Technology Data Exchange (ETDEWEB)

    Bachman, W. [Georgia Inst. of Technology, Atlanta, GA (United States). Center for Geographic Information Systems; Sarasua, W. [Clemson Univ., SC (United States). Dept. of Civil Engineering; Hallmark, S. [Iowa State Univ., Ames, IA (United States). School of Civil and Construction Engineering; Guensler, R. [Georgia Inst. of Technology, Atlanta, GA (United States). School fo Civil and Environmental Engineering

    2000-07-01

    Suburban sprawl, population growth, and automobile dependency contribute directly to air pollution problems in US metropolitan areas. As metropolitan regions attempt to mitigate these problems, they are faced with the difficult task of balancing the mobility needs of a growing population and economy while simultaneously lowering or maintaining levels of ambient pollutants. Although ambient air quality can be directly monitored, predicting the amount and fraction of the mobile source components presents special challenges. A modeling framework that can correlate spatial and temporal emission-specific vehicle activities is required for the complex photochemical models used to predict pollutant concentrations. This paper discusses the GIS-based modeling approach called the Mobile Emission Assessment System for Urban and Regional Evaluation (MEASURE). MEASURE provides researchers and planners with a means of assessing motor vehicle emission reduction strategies. Estimates of spatially resolved fleet composition and activity are combined with activity-specific emission rates to predict engine start and running exhaust emissions. Engine start emissions are estimated using aggregate zonal information. Running exhaust emissions are predicted using road-segment-specific information and aggregate zonal information. The paper discusses the benefits and challenges related to mobile source emissions modeling in a GIS framework and identifies future GIS mobile emissions modeling research needs. (Author)
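
    The two aggregation levels described above (zonal engine starts, segment-level running exhaust) boil down to activity-times-emission-rate sums. A minimal sketch with entirely hypothetical activities and gram-per-unit rates:

    ```python
    # Minimal sketch of the two aggregation levels described above,
    # with entirely hypothetical activities and emission rates.
    zones = {  # engine starts per hour, by traffic analysis zone
        "zone_A": {"starts": 1200, "g_per_start": 2.5},
        "zone_B": {"starts": 400,  "g_per_start": 2.5},
    }
    links = [  # running exhaust, by road segment
        {"vkt_per_h": 9000.0, "g_per_km": 0.9},   # free-flowing arterial
        {"vkt_per_h": 2500.0, "g_per_km": 1.4},   # congested segment
    ]

    start_emissions = sum(z["starts"] * z["g_per_start"] for z in zones.values())
    running_emissions = sum(l["vkt_per_h"] * l["g_per_km"] for l in links)
    print("engine-start: %.1f g/h, running exhaust: %.1f g/h"
          % (start_emissions, running_emissions))
    ```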

  2. Reduced Order Models of a Current Source Inverter Induction Motor Drive

    Directory of Open Access Journals (Sweden)

    Ibrahim K. Al-Abbas

    2009-01-01

    Full Text Available Problem Statement: The current source inverter induction motor (CSI-IM) drive is widely used in various industries. The main disadvantages of this drive are its nonlinearity and complexity. This work was done to develop simple models of the drive system. Approach: The MATLAB/SIMULINK software was used for system modeling. Three reduced models were developed by choosing a specific reference frame, neglecting stator transients, and ignoring the stator equations. Results: The dynamic performance of the models was examined in open-loop form for a step change in the control variable (the input voltage) as well as for a step change in the disturbance (the mechanical load). Conclusion: The three models are equivalent in steady state. The error of these models in the transient response was less than 5%, with the exception of the time response of the transient model to a step change in supply voltage. Recommendations: All three models were suggested for designing torque control systems; the detailed and stator-equation models were recommended for speed control design.
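
    In the spirit of such reductions, the simplest possible drive model keeps only the mechanical equation J dω/dt = T_e − T_L − Bω and treats the electromagnetic torque as an algebraic function of the current command. A minimal sketch (Python rather than MATLAB/SIMULINK; the torque constant, inertia, and load values are invented, not those of the paper):

    ```python
    from scipy.integrate import solve_ivp

    # Reduced (mechanical-only) drive model: electrical transients neglected,
    # electromagnetic torque proportional to the DC-link current (illustrative).
    J, B = 0.05, 0.05   # inertia kg m^2, viscous friction N m s (assumed)
    k_t = 1.2           # torque per amp of DC-link current, N m/A (assumed)

    def rhs(t, y, i_dc, T_load):
        (omega,) = y
        T_e = k_t * i_dc(t)
        return [(T_e - T_load(t) - B * omega) / J]

    i_dc = lambda t: 10.0                          # constant current command
    T_load = lambda t: 4.0 if t < 5.0 else 8.0     # load torque step at t = 5 s

    sol = solve_ivp(rhs, (0.0, 10.0), [0.0], args=(i_dc, T_load), max_step=0.01)
    print("speed before/after load step: %.1f / %.1f rad/s"
          % (sol.y[0, sol.t.searchsorted(5.0) - 1], sol.y[0, -1]))
    ```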

  3. Physically-based modelling of granular flows with Open Source GIS

    Directory of Open Access Journals (Sweden)

    M. Mergili

    2012-01-01

    Full Text Available Computer models, in combination with Geographic Information Sciences (GIS), play an important role in up-to-date studies of travel distance, impact area, velocity or energy of granular flows (e.g. snow or rock avalanches, flows of debris or mud). Simple empirical-statistical relationships or mass point models are frequently applied in GIS-based modelling environments. However, they are only appropriate for rough overviews at the regional scale. In detail, granular flows are highly complex processes and physically-based, distributed models are required for detailed studies of travel distance, velocity, and energy of such phenomena. One of the most advanced theories for understanding and modelling granular flows is the Savage-Hutter type model, a system of differential equations based on the conservation of mass and momentum. The equations have been solved for a number of idealized topographies, but only few attempts to find a solution for arbitrary topography or to integrate the model with GIS are known up to now. The work presented is understood as an initiative to integrate a fully physically-based model for the motion of granular flows, based on the extended Savage-Hutter theory, with GRASS, an Open Source GIS software package. The potentials of the model are highlighted, employing the Val Pola Rock Avalanche (Northern Italy, 1987) as the test event, and the limitations as well as the most urgent needs for further research are discussed.

  4. Physically-based modelling of granular flows with Open Source GIS

    Science.gov (United States)

    Mergili, M.; Schratz, K.; Ostermann, A.; Fellin, W.

    2012-01-01

    Computer models, in combination with Geographic Information Sciences (GIS), play an important role in up-to-date studies of travel distance, impact area, velocity or energy of granular flows (e.g. snow or rock avalanches, flows of debris or mud). Simple empirical-statistical relationships or mass point models are frequently applied in GIS-based modelling environments. However, they are only appropriate for rough overviews at the regional scale. In detail, granular flows are highly complex processes and physically-based, distributed models are required for detailed studies of travel distance, velocity, and energy of such phenomena. One of the most advanced theories for understanding and modelling granular flows is the Savage-Hutter type model, a system of differential equations based on the conservation of mass and momentum. The equations have been solved for a number of idealized topographies, but only few attempts to find a solution for arbitrary topography or to integrate the model with GIS are known up to now. The work presented is understood as an initiative to integrate a fully physically-based model for the motion of granular flows, based on the extended Savage-Hutter theory, with GRASS, an Open Source GIS software package. The potentials of the model are highlighted, employing the Val Pola Rock Avalanche (Northern Italy, 1987) as the test event, and the limitations as well as the most urgent needs for further research are discussed.
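
    The conservation-law core of a Savage-Hutter-type model can be conveyed by a 1-D toy: depth and momentum equations on a uniform slope with gravity driving and Coulomb bed friction, solved here with a diffusive Lax-Friedrichs scheme. This is a sketch of the equation structure only (earth-pressure coefficient set to 1, static friction ignored, all numbers illustrative), not the extended Savage-Hutter model or its GRASS integration:

    ```python
    import numpy as np

    # Toy 1-D Savage-Hutter-type equations on a uniform slope (angle zeta):
    #   h_t + (h u)_x = 0
    #   (h u)_t + (h u^2 + 0.5 g cos(zeta) h^2)_x = g h (sin(zeta) - mu cos(zeta) sign(u))
    g, zeta = 9.81, np.deg2rad(35.0)
    mu = np.tan(np.deg2rad(25.0))        # Coulomb bed friction angle of 25 deg
    nx, dx, dt, nt = 400, 1.0, 0.01, 1000

    xc = np.arange(nx) * dx
    h = np.where(np.abs(xc - 50.0) < 20.0, 2.0, 0.0)   # initial release pile
    hu = np.zeros(nx)

    def flux(h, hu):
        u = np.where(h > 1e-6, hu / np.maximum(h, 1e-6), 0.0)
        return np.array([hu, hu * u + 0.5 * g * np.cos(zeta) * h**2])

    for _ in range(nt):
        f = flux(h, hu)
        q = np.array([h, hu])
        # Lax-Friedrichs update on interior cells, then add the source term.
        qn = 0.5 * (q[:, 2:] + q[:, :-2]) - dt / (2 * dx) * (f[:, 2:] - f[:, :-2])
        u = np.where(qn[0] > 1e-6, qn[1] / np.maximum(qn[0], 1e-6), 0.0)
        qn[1] += dt * g * qn[0] * (np.sin(zeta) - mu * np.cos(zeta) * np.sign(u))
        q[:, 1:-1] = qn
        q[:, 0], q[:, -1] = q[:, 1], q[:, -2]          # transmissive boundaries
        h, hu = np.maximum(q[0], 0.0), q[1]

    print("flow front after %.0f s: x = %.0f m"
          % (nt * dt, xc[(h > 0.01).nonzero()[0].max()]))
    ```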

  5. Theory for source-responsive and free-surface film modeling of unsaturated flow

    Science.gov (United States)

    Nimmo, J.R.

    2010-01-01

    A new model explicitly incorporates the possibility of rapid response, across significant distance, to substantial water input. It is useful for unsaturated flow processes that are not inherently diffusive, or that do not progress through a series of equilibrium states. The term source-responsive is used to mean that flow responds sensitively to changing conditions at the source of water input (e.g., rainfall, irrigation, or ponded infiltration). The domain of preferential flow can be conceptualized as laminar flow in free-surface films along the walls of pores. These films may be considered to have uniform thickness, as suggested by field evidence that preferential flow moves at an approximately uniform rate when generated by a continuous and ample water supply. An effective facial area per unit volume quantitatively characterizes the medium with respect to source-responsive flow. A flow-intensity factor dependent on conditions within the medium represents the amount of source-responsive flow at a given time and position. Laminar flow theory provides relations for the velocity and thickness of flowing source-responsive films. Combination with the Darcy-Buckingham law and the continuity equation leads to expressions for both fluxes and dynamic water contents. Where preferential flow is sometimes or always significant, the interactive combination of source-responsive and diffuse flow has the potential to improve prediction of unsaturated-zone fluxes in response to hydraulic inputs and the evolving distribution of soil moisture. Examples for which this approach is efficient and physically plausible include (i) rainstorm-generated rapid fluctuations of a deep water table and (ii) space- and time-dependent soil water content response to infiltration in a macroporous soil. © Soil Science Society of America.
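
    The laminar-film relations mentioned above are simple to evaluate: a gravity-driven film of thickness L on a vertical wall has mean velocity u = ρgL²/(3μ), and multiplying by the facial area per unit volume M, the film thickness, and a flow-intensity factor f gives a macroscopic flux density. A sketch with illustrative numbers (the notation loosely follows the abstract; the parameter values are assumptions, not Nimmo's):

    ```python
    # Laminar free-surface film on a vertical wall: mean velocity u = rho*g*L^2/(3*mu).
    rho, g, mu = 1000.0, 9.81, 1.0e-3   # water near 20 C, SI units
    L = 4.0e-6                          # film thickness, m (few-micron scale, assumed)
    M = 100.0                           # facial area per unit volume, m^2/m^3 (assumed)
    f = 0.5                             # flow-intensity factor, 0..1 (assumed)

    u = rho * g * L**2 / (3.0 * mu)     # mean film velocity, m/s
    q = f * M * L * u                   # source-responsive flux density, m/s
    print("film velocity u = %.2e m/s" % u)
    print("flux q = %.2e m/s  (= %.1f mm/day)" % (q, q * 1000 * 86400))
    ```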

  6. Modeling study of natural emissions, source apportionment, and emission control of atmospheric mercury

    Science.gov (United States)

    Shetty, Suraj K.

    Mercury (Hg) is a toxic pollutant, and it is important to understand its cycling in the environment. In this dissertation, a number of modeling investigations were conducted to better understand the emission from natural surfaces, the source-receptor relationship of the emissions, and the emission reduction of atmospheric mercury. The first part of this work estimates mercury emissions from vegetation, soil and water surfaces using a number of natural emission processors and detailed Leaf Area Index (LAI) data from GIS (Geographic Information System) satellite products. The East Asian domain was chosen as it contributes nearly 50% of the global anthropogenic mercury emissions into the atmosphere. The estimated annual natural mercury emissions (gaseous elemental mercury) in the domain are 834 Mg yr-1, with 462 Mg yr-1 from China. Compared to anthropogenic sources, natural sources show greater seasonal variability (highest in summer). The emissions are significant, sometimes dominant, contributors to total mercury emission in the regions. The estimates provide a possible explanation for the gaps between the anthropogenic emission estimates based on activity data and the emission inferred from field observations in the regions. To understand the contribution of domestic emissions to mercury deposition in the United States, the second part of the work applies the mercury model of the Community Multi-scale Air Quality Modeling system (CMAQ-Hg v4.6) to apportion the various emission sources contributing to mercury wet and dry deposition in six United States receptor regions. Contributions to mercury deposition from electric generating units (EGU), the iron and steel industry (IRST), industrial point sources excluding EGU and IRST (OIPM), the remaining anthropogenic sources (RA), natural processes (NAT), and out-of-boundary transport (BC) in the domain were estimated. The model results for 2005 compared reasonably well to field observations made by MDN (Mercury Deposition Network

  7. Tectonic subsidence modelling and Gondwana source rock hydrocarbon potential, Northwest Bangladesh modelling of Kuchma, Singra and Hazipur wells

    Energy Technology Data Exchange (ETDEWEB)

    Frielingsdorf, J. [Shell Petroleum Development Company, Nigeria Limited, P.O. Box 23, Port Harcourt, Rivers State (Nigeria); Aminul Islam, Sk.; Mizanur Rahman, Md. [BAPEX, Bangladesh Petroleum Exploration and Production Ltd., Shahjalal Tower, 80/A-B Siddeshwari Circular Road, Dhaka 1217 (Bangladesh); Block, Martin [Federal Institute for Geosciences and Natural Resources (BGR), Stilleweg 2, 30655 Hannover (Germany); Golam Rabbani, Md. [Norwegian University of Science and Technology (NTNU), 7491 Trondheim (Norway)

    2008-06-15

    The northwestern part of Bangladesh is still under-explored in terms of hydrocarbon exploration. This paper presents the basin development from a structural point of view and includes the results of thermal and maturity modelling using numerical tools of basin modelling. One regional seismic section and three exploration wells have been investigated to unravel a conceptual model for the subsidence and thermal history of the region. According to the findings, it is very likely that up to 2900 m of Triassic/Jurassic and partly Permian sediments were eroded prior to the break-up of Gondwana. A peak heat flow during continental break-up is considered. This was necessary for calibrating maturity profiles using vitrinite reflectance (VR) derived from the modelled wells. A significant gas generation phase during the Lower Jurassic is predicted. At the modelled well locations, although renewed subsidence occurred from the Tertiary to the present day, a second phase of gas generation has not occurred, as past maximum temperatures were not exceeded. According to the interpreted regional seismic sections in the region, the area southeast of the 'Hinge Zone' can be regarded as the main kitchen area for gas generation from the Gondwana source rock. The petroleum system in the northwestern part of Bangladesh remains high risk due to uncertainties in source rock distribution and generation. (author)

  8. Coronal structure analysis based on the potential field source surface modeling and total solar eclipse observation

    Science.gov (United States)

    Muhamad, Johan; Mumtahana, Farahhati; Sutastio, Heri; Imaduddin, Irfan; Putri, Gerhana P.

    2016-11-01

    We constructed the global coronal magnetic field of the Sun during the Total Solar Eclipse (TSE) of 9 March 2016 by using the Potential Field Source Surface (PFSS) model. Synoptic photospheric magnetogram data from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) were used as a boundary condition to extrapolate the coronal magnetic field of the Sun. This extrapolated structure was analyzed by comparing the alignment of the fields from the model with the coronal structure from observation. We also used observational data of the coronal structure during the total solar eclipse to assess how well the model agrees with the observation. As a result, we could identify several coronal streamers which were produced by the large closed loops in the lower regime of the corona. This result verified that the PFSS extrapolation can be used as a tool to model the inner corona, with several constraints. We also discussed how the coronal structure can be used to deduce the phase of the solar cycle.
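
    The radial structure of a PFSS extrapolation is easy to sketch in the axisymmetric case: expand the photospheric B_r in Legendre polynomials and attach to each degree l the potential-field radial profile that makes the field purely radial at the source surface (taken here at the conventional 2.5 solar radii). A minimal sketch with a made-up dipole-plus-quadrupole photosphere, not the HMI synoptic data of the paper:

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    R, Rss = 1.0, 2.5            # photosphere and source surface (solar radii)
    lmax = 8
    x, w = L.leggauss(64)        # Gauss-Legendre quadrature in mu = cos(theta)

    # Hypothetical axisymmetric photospheric B_r: dipole plus weak quadrupole (gauss).
    br_phot = 5.0 * x + 1.0 * (1.5 * x**2 - 0.5)

    # Legendre coefficients g_l of B_r at the photosphere (monopole ignored).
    g = [(2 * l + 1) / 2.0 * np.sum(w * br_phot * L.Legendre.basis(l)(x))
         for l in range(lmax + 1)]

    def br(r, mu):
        """B_r(r, theta) of the potential field that is purely radial at r = Rss."""
        out = 0.0
        for l in range(1, lmax + 1):
            # Radial profile from A_l r^l + B_l r^-(l+1) with Phi(Rss) = 0.
            f = lambda s: (l * (s / Rss) ** (2 * l + 1) + (l + 1)) / s ** (l + 2)
            out = out + g[l] * f(r) / f(R) * L.Legendre.basis(l)(mu)
        return out

    print("B_r at the pole: photosphere %.2f G, source surface %.3f G"
          % (br(R, 1.0), br(Rss, 1.0)))
    ```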

  9. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    CERN Document Server

    Bakosi, J; Boybeyi, Z; 10.1063/1.2803348

    2010-01-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth & Pope with Durbin's method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a non-local representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent timescale is supplied by the gamma-distribution model of van Slooten et al. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with th...
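
    A toy version of the particle system conveys the method's structure: each Lagrangian particle carries a velocity obeying a Langevin equation and a scalar relaxed toward the local mean (IEM-style micromixing). The sketch below uses homogeneous turbulence in one dimension, not the elliptic-relaxation near-wall closure or the micromixing models compared in the paper; all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy 1-D stand-in for the particle system: Langevin velocities plus
    # IEM micromixing of a scalar released from a concentrated line source.
    n, dt, steps = 20000, 1e-3, 4000
    sigma_u, T_L = 0.5, 0.1     # velocity rms (m/s), Lagrangian timescale (s)
    t_mix = 0.05                # micromixing timescale (s)

    x = rng.uniform(-2.0, 2.0, n)                 # particle positions
    u = sigma_u * rng.standard_normal(n)          # stationary initial velocities
    phi = np.where(np.abs(x) < 0.05, 1.0, 0.0)    # scalar from a thin line source

    for _ in range(steps):
        # Langevin model: du = -(u/T_L) dt + sqrt(2 sigma_u^2 / T_L) dW
        u += -u / T_L * dt + np.sqrt(2 * sigma_u**2 / T_L * dt) * rng.standard_normal(n)
        x += u * dt
        # IEM micromixing: relax each particle's scalar toward the local bin mean.
        bins = np.clip(((x + 2.0) / 0.1).astype(int), 0, 39)
        counts = np.maximum(np.bincount(bins, minlength=40), 1)
        mean_phi = np.bincount(bins, weights=phi, minlength=40) / counts
        phi += -(phi - mean_phi[bins]) / t_mix * dt

    width = np.sqrt(np.average(x**2, weights=phi))
    print("scalar plume width: %.2f m, scalar variance: %.4f" % (width, phi.var()))
    ```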

  10. Spallation Neutron Source Drift Tube Linac Resonance Control Cooling System Modeling

    CERN Document Server

    Tang, Johnny Y; Champion, Marianne M; Feschenko, Alexander; Gibson, Paul; Kiselev, Yuri; Kovalishin, A S; Kravchuk, Leonid V; Kvasha, Adolf; Schubert, James P

    2005-01-01

    The Resonance Control Cooling System (RCCS) for the warm linac of the Spallation Neutron Source was designed by Los Alamos National Laboratory. The primary design focus was on water cooling of individual component contributions. The sizing of the RCCS water skid was accomplished by means of a specially created SINDA/FLUINT model tailored to these system requirements. A new model was developed in Matlab Simulink and incorporates actual operational values and control valve interactions. Included are the dependence of RF input power on system operation, cavity detuning values during transients, time delays that result from water flows through the heat exchanger, the dynamic process of water warm-up in the cooling system due to RF power dissipated on the cavity surface, differing contributions to the cavity detuning from drift tube and wall heating, and a dynamic model of the heat exchanger with characteristics in close agreement with the real unit. Because of the Matlab Simulink model, investigation of a wide range ...
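
    One building block of such a model, the water warm-up due to dissipated RF power, reduces to a lumped energy balance with a heat-exchanger sink; a linear temperature-to-detuning coefficient then links the water loop to the cavity. A minimal sketch with invented parameters (not the SNS values):

    ```python
    # Lumped cooling-loop model: M_c * dT/dt = P_rf(t) - UA * (T - T_chw).
    M_c = 400e3        # water + copper thermal capacity, J/K (assumed)
    UA = 8e3           # heat-exchanger conductance, W/K (assumed)
    T_chw = 21.0       # chilled-water temperature, C (assumed)
    k_detune = -3.5e3  # cavity detuning per deg C, Hz/K (assumed)

    dt, T = 1.0, 25.0  # time step (s) and initial water temperature (C)
    for t in range(7200):
        P_rf = 60e3 if t > 600 else 0.0   # RF power turned on after 10 min
        T += dt * (P_rf - UA * (T - T_chw)) / M_c

    print("steady water temperature: %.1f C" % T)
    print("cavity detuning at that temperature: %.0f Hz" % (k_detune * (T - 25.0)))
    ```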

  11. Update on single-screw expander geometry model integrated into an open-source simulation tool

    Science.gov (United States)

    Ziviani, D.; Bell, I. H.; De Paepe, M.; van den Broek, M.

    2015-08-01

    In this paper, a mechanistic steady-state model of a single-screw expander is described, with emphasis on the geometric description. Insights into the calculation of the main parameters and the definition of the groove profile are provided. Additionally, the adopted chamber model is discussed. The model has been implemented by means of the open-source software PDSim (Positive Displacement SIMulation), written in the Python language, and the solution algorithm is described. The single-screw expander model is validated with a set of steady-state measurement points collected from an 11 kWe organic Rankine cycle test-rig with SES36 and R245fa as working fluids. The overall performance and behavior of the expander are also further analyzed.
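
    The chamber-model idea can be sketched generically: prescribe the chamber volume as a function of rotation angle and integrate the gas state through expansion. The sketch below (plain Python, not PDSim's actual classes, and not the single-screw groove geometry) assumes an adiabatic ideal gas, a linearized volume curve, and invented suction conditions:

    ```python
    import numpy as np

    # Generic chamber model: closed adiabatic expansion of an ideal gas in a
    # chamber whose volume V(theta) grows with the rotation angle theta.
    gamma = 1.07                  # R245fa-like heat-capacity ratio (assumed)
    p_su, V_su = 1.0e6, 50e-6     # suction pressure (Pa) and volume (m^3), assumed
    rv = 5.0                      # built-in volume ratio (assumed)

    theta = np.linspace(0.0, np.pi, 500)
    V = V_su * (1.0 + (rv - 1.0) * theta / np.pi)   # linearized groove volume curve
    p = p_su * (V_su / V) ** gamma                  # adiabatic closed-chamber expansion

    # Indicated expansion work per chamber filling, by trapezoidal integration.
    W_ind = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V))
    print("discharge pressure: %.0f kPa" % (p[-1] / 1e3))
    print("indicated work per chamber filling: %.1f J" % W_ind)
    ```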

  12. The development and validation of a five factor model of sources of self-efficacy in clinical nursing education

    NARCIS (Netherlands)

    Gloudemans, Henk; Schalk, Rene; Reynaert, Wouter; Braeken, Johan

    2012-01-01

    Background: The aim of this study is to validate a newly developed nurses' self-efficacy sources inventory. We test the validity of a five-dimensional model of sources of self-efficacy, which we contrast with the traditional four-dimensional model based on Bandura's theoretical concepts. Methods: Co

  13. The development and validation of a five-factor model of Sources of Self-Efficacy in clinical nursing education

    NARCIS (Netherlands)

    Gloudemans, H.; Reynaert, W.; Schalk, R.; Braeken, J.

    2013-01-01

    Background: The aim of this study is to validate a newly developed nurses' self-efficacy sources inventory. We test the validity of a five-dimensional model of sources of self-efficacy, which we contrast with the traditional four-dimensional model based on Bandura’s theoretical

  14. Impacts of DNAPL Source Treatment: Experimental and Modeling Assessment of the Benefits of Partial DNAPL Source Removal

    Science.gov (United States)

    2009-09-01

    ... term research plan. At many hazardous waste sites contaminants reside in the subsurface as separate dense non-aqueous phase liquids (DNAPL). These ... (2006). Using Multilevel Samplers to Assess Ethanol Flushing and Enhanced Bioremediation at Former Sages Drycleaners. M.S. Thesis, University of ... nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was ...

  15. Dynamic Rupture Simulations Based on the Characterized Source Model of the 2011 Tohoku Earthquake

    Science.gov (United States)

    Tsuda, Kenichi; Iwase, Satoshi; Uratani, Hiroaki; Ogawa, Sachio; Watanabe, Takahide; Miyakoshi, Jun'ichi; Ampuero, Jean Paul

    2017-01-01

    The 2011 Off the Pacific Coast of Tohoku earthquake (Tohoku earthquake, Mw 9.0) occurred on the Japan Trench and caused a devastating tsunami. Studies of this earthquake have revealed complex features of its rupture process. In particular, the shallow parts of the fault (near the trench) hosted large slip and long-period seismic wave radiation, whereas the deep parts of the rupture (near the coast) hosted smaller slip and strong radiation of short-period seismic waves. Understanding such depth-dependent features of the rupture process of the Tohoku earthquake is necessary, as similar behavior may occur during future mega-thrust earthquakes in this and other regions. In this study, we investigate the "characterized source model" of the Tohoku earthquake through dynamic rupture simulations. This source model divides the fault plane into several parts characterized by different size and frictional strength (main asperity, background area, etc.) and is widely used in Japan for the prediction of strong ground motion and tsunami through kinematic rupture simulations. Our characterized source model of the Tohoku earthquake comprises a large shallow asperity with moderate frictional strength, small deep asperities with high frictional strength, a background area with low frictional strength, and an area with dynamic weakening close to the trench (a low dynamic friction coefficient as arising from, e.g., thermal pressurization). The results of our dynamic rupture simulation reproduce the main depth-dependent feature of the rupture process of the Tohoku earthquake. We also find that the width of the area close to the trench (equal to the distance from the trench to the shallow asperity, interpreted as the size of the accretionary prism) and the presence of dynamic weakening in this area have a significant influence on the final slip distribution. These results are useful to construct characterized source models for other subduction zones with different scales of the accretionary prism, such
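
    The frictional-strength contrasts between asperities and background in such simulations are commonly parameterized with a linear slip-weakening law. A minimal sketch of that law (parameter values illustrative, not those of the study):

    ```python
    import numpy as np

    def slip_weakening_stress(slip, tau_s, tau_d, d_c):
        """Linear slip-weakening friction: strength drops from the static level
        tau_s to the dynamic level tau_d over the critical slip distance d_c."""
        return np.where(slip < d_c, tau_s - (tau_s - tau_d) * slip / d_c, tau_d)

    slip = np.linspace(0.0, 3.0, 7)  # m
    # Illustrative parameters (MPa): a strong deep asperity vs a weak background.
    print(slip_weakening_stress(slip, tau_s=20.0, tau_d=5.0, d_c=1.0))
    print(slip_weakening_stress(slip, tau_s=8.0, tau_d=5.0, d_c=0.5))
    ```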

  16. User's Guide for the Agricultural Non-Point Source (AGNPS) Pollution Model Data Generator

    Science.gov (United States)

    Finn, Michael P.; Scheidt, Douglas J.; Jaromack, Gregory M.

    2003-01-01

    BACKGROUND Throughout this user guide, we refer to datasets that we used in conjunction with the development of this software for supporting cartographic research and producing the datasets to conduct research. However, this software can be used with these datasets or with more 'generic' versions of data of the appropriate type. For example, throughout the guide, we refer to national land cover data (NLCD) and digital elevation model (DEM) data from the U.S. Geological Survey (USGS) at a 30-m resolution, but any digital terrain model or land cover data at any appropriate resolution will produce results. Another key point to keep in mind is to use a consistent data resolution for all the datasets per model run. The U.S. Department of Agriculture (USDA) developed the Agricultural Nonpoint Source (AGNPS) pollution model of watershed hydrology in response to the complex problem of managing nonpoint sources of pollution. AGNPS simulates the behavior of runoff, sediment, and nutrient transport from watersheds that have agriculture as their prime use. The model operates on a cell basis and is a distributed-parameter, event-based model. The model requires 22 input parameters. Output parameters are grouped primarily by hydrology, sediment, and chemical output (Young and others, 1995). Elevation, land cover, and soil are the base data from which to extract the 22 input parameters required by AGNPS. For automatic parameter extraction, follow the general process described in this guide of extraction from the geospatial data through the AGNPS Data Generator to generate the input parameters required by the pollution model (Finn and others, 2002).

  17. A land use regression model incorporating data on industrial point source pollution

    Institute of Scientific and Technical Information of China (English)

    Li Chen; Yuming Wang; Peiwu Li; Yaqin Ji; Shaofei Kong; Zhiyong Li; Zhipeng Bai

    2012-01-01

    Advancing the understanding of the spatial aspects of air pollution in the city regional environment is an area whe