WorldWideScience

Sample records for source puff model

  1. Diffusion coefficient adaptive correction in Lagrangian puff model

    International Nuclear Information System (INIS)

    Tan Wenji; Wang Dezhong; Ma Yuanwei; Ji Zhilong

    2014-01-01

    The Lagrangian puff model is widely used in decision support systems for nuclear emergency management, and the diffusion coefficient is one of the key parameters affecting its results. An adaptive method is proposed in this paper to correct the diffusion coefficient in the Lagrangian puff model, with the aim of improving the accuracy of the calculated nuclide concentration distribution. The method uses detected concentration data, meteorological data and source release data to estimate the actual diffusion coefficient by least squares. The adaptive correction was evaluated against the Kincaid data of the Model Validation Kit (MVK) and compared with the traditional Pasquill-Gifford (P-G) diffusion scheme. The results indicate that the diffusion coefficient adaptive correction improves the accuracy of the Lagrangian puff model. (authors)
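
    The correction step itself is compact enough to sketch. Below is a minimal least-squares fit of a single dispersion parameter to detector readings against a Gaussian-puff forward model, assuming one ground-level puff and an isotropic horizontal spread; the one-parameter σ model, the function names and the use of scipy are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def puff_conc(sigma, q, xy, puff_xy):
        # Ground-level concentration of one Gaussian puff of mass q centred at
        # puff_xy; ground reflection is folded into the prefactor and the
        # vertical spread is tied to the horizontal one to keep one parameter.
        r2 = np.sum((xy - puff_xy) ** 2, axis=1)
        return q / (2.0 * np.pi * sigma ** 2) * np.exp(-r2 / (2.0 * sigma ** 2))

    def corrected_sigma(detectors, observed, q, puff_xy, sigma0):
        # Estimate the actual dispersion parameter from measured concentrations
        # by minimising the squared misfit, starting from the P-G value sigma0.
        resid = lambda s: puff_conc(s[0], q, detectors, puff_xy) - observed
        return least_squares(resid, x0=[sigma0], bounds=(1e-3, np.inf)).x[0]
    ```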

  2. A real-time PUFF-model for accidental releases in complex terrain

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Mikkelsen, T.; Larsen, S.E.; Troen, I.; Baas, A.F. de; Kamada, R.; Skupniewicz, C.; Schacher, G.

    1990-01-01

    LINCOM-RIMPUFF, a combined flow/puff model, was developed at Risø National Laboratory for the Vandenberg AFB Meteorology and Plume Dispersion Handbook and is suitable as-is for real-time response to emergency spills and vents of gases and radionuclides. LINCOM is a linear, diagnostic, spectral, potential-flow model which extends the Jackson-Hunt theory of non-hydrostatic, adiabatic wind flow over hills to the mesoscale domain. It is embedded in a weighted objective analysis (WOA) of real-time Vandenberg tower winds and may be used in ultra-high-speed lookup-table mode. The mesoscale dispersion model RIMPUFF is a flexible Gaussian puff model equipped with computationally efficient features for terrain- and stability-dependent dispersion parameterization, plume-rise formulas, inversion and ground-level reflection, and wet/dry (source) depletion. It can treat plume bifurcation in complex terrain by using a puff-splitting scheme. It allows the flow model to compute the larger-scale wind field, reserving turbulent diffusion calculations for the sub-grid scale. In diagnostic mode, toxic exposures are well assessed via the release of a single initial puff. With optimization, processing time for RIMPUFF should be on the order of 2 CPU minutes or less on a PC system. In prognostic mode with shifting winds, multiple puff releases may become necessary, thereby lengthening the processing time.
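
    The Gaussian puff formalism at the heart of models like RIMPUFF fits in a few lines. The sketch below is the generic textbook puff with a ground-reflection image source, not the RIMPUFF source code; puff bookkeeping, splitting and depletion are deliberately omitted.

    ```python
    import numpy as np

    def gaussian_puff(x, y, z, puff, sx, sy, sz):
        # Concentration at (x, y, z) from one puff of mass Q centred at
        # (xc, yc, zc); the image source at -zc models ground reflection.
        xc, yc, zc, Q = puff
        norm = Q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
        gx = np.exp(-(x - xc) ** 2 / (2.0 * sx ** 2))
        gy = np.exp(-(y - yc) ** 2 / (2.0 * sy ** 2))
        gz = (np.exp(-(z - zc) ** 2 / (2.0 * sz ** 2))
              + np.exp(-(z + zc) ** 2 / (2.0 * sz ** 2)))
        return norm * gx * gy * gz

    def concentration(x, y, z, puffs, sigmas):
        # Total field: superpose every puff currently carried by the wind model.
        return sum(gaussian_puff(x, y, z, p, *s) for p, s in zip(puffs, sigmas))
    ```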

  3. Computer modeling of a small neon gas-puff pinch

    International Nuclear Information System (INIS)

    Ullschmied, J.

    1996-01-01

    The macroscopic dynamics of a cylindrical gas-puff pinch and the conditions for radiative plasma collapse are studied using a one-dimensional ('mechanical') computer model. In addition to Joule heating, compressional heating, freezing of the magnetic field into the plasma and recombination losses, the model also accounts for the real temperature and density dependences of the radiative plasma losses. The results of the calculations are compared with experimental data from a small neon-puff z-pinch experiment operated at the Institute of Plasma Physics in Prague. (author). 7 figs., 11 refs

  4. Laser-Irradiated Gas Puff Target Plasma Modeling

    Czech Academy of Sciences Publication Activity Database

    Vrba, Pavel; Vrbová, M.

    2014-01-01

    Roč. 42, č. 10 (2014), s. 2600-2601 ISSN 0093-3813 R&D Projects: GA ČR GAP102/12/2043 Grant - others:GA MŠk(CZ) CZ.1.07/2.3.00/20.0092 Institutional support: RVO:61389021 Keywords : Gas puff laser plasma * water window radiation source * RHMD code Z* Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.101, year: 2014 http://ieeexplore.ieee.org

  5. A 'Puff' dispersion model for routine and accidental releases

    International Nuclear Information System (INIS)

    Grsic, Z.; Rajkovic, B.; Milutinovic, P.

    1999-01-01

    A puff dispersion model for accidental or routine releases is presented. The model was used as a constitutive part of an automatic meteorological station. All measured quantities are continuously displayed on a PC monitor in digital and graphical form, averaged every 10 minutes and sent to the civil information centre of Belgrade. The paper presents a simulation of pollutant plume dispersion from the Pancevo oil refinery on 18 April 1999. (author)

  6. Puff-trajectory modelling for long-duration releases

    International Nuclear Information System (INIS)

    Underwood, B.Y.

    1988-01-01

    This investigation considers some aspects of the interpretation and application of the puff-trajectory technique, which is increasingly being considered for use in accident consequence assessment. It first highlights the problems of applying the straight-line Gaussian model to releases of many hours' duration and the drawbacks of the ad hoc technique of multiple straight-line plumes, thereby pointing to the advantages of allowing curved trajectories. A number of fundamental questions are asked about the conventional puff-trajectory approach, such as: what is the justification for using ensemble-average spread parameters (σ values) in constructing particular realizations of the concentration field, and to what sampling time should these σ values correspond? These questions are answered in the present work by returning to basics: an interpretation of the puff-trajectory method is developed which establishes a correspondence between the omission of wind-field fluctuations with period below a given value in the generation of trajectories and the achievable spatial resolution of the estimates of time-integrated concentration. In application to accident consequence assessment, this focuses attention on what spatial resolution is necessary for particular consequence types or is implicit in the computational discretization employed.

  7. Debris-free soft x-ray source with gas-puff target

    Science.gov (United States)

    Ni, Qiliang; Chen, Bo; Gong, Yan; Cao, Jianlin; Lin, Jingquan; Lee, Hongyan

    2001-12-01

    We have been developing a debris-free laser plasma light source with a gas-puff target system whose nozzle is driven by a piezoelectric crystal membrane. The gas-puff target system can use gases such as CO2, O2 or gas mixtures, according to the experiment. In contrast to soft X-ray sources using a metal target, no evidence of debris production was found after several hours of continuous laser interaction with gas from the gas-puff target system. The debris-free soft X-ray source is intended for soft X-ray projection lithography research at the State Key Laboratory of Applied Optics. Strong emission from CO2, O2 and Kr plasmas is observed.

  8. MESOI Version 2.0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and at the top of the atmospheric mixing layer. Material is deposited on the surface using a source-depletion dry deposition model and a washout-coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first-order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical basis of MESOI Version 2.0. The second part contains the MESOI computer code. The programs were written in ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables
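
    The parent-daughter treatment mentioned above (first-order decay of a primary species plus ingrowth and decay of a single daughter) has a standard closed form, the two-member Bateman solution. A minimal per-time-step sketch of that mathematics, not MESOI's FORTRAN:

    ```python
    import numpy as np

    def decay_step(Np, Nd, lam_p, lam_d, dt):
        # Analytic two-member Bateman update over a step dt: the parent decays
        # with rate lam_p; the daughter grows in from parent decays and itself
        # decays with rate lam_d.
        Np_new = Np * np.exp(-lam_p * dt)
        if np.isclose(lam_p, lam_d):
            ingrowth = Np * lam_p * dt * np.exp(-lam_p * dt)  # equal-rate limit
        else:
            ingrowth = (Np * lam_p / (lam_d - lam_p)
                        * (np.exp(-lam_p * dt) - np.exp(-lam_d * dt)))
        return Np_new, Nd * np.exp(-lam_d * dt) + ingrowth
    ```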

  9. Puff models for simulation of fugitive radioactive emissions in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Camila P. da, E-mail: camila.costa@ufpel.edu.b [Universidade Federal de Pelotas (UFPel), RS (Brazil). Inst. de Fisica e Matematica. Dept. de Matematica e Estatistica; Pereira, Ledina L., E-mail: ledinalentz@yahoo.com.b [Universidade do Extremo Sul Catarinense (UNESC), Criciuma, SC (Brazil); Vilhena, Marco T., E-mail: vilhena@pq.cnpq.b [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tirabassi, Tiziano, E-mail: t.tirabassi@isac.cnr.i [Institute of Atmospheric Sciences and Climate (CNR/ISAC), Bologna (Italy)

    2009-07-01

    A puff model for the dispersion of material from fugitive radioactive emissions is presented. For vertical diffusion, the model is based on general techniques for solving the time-dependent advection-diffusion equation: the ADMM (Advection Diffusion Multilayer Method) and the GILTT (Generalized Integral Laplace Transform Technique). The first is an analytical solution based on a discretization of the Atmospheric Boundary Layer (ABL) into sub-layers, in each of which the advection-diffusion equation is solved by the Laplace transform technique; the solution is given in integral form. The second is a well-known hybrid method that has solved a wide class of direct and inverse problems, mainly in heat transfer and fluid mechanics; its solution is given in series form. Comparisons between values predicted by the models and experimental ground-level concentrations are shown. (author)

  10. Puff models for simulation of fugitive radioactive emissions in atmosphere

    International Nuclear Information System (INIS)

    Costa, Camila P. da; Vilhena, Marco T.

    2009-01-01

    A puff model for the dispersion of material from fugitive radioactive emissions is presented. For vertical diffusion, the model is based on general techniques for solving the time-dependent advection-diffusion equation: the ADMM (Advection Diffusion Multilayer Method) and the GILTT (Generalized Integral Laplace Transform Technique). The first is an analytical solution based on a discretization of the Atmospheric Boundary Layer (ABL) into sub-layers, in each of which the advection-diffusion equation is solved by the Laplace transform technique; the solution is given in integral form. The second is a well-known hybrid method that has solved a wide class of direct and inverse problems, mainly in heat transfer and fluid mechanics; its solution is given in series form. Comparisons between values predicted by the models and experimental ground-level concentrations are shown. (author)

  11. An efficient approach to transient turbulent dispersion modeling by CFD-statistical analysis of a many-puff system

    International Nuclear Information System (INIS)

    Ching, W-H; Leung, Michael K.H.; Leung, Dennis Y.C.

    2009-01-01

    Transient turbulent dispersion phenomena can be found in various practical problems, such as the accidental release of toxic chemical vapor and the airborne transmission of infectious droplets. Computational fluid dynamics (CFD) is an effective tool for analyzing such transient dispersion behaviors. However, the transient CFD analysis is often computationally expensive and time consuming. In the present study, a computationally efficient CFD-statistical hybrid modeling method has been developed for studying transient turbulent dispersion. In this method, the source emission is represented by emissions of many infinitesimal puffs. Statistical analysis is performed to obtain first the statistical properties of the puff trajectories and subsequently the most probable distribution of the puff trajectories that represent the macroscopic dispersion behaviors. In two case studies of ambient dispersion, the numerical modeling results obtained agree reasonably well with both experimental measurements and conventional k-ε modeling results published in the literature. More importantly, the proposed many-puff CFD-statistical hybrid modeling method effectively reduces the computational time by two orders of magnitude.
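
    The statistical half of the hybrid method can be illustrated with a surrogate: emit many infinitesimal puffs, build the trajectory ensemble, and histogram it to obtain the most probable (ensemble-mean) distribution. In the paper the velocity statistics come from the CFD solution; here a plain Gaussian random walk stands in for them, and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def puff_trajectories(n_puffs, n_steps, dt, u, sigma_w):
        # Each puff is advected by the mean wind u and kicked by random
        # turbulent velocities of strength sigma_w (a stand-in for the
        # CFD-derived statistics).
        steps = u * dt + sigma_w * np.sqrt(dt) * rng.standard_normal(
            (n_puffs, n_steps, 3))
        return np.cumsum(steps, axis=1)  # positions, shape (puff, time, xyz)

    # Ensemble statistics: histogram the puff positions at a chosen time to
    # estimate the macroscopic concentration distribution.
    traj = puff_trajectories(10_000, 200, 0.1, np.array([1.0, 0.0, 0.0]), 0.3)
    hist, edges = np.histogramdd(traj[:, -1, :], bins=25)
    ```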

  12. Puff-on-cell model for computing pollutant transport and diffusion

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1975-01-01

    Most finite-difference methods of modeling pollutant dispersion have been shown to introduce numerical pseudodiffusion, which can be much larger than the true diffusion in the fluid flow and can even generate negative values in the predicted pollutant concentrations. Two attempts to minimize the effect of pseudodiffusion are discussed, with emphasis on the particle-in-cell (PIC) method of Sklarew. This paper describes a method that replaces Sklarew's numerous particles in a grid volume with a single Gaussian puff parameterizing the subgrid-scale concentration, thereby also avoiding the computation of moments required in the model of Egan and Mahoney.
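
    The pseudodiffusion at issue is easy to reproduce: a first-order upwind scheme visibly smears a sharp pulse that the exact solution merely translates. A small self-contained demonstration (not taken from the paper):

    ```python
    import numpy as np

    nx, c = 200, 0.5                      # grid size, Courant number u*dt/dx
    phi = np.zeros(nx)
    phi[20:40] = 1.0                      # sharp initial pulse
    for _ in range(200):                  # first-order upwind advection
        phi[1:] = phi[1:] - c * (phi[1:] - phi[:-1])
    # The exact solution is the pulse shifted by c*200 = 100 cells and still
    # sharp; the upwind result is strongly smeared. That spurious spreading is
    # the numerical pseudodiffusion, with effective diffusivity ~ u*dx*(1-c)/2.
    ```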

  13. Assessment of corneal dynamics with high-speed swept source Optical Coherence Tomography combined with an air puff system

    Science.gov (United States)

    Alonso-Caneiro, David; Karnowski, Karol; Kaluzny, Bartlomiej J.; Kowalczyk, Andrzej; Wojtkowski, Maciej

    2011-07-01

    We present a novel method and instrument for in vivo imaging and measurement of human corneal dynamics during an air puff. The instrument is based on high-speed swept source optical coherence tomography (ssOCT) combined with a custom-adapted air puff chamber from a non-contact tonometer, which uses an air stream to deform the cornea in a non-invasive manner. During the short period of time in which the deformation takes place, the ssOCT acquires multiple A-scans in time (an M-scan) at the center of the air puff, allowing observation of the dynamics of the anterior and posterior corneal surfaces as well as the anterior lens surface. The measured dynamics are driven by the biomechanical properties of the human eye as well as by its intraocular pressure. Thus, analysis of the M-scan may provide useful information about the biomechanical behavior of the anterior segment during the applanation caused by the air puff. An initial set of controlled clinical experiments is presented to demonstrate the performance of the instrument and its potential applicability for further understanding eye biomechanics and intraocular pressure measurements. Limitations and possibilities of the new apparatus are discussed.

  14. Development of intense pulsed heavy ion beam diode using gas puff plasma gun as ion source

    International Nuclear Information System (INIS)

    Ito, H.; Higashiyama, M.; Takata, S.; Kitamura, I.; Masugata, K.

    2006-01-01

    A magnetically insulated ion diode with an active ion source, a gas-puff plasma gun, has been developed in order to generate a high-intensity pulsed heavy ion beam for semiconductor implantation processes and the surface modification of materials. The nitrogen plasma produced by the plasma gun is injected into the acceleration gap of the diode with the external magnetic field system. The ion diode is operated at a diode voltage of ≈200 kV, a diode current of ≈2 kA and a pulse duration of ≈150 ns. A new acceleration-gap configuration for focusing the ion beam has been designed in order to enhance the ion current density. The experimental results show that the ion current density is enhanced by a factor of 2, the beam reaching an ion current density of 27 A/cm². In addition, a coaxial-type Marx generator with a voltage of 200 kV and a current of 15 kA has been developed and installed in the focusing ion diode, yielding an ion beam with a current density of ≈54 A/cm². To produce metallic ion beams, an ion source based on an aluminum wire discharge has been developed, and an aluminum plasma with an ion current density of ∼70 A/cm² has been measured. (author)

  15. Characteristics of the magnetic wall reflection model on ion acceleration in gas-puff z pinch

    International Nuclear Information System (INIS)

    Nishio, M.; Takasugi, K.

    2013-01-01

    The magnetic wall reflection model was examined by numerical simulation of particle trajectories. The model applies to ions accelerated by a current-independent mechanism. The trajectory calculations showed an angular dependence of the highest velocities of the accelerated particles. This characteristic belongs to the magnetic wall reflection model and not to other current-independent acceleration mechanisms. Thomson parabola measurements of accelerated ions produced in gas-puff z-pinch experiments were carried out to verify the angular dependence. (author)

  16. Integration of plume and puff diffusion models/application of CFD

    Science.gov (United States)

    Mori, Akira

    The clinical symptoms of patients and other evidence from a gas poisoning accident inside an industrial building strongly suggested an abrupt influx of engine exhaust from a construction vehicle operating outside in the open air. However, the observed high gas concentration could not be well explained by any conventional steady-state gas diffusion model. The author used an unsteady-state continuous puff model to simulate the time-wise changes in the air stream while the pollutant gas was continuously emitted, and successfully reproduced the observed phenomena. The author demonstrates that this diffusion formula can be solved analytically using the error function as long as the change in wind velocity is stepwise, and clarifies the differences between the unsteady and steady states and their convergence profiles. The relationship between the puff and plume models is also discussed. The case study included a computational fluid dynamics (CFD) analysis to estimate the steady-state air stream and gas concentration pattern in the affected area. It is well known that clear definition of the boundary conditions is key to successful CFD analysis. The author describes a two-step use of CFD: a first step to define the boundary conditions and a second to determine the steady-state air stream and gas concentration pattern.
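
    As an example of the error-function structure such unsteady solutions take, the classic one-dimensional solution of the advection-diffusion equation for a source switched on at t = 0 (the Ogata-Banks form) can be coded directly. It is a textbook analogue offered for orientation, not the author's exact formula for stepwise wind changes.

    ```python
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, u, D, c0):
        # 1-D advection-diffusion with a constant boundary input at x = 0: the
        # erfc terms describe the advancing unsteady front, and the profile
        # relaxes to the steady value c0 behind it as t grows. (For large
        # u*x/D the second term should be dropped to avoid overflow.)
        a = 2.0 * np.sqrt(D * t)
        return 0.5 * c0 * (erfc((x - u * t) / a)
                           + np.exp(u * x / D) * erfc((x + u * t) / a))
    ```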

  17. Atmospheric Dispersion Simulation for Level 3 PSA at Ulchin Nuclear Site using a PUFF model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Jun; Han, Seok-Jung; Jeong, Hyojoon; Jang, Seung-Cheol [KAERI, Daejeon (Korea, Republic of)]

    2015-05-15

    Air dispersion prediction is key in level 3 PSA for predicting radiation releases into the environment and preparing an effective evacuation strategy as a basis of emergency preparedness. To predict atmospheric dispersion accurately, the specific conditions of the release location should be considered. Among the various level 3 PSA tools, MACCS2 is one of the most widely used in many countries, including Korea. Owing to the characteristics of the environmental conditions in Korea, it should be demonstrated that the tool can appropriately represent the conditions of Korean nuclear sites. Because all Korean nuclear power plants are located on the coast, sea and land breezes might be a significant factor. The objective of this work is to simulate atmospheric dispersion at the Ulchin nuclear site in Korea using a PUFF model and to generate data for comparison with those of a PLUME model. Each nuclear site has its own atmospheric dispersion characteristics; in Korea in particular, the coastal location of the sites means that sea- and land-breeze effects are expected to be relatively strong. In this work, atmospheric dispersion at the Ulchin site was simulated to evaluate the effect of sea and land breezes in the four seasons. The simulation results showed that the change of wind direction with time has a large effect on atmospheric dispersion. If the result of a PLUME model is more conservative than the most severe case of a PUFF model, the PLUME model could be used for Korean nuclear sites for safety assessment.

  18. Application of Lagrangian puff model in the early stage of a nuclear emergency

    International Nuclear Information System (INIS)

    Yu Qi; Liu Yuanzhong

    2000-01-01

    The effect of changes in intervention levels and meteorological conditions on early emergency countermeasures is analysed for nuclear power plant emergencies. A Lagrangian puff model, RIMPUFF, is used to predict dose distributions under stable and unstable meteorological conditions. The release scenario for PWR6 is used as an example to determine emergency areas for different intervention levels. The predictions show an evacuation area radius of 5 km and radii for sheltering and intake of stable iodine of 10 km each. The difference between the emergency areas determined by the intervention levels given in HAF0703/NEPA9002 and in IAEA Safety Series No. 109 lies only in the sheltering area, which is much smaller using the IAEA guidelines.

  19. A compact, quasi-monochromatic laser-plasma EUV source based on a double-stream gas-puff target at 13.8 nm wavelength

    Czech Academy of Sciences Publication Activity Database

    Wachulak, P.W.; Bartnik, A.; Fiedorowicz, H.; Feigl, T.; Jarocki, R.; Kostecki, J.; Rudawski, P.; Sawicka, Magdalena; Szczurek, M.; Szczurek, A.; Zawadzki, Z.

    2010-01-01

    Roč. 100, č. 3 (2010), 461-469 ISSN 0946-2171 Institutional research plan: CEZ:AV0Z10100523 Keywords : laser-plasma * EUV source * gas puff target * elliptical multilayer mirror * table-top setup Subject RIV: BH - Optics, Masers, Lasers Impact factor: 2.239, year: 2010

  20. A parametric description of a skewed puff in the diabatic surface layer

    International Nuclear Information System (INIS)

    Mikkelsen, T.

    1982-10-01

    The spreading of passive material in the stable, neutral and unstable surface layer from an instantaneous ground source is parameterized in a form appropriate for use with an operational puff diffusion model. (author)

  1. Comparison of measured and modeled gas-puff emissions on Alcator C-Mod

    Science.gov (United States)

    Baek, Seung-Gyou; Terry, J. L.; Stotler, D. P.; Labombard, B. L.; Brunner, D. F.

    2017-10-01

    Understanding neutral transport in tokamak boundary plasmas is important because of its possible effects on the pedestal and scrape-off layer (SOL). On Alcator C-Mod, measured neutral line emissions from externally puffed deuterium and helium gases are compared with the synthetic results of a neutral transport code, DEGAS 2. The injected gas flow rate and the camera response are absolutely calibrated. Time-averaged SOL density and temperature profiles are input to a steady-state simulation. An updated helium atomic model is employed in DEGAS 2. Good agreement is found for the Dα peak brightness and profile shape. However, the measured helium I line brightness is found to be lower than the simulation results by roughly a factor of three over a wide range of density, particularly in the far SOL region. Two possible causes for this discrepancy are reviewed. First, local cooling due to the gas puff may suppress the line emission. Second, time-dependent turbulence effects may impact the helium neutral transport. Unlike deuterium atoms, which gain energy from charge exchange and dissociation processes, helium neutrals remain cold and have a relatively short mean free path, known to make them prone to turbulence based on the Kubo number criterion. Supported by USDoE awards: DE-FC02-99ER54512, DE-SC0014251, and DE-AC02-09CH11466.

  2. Applications of the PUFF model to forecasts of volcanic clouds dispersal from Etna and Vesuvio

    Science.gov (United States)

    Daniele, P.; Lirer, L.; Petrosino, P.; Spinelli, N.; Peterson, R.

    2009-05-01

    PUFF is a numerical volcanic ash tracking model developed to simulate the behaviour of ash clouds in the atmosphere. The model uses wind field data provided by meteorological models and adds dispersion and sedimentation physics to predict the evolution of the cloud once it reaches thermodynamic equilibrium with the atmosphere. The software is intended for use in emergency response situations during an eruption to quickly forecast the position and trajectory of the ash cloud in the near (~1-72 h) future. In this paper, we describe the first application of the PUFF model in forecasting volcanic ash dispersion from the Etna and Vesuvio volcanoes. We simulated the daily occurrence of an eruptive event of Etna utilizing ash cloud parameters describing the paroxysm of 22nd July 1998 and wind field data for the 1st September 2005-31st December 2005 time span from the Global Forecast System (GFS) model at the approximate location of the Etna volcano (38N 15E). The results show that volcanic ash particles are dispersed in a range of directions in response to the changing wind field at various altitudes and that the ash clouds are mainly dispersed toward the east and southeast, although the exact trajectory is highly variable and can change within a few hours. We tested the sensitivity of the model to the mean particle grain size and found that an increased concentration of ash particles in the atmosphere results when the mean grain size is decreased. Similarly, a dramatic variation in dispersion results when the logarithmic standard deviation of the particle-size distribution is changed. Additionally, we simulated the occurrence of an eruptive event at both Etna and Vesuvio, using the same parameters describing the initial volcanic plume, and wind field data recorded for 1st September 2005, at approximately 38N 15E for Etna and 41N 14E for Vesuvio. The comparison of the two simulations indicates that identical eruptions occurring at the same time at the two volcanic centres
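
    The model's core loop, advecting ash particles in a layered wind field while adding turbulent spreading and gravitational settling, can be sketched in a few lines. The interpolated winds, constant diffusivity and single fall speed below are placeholder assumptions; PUFF itself ingests GFS fields and uses grain-size-dependent fall velocities.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def advect_ash(p, winds, z_levels, dt, K, v_settle):
        # One time step for an ensemble of ash particles p (N x 3 positions in
        # metres): wind interpolated to each particle's altitude, a random-walk
        # kick of diffusivity K, and settling at fall speed v_settle.
        u = np.interp(p[:, 2], z_levels, winds[:, 0])  # zonal wind vs height
        v = np.interp(p[:, 2], z_levels, winds[:, 1])  # meridional wind
        kick = np.sqrt(2.0 * K * dt) * rng.standard_normal(p.shape)
        p = p + np.column_stack([u * dt, v * dt, -v_settle * dt]) + kick
        p[:, 2] = np.maximum(p[:, 2], 0.0)             # deposit at the ground
        return p
    ```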

  3. Determination of the physical values of a plasma puff by analysis of the diamagnetic signals. 1. part: expansion model for the puff. 2. part: comparison of experimental results with the expansion model for the plasma puff

    International Nuclear Information System (INIS)

    Jacquinot, J.; Leloup, Ch.; Waelbroeck, F.; Poffe, J.P.

    1964-01-01

    The flow of a dense plasma puff along the axis of a uniform magnetic field is examined under the following hypotheses: the axial distribution of the line density can be described at any time by a Gaussian function whose characteristic parameter is independent of the distance from the axis of the system, and the β ratio is less than 0.6. An approximate solution of the magnetohydrodynamic equations is obtained. The evolution of the characteristic properties of the plasma (local velocity, temperature and density) can be calculated from a set of equations involving five plasma parameters. A method for determining these parameters is described; it uses five pieces of information picked up from the diamagnetic signals induced by the plasma in a set of four compensated magnetic loops. (authors)

  4. Evaluation of a new method for puff arrival time as assessed through wind tunnel modelling

    Czech Academy of Sciences Publication Activity Database

    Chaloupecká, Hana; Jaňour, Zbyněk; Mikšovský, J.; Jurčáková, Klára; Kellnerová, Radka

    2017-01-01

    Roč. 111, October (2017), s. 194-210 ISSN 0957-5820 R&D Projects: GA ČR GA15-18964S Institutional support: RVO:61388998 Keywords : wind tunnel * short-term gas leakage * puff Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 2.905, year: 2016 https://www.sciencedirect.com/science/article/pii/S0957582017302203

  5. Heuristic drift-based model of the power scrape-off width in low-gas-puff H-mode tokamaks

    International Nuclear Information System (INIS)

    Goldston, R.J.

    2012-01-01

    A heuristic model for the plasma scrape-off width in low-gas-puff tokamak H-mode plasmas is introduced. ∇B and curvature drifts into the scrape-off layer (SOL) are balanced against near-sonic parallel flows out of the SOL, to the divertor plates. The overall particle flow pattern posited is a modification, for open field lines, of Pfirsch–Schlüter flows to include order-unity sinks to the divertors. These assumptions result in an estimated SOL width of ∼2aρ_p/R. They also result in a first-principles calculation of the particle confinement time of H-mode plasmas, qualitatively consistent with experimental observations. It is next assumed that anomalous perpendicular electron thermal diffusivity is the dominant source of heat flux across the separatrix, investing the SOL width, derived above, with heat from the main plasma. The separatrix temperature is calculated based on a two-point model balancing power input to the SOL with Spitzer–Härm parallel thermal conduction losses to the divertor. This results in a heuristic closed-form prediction for the power scrape-off width that is in reasonable quantitative agreement, both in absolute magnitude and in scaling, with recent experimental data. Further work should include full numerical calculations, including all magnetic and electric drifts, as well as more thorough comparison with experimental data.
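
    For orientation, the quoted width estimate is easy to evaluate numerically. The gyroradius convention used below (deuterons at a sound-like speed) is an assumption of this sketch, not fixed by the abstract, and shifts the answer at the factor-of-two level.

    ```python
    import numpy as np

    e, m_D = 1.602e-19, 3.344e-27              # charge [C], deuteron mass [kg]

    def heuristic_sol_width(T_sep_eV, B_pol, a, R):
        # Evaluate lambda ~ 2*a*rho_p/R with rho_p the poloidal gyroradius
        # taken at a sound-like speed sqrt(2*T/m); conventions vary.
        c_s = np.sqrt(2.0 * T_sep_eV * e / m_D)
        rho_p = m_D * c_s / (e * B_pol)
        return 2.0 * a * rho_p / R

    # T_sep = 100 eV, B_pol = 0.3 T, a = 0.6 m, R = 1.7 m -> a few millimetres
    print(heuristic_sol_width(100.0, 0.3, 0.6, 1.7))
    ```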

  6. Buffer regulation of calcium puff sequences

    International Nuclear Information System (INIS)

    Fraiman, Daniel; Dawson, Silvina Ponce

    2014-01-01

    Puffs are localized Ca2+ signals that arise in oocytes in response to inositol 1,4,5-trisphosphate (IP3). They are the result of the liberation of Ca2+ from the endoplasmic reticulum through the coordinated opening of IP3 receptor/channels clustered at a functional release site. The presence of buffers that trap Ca2+ provides a mechanism that enriches the spatio-temporal dynamics of cytosolic calcium. The expression of different types of buffers along the cell's life provides a tool with which Ca2+ signals and their responses can be modulated. In this paper we extend the stochastic model of a cluster of IP3R-Ca2+ channels introduced previously to elucidate the effect of buffers on sequences of puffs at the same release site. We obtain analytically the probability laws of the interpuff time and of the number of channels that participate in the puffs. Furthermore, we show that under typical experimental conditions the effect of buffers can be accounted for in terms of a simple inhibiting function. Hence, by exploring different inhibiting functions we are able to study the effect of a variety of buffers on the puff size and interpuff time distributions. We find the somewhat counter-intuitive result that the addition of a fast Ca2+ buffer can increase the average number of channels that participate in a puff. (paper)

  7. Buffer regulation of calcium puff sequences.

    Science.gov (United States)

    Fraiman, Daniel; Dawson, Silvina Ponce

    2014-02-01

    Puffs are localized Ca(2+) signals that arise in oocytes in response to inositol 1,4,5-trisphosphate (IP3). They are the result of the liberation of Ca(2+) from the endoplasmic reticulum through the coordinated opening of IP3 receptor/channels clustered at a functional release site. The presence of buffers that trap Ca(2+) provides a mechanism that enriches the spatio-temporal dynamics of cytosolic calcium. The expression of different types of buffers along the cell's life provides a tool with which Ca(2+) signals and their responses can be modulated. In this paper we extend the stochastic model of a cluster of IP3R-Ca(2+) channels introduced previously to elucidate the effect of buffers on sequences of puffs at the same release site. We obtain analytically the probability laws of the interpuff time and of the number of channels that participate in the puffs. Furthermore, we show that under typical experimental conditions the effect of buffers can be accounted for in terms of a simple inhibiting function. Hence, by exploring different inhibiting functions we are able to study the effect of a variety of buffers on the puff size and interpuff time distributions. We find the somewhat counter-intuitive result that the addition of a fast Ca(2+) buffer can increase the average number of channels that participate in a puff.

  8. Puff-plume atmospheric deposition model for use at SRP in emergency-response situations

    International Nuclear Information System (INIS)

    Garrett, A.J.; Murphy, C.E. Jr.

    1981-05-01

    An atmospheric transport and diffusion model developed for real-time calculation of the location and concentration of toxic or radioactive materials during an accidental release was improved by including deposition calculations.

  9. Drift-based Model for Power Scrape-off Width in Low-Gas-Puff H-mode Plasmas: Theory and Implications

    Energy Technology Data Exchange (ETDEWEB)

    Goldston, R., E-mail: rgoldston@pppl.gov [Princeton Plasma Physics Laboratory, Princeton (United States)]

    2012-09-15

    Full text: A heuristic model for the plasma scrape-off width in low-gas-puff tokamak H-mode plasmas is introduced. ∇B and curvature drifts into the scrape-off layer (SOL) are balanced against near-sonic parallel flows out of the SOL, to the divertor plates. These assumptions result in an estimated SOL width of the order of the poloidal gyroradius. It is next assumed that anomalous perpendicular electron thermal diffusivity is the dominant source of heat flux across the separatrix, investing the SOL width, derived above, with heat from the main plasma. The separatrix temperature is then calculated based on a two-point model balancing power input to the SOL with Spitzer-Härm parallel thermal conduction losses to the divertor. This results in a heuristic closed-form prediction for the power scrape-off width that is in quantitative agreement, both in absolute magnitude and in scaling, with recent experimental data. The applicability of the Spitzer-Härm model to this regime can be questioned at the lowest densities, where the presence of a sheath can raise the divertor target electron temperature. A more general two-point model including a finite ratio of divertor target to upstream electron temperature shows only a 5% effect on the SOL width with a target temperature f_T = 75% of upstream, so this effect is likely negligible in experimentally relevant regimes. Achieving the near-sonic flows measured experimentally, and assumed in this model, sets requirements on the ratio of upstream to total SOL particle sources relative to the square root of the ratio of target to upstream temperature. As a result, very-high-recycling regimes may allow significantly wider power fluxes. The Pfirsch-Schlüter model for equilibrium flows has been modified to allow near-sonic flows, appropriate for gradient scale lengths of order the poloidal gyroradius. This results in a new quadrupole flow pattern that amplifies the usual P-S flows at the outer midplane, while reducing them at the inner

  10. Simulations of Ar gas-puff Z-pinch radiation sources with double shells and central jets on the Z generator

    Science.gov (United States)

    Tangri, V.; Harvey-Thompson, A. J.; Giuliani, J. L.; Thornhill, J. W.; Velikovich, A. L.; Apruzese, J. P.; Ouart, N. D.; Dasgupta, A.; Jones, B.; Jennings, C. A.

    2016-10-01

    Radiation-magnetohydrodynamic simulations using the non-local thermodynamic equilibrium Mach2-Tabular Collisional-Radiative Equilibrium code in (r, z) geometry are performed for two pairs of recent Ar gas-puff Z-pinch experiments on the refurbished Z generator with an 8 cm diameter nozzle. One pair of shots had an outer-to-inner shell mass ratio of 1:1.6 and a second pair had a ratio of 1:1. In each pair, one of the shots had a central jet. The experimental trends in the Ar K-shell yield and power are reproduced in the calculations. However, for the double-shell puff with a 1:1 mass ratio and no central jet, the K-shell yield and power are significantly lower than in the other three shots. Further simulations of a hypothetical experiment with the same relative density profile as this configuration, but higher total mass, show that the coupled energy from the generator and the K-shell yield can be increased to the levels achieved in the other three configurations, but not the K-shell power. Based on various measures of effective plasma radius, the compression in the 1:1 mass ratio, no-central-jet case is found to be weaker because the plasma inside the magnetic piston is hotter and of lower density. Because of the reduced density, and the correspondingly reduced radiative cooling (which is proportional to the square of the density), the core plasma is hotter. Consequently, for the 1:1 outer-to-inner shell mass ratio, the load mass controls the yield and the central jet controls the power.

  11. Mean field strategies induce unrealistic nonlinearities in calcium puffs

    Directory of Open Access Journals (Sweden)

    Guillermo Solovey

    2011-08-01

    Mean field models are often useful approximations to biological systems, but sometimes they can yield misleading results. In this work, we compare mean field approaches with stochastic models of intracellular calcium release. In particular, we concentrate on calcium signals generated by the concerted opening of several clustered channels (calcium puffs). To this end we simulate calcium puffs numerically and then try to reproduce features of the resulting calcium distribution using mean field models where all the channels open and close simultaneously. We show that an unrealistic nonlinear relationship between the current and the number of open channels is needed to reproduce the simulated puffs. Furthermore, a single-channel current which is five times smaller than that of the stochastic simulations is also needed. Our study sheds light on the importance of the stochastic kinetics of the calcium release channel activity for estimating the release fluxes.

  12. Compositional profiling and sensorial analysis of multi-wholegrain extruded puffs as affected by fructan inclusion.

    Science.gov (United States)

    Handa, C; Goomer, S

    2015-09-01

    Rice grits, corn grits, pulse and the wholegrains finger millet and sorghum were used to produce multigrain extruded puffs on a single-screw extruder. The effect of the inclusion of the fructan fructooligosaccharide in multi-wholegrain (MWG) extruded puffs was examined. MWG fructan-enriched puffs had a 450 % higher dietary fiber content than the control puff (CP); these puffs can be categorized as a 'good source' of fiber, as they supply 17.2 % of the daily value (DV) of fiber. The puffs were rated 8.1 ± 0.6, 8.3 ± 0.7, 8.1 ± 0.6, 7.5 ± 0.5 and 8.2 ± 0.6 for color, flavor, texture, appearance and overall acceptability, respectively. The scores for all the attributes were found to be not significantly different (p < 0.05), suggesting that the nutritional quality of extruded puffs could be improved by the inclusion of fructans without loss of sensory acceptability.

  13. Dosage-based parameters for characterization of puff dispersion results.

    Science.gov (United States)

    Berbekar, Eva; Harms, Frank; Leitl, Bernd

    2015-01-01

    A set of parameters is introduced to characterize the dispersion of puff releases based on the measured dosage. These parameters are the dosage, peak concentration, arrival time, peak time, leaving time, ascent time, descent time and duration. Dimensionless numbers for the scaling of the parameters are derived from dimensional analysis. The dimensionless numbers are tested and confirmed based on a statistically representative wind tunnel dataset. The measurements were carried out in a 1:300 scale model of the Central Business District in Oklahoma City. Additionally, the effect of the release duration on the puff parameters is investigated. Copyright © 2014 Elsevier B.V. All rights reserved.
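
    The parameter set lends itself to a compact implementation. In the sketch below, arrival and leaving times are taken where the cumulative dosage crosses fixed low/high fractions of the total; those 5%/95% thresholds are an illustrative assumption, not necessarily the paper's definitions.

    ```python
    import numpy as np

    def puff_parameters(t, c, lo=0.05, hi=0.95):
        # Dosage-based puff parameters from one concentration time series c(t).
        dose = np.trapz(c, t)                        # total dosage
        cum = np.cumsum(0.5 * (c[1:] + c[:-1]) * np.diff(t))
        t_mid = 0.5 * (t[1:] + t[:-1])
        t_arr = np.interp(lo * dose, cum, t_mid)     # arrival time
        t_leave = np.interp(hi * dose, cum, t_mid)   # leaving time
        i_pk = int(np.argmax(c))
        return dict(dosage=dose, peak=c[i_pk], arrival=t_arr,
                    peak_time=t[i_pk], leaving=t_leave,
                    ascent=t[i_pk] - t_arr, descent=t_leave - t[i_pk],
                    duration=t_leave - t_arr)
    ```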

  14. Modeling of ns and ps laser-induced soft X-ray sources using a nitrogen gas puff target

    Czech Academy of Sciences Publication Activity Database

    Vrba, Pavel; Vrbová, M.; Zakharov, S.V.; Zakharov, V.S.

    2014-01-01

    Roč. 21, č. 7 (2014), 073301-073301 ISSN 1070-664X R&D Projects: GA ČR GAP102/12/2043; GA MŠk(CZ) LG13029 Institutional support: RVO:61389021 Keywords : Capillary Z-pinch * Water window radiation source * RHMD Code Z* Subject RIV: BH - Optics, Masers, Lasers Impact factor: 2.142, year: 2014 http://scitation.aip.org/content/aip/journal/pop/21/7/10.1063/1.4887295

  15. Polycyclic aromatic hydrocarbons (PAH), nickel and vanadium in air dust from Bahrein (Persian Gulf): Measurements and Puff model calculations for this area during the burning of the oil wells in Kuwait

    International Nuclear Information System (INIS)

    Vaessen, H.A.M.G.; Wilbers, A.A.M.M.; Jekel, A.A.; Van Pul, W.A.J.; Van der Meulen, A.; Bloemen, H.J.Th.; De Boer, J.L.M.

    1993-01-01

    When Kuwait's oil wells were on fire in 1991, air particulate matter (inhalable fraction) was sampled in Bahrain (soot clouds were over the region at that time) and analysed for PAHs, nickel (Ni) and vanadium (V). In the same period, Puff-model calculations were carried out to forecast the dispersion of the combustion products and their impact on the environment in the Persian Gulf region. Based on the outcome of the model calculations and the analytical findings, the major conclusions are that: (a) the PAH contamination level of the air particulate matter is equal to or below that found for rural areas in the Netherlands and on average one order of magnitude below the findings of the model calculations; (b) there is no link between the air particulate matter content and the PAH contamination measured, the benzo(a)pyrene fraction of the PAH contamination being a surprisingly constant 10-14%; (c) the strongly significant correlation of the Ni and V contents, both mutually and with the air particulate matter content, strongly suggests a common origin, i.e. the burning oil wells in Kuwait; (d) the air particulate matter content measured is one to two orders of magnitude above the findings of the model calculations; (e) the emission factors applied in the Puff-model calculations most probably match the combustion conditions of burning oil wells insufficiently. 6 figs., 3 tabs.

  16. Air puff-induced 22-kHz calls in F344 rats.

    Science.gov (United States)

    Inagaki, Hideaki; Sato, Jun

    2016-03-01

    Air puff-induced ultrasonic vocalizations in adult rats, termed "22-kHz calls," have been used as an animal model in psychoneurological and psychopharmacological studies of human aversive affective disorders. To date, all studies on air puff-induced 22-kHz calls have used outbred rats. Newly developed gene-targeting technologies, which are essential for further advancement of biomedical experiments using air puff-induced 22-kHz calls, have enabled the production of genetically modified rats from inbred strains. We therefore considered it necessary to assess air puff-induced 22-kHz calls in inbred rats. In this study, we assessed differences in air puff-induced 22-kHz calls between inbred F344 rats and outbred Wistar rats. Male F344 rats displayed a total (summed) duration of air puff-induced 22-kHz vocalizations similar to that of male Wistar rats; however, Wistar rats emitted fewer calls of longer duration, while F344 rats emitted a higher number of vocalizations of shorter duration. Additionally, female F344 rats emitted fewer air puff-induced 22-kHz calls than males, confirming the sex difference previously reported for outbred Wistar rats. These results confirm the reliability of the air puff stimulus for inducing similar amounts of 22-kHz calls in different rat strains, enabling the use of air puff-induced 22-kHz calls in inbred F344 rats and derived genetically modified animals in future studies of human aversive affective disorders. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Dynamic Sensing of Cornea Deformation during an Air Puff

    Science.gov (United States)

    Yamada, Kenji; Yamasaki, Naoyuki; Gosho, Takumi; Kiuchi, Yoshiaki; Takenaka, Jouji; Higashimori, Mitsuru; Kaneko, Makoto

    In the early diagnosis of glaucoma, intraocular pressure measurement is an important method. Non-contact methods measure eye pressure through the deformation of the cornea under the increasing force of an air puff. The deformation is influenced by the corneal stiffness as well as by the internal eye pressure, and since the corneal stiffness is generally unknown, it is difficult to evaluate the true eye pressure. The dynamic behavior of the cornea under an air puff may provide a good hint for evaluating the corneal stiffness appropriately. For this purpose, we developed a sensing system composed of a high-speed camera, a mirror producing a virtual camera, a non-contact tonometer and a slit light source. This system enables us to measure the corneal deformation through its concave phase. We show experimental data for human eyes as well as for an artificial eye made of a transparent material.

  18. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1993-05-01

    Lagrangian puff models are widely used for calculating the dispersion of atmospheric releases. Basic outputs of such models are the concentrations of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model; this method, however, is only applicable far from the release point. Exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation is very fast, but its accuracy is usually poor because the same correction factors are used for all isotopes. The authors describe a more elaborate correction method, which uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δ_p) and the distance from the puff centre, for four energy groups. The energy release for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)

  19. Calculation method for gamma dose rates from Gaussian puffs

    Energy Technology Data Exchange (ETDEWEB)

    Thykier-Nielsen, S; Deme, S; Lang, E

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires large computer time, usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of E_γ, σ_y, the asymmetry factor σ_y/σ_z, the height of the puff centre H, and the distance from the puff centre R_xy. To accelerate the calculations, the release energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs.

  20. Calculation method for gamma dose rates from Gaussian puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires large computer time, usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of E_γ, σ_y, the asymmetry factor σ_y/σ_z, the height of the puff centre H, and the distance from the puff centre R_xy. To accelerate the calculations, the release energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs
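
    In code, the precalculated-table idea reduces to one multidimensional interpolation per energy group. The sketch below keeps only two of the tabulated parameters (σ_y and R_xy) to stay short, and the table values and names are placeholders, not the authors' data.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Placeholder dose-rate table for one energy group; the full method also
    # tabulates over the asymmetry factor sigma_y/sigma_z and puff height H.
    sigma_y = np.array([10.0, 30.0, 100.0, 300.0])    # m
    r_xy = np.array([0.0, 100.0, 300.0, 1000.0])      # m
    table = np.random.rand(4, 4)                      # stand-in, Sv/s per Bq

    dose_rate = RegularGridInterpolator((sigma_y, r_xy), table,
                                        bounds_error=False, fill_value=0.0)

    def puff_gamma_dose_rate(activity, sy, r):
        # Table lookup per puff; summing over puffs and energy groups replaces
        # the expensive volume integral.
        return activity * dose_rate([[sy, r]])[0]
    ```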

  1. Determination of the physical values of a plasma puff by analysis of the diamagnetic signals. 1. part: expansion model for the puff. 2. part: comparison of experimental results with the expansion model for the plasma puff

    Energy Technology Data Exchange (ETDEWEB)

    Jacquinot, J; Leloup, Ch; Waelbroeck, F; Poffe, J P [Commissariat a l'Energie Atomique, Fontenay-aux-Roses (France). Centre d'Etudes Nucleaires]

    1964-07-01

    The flow of a dense plasma puff along the axis of a uniform magnetic field is examined under the following hypotheses: the axial distribution of the line density can be described at any time by a Gaussian function whose characteristic parameter is independent of the distance from the axis of the system, and the β ratio (2μ₀p/B²ₑ) is less than 0.6. Under these conditions, an approximate solution of the magnetohydrodynamic equations is obtained. The evolution of the characteristic properties of the plasma (local velocity, temperature and density) can be calculated from a set of equations involving five plasma parameters. A method for determining these parameters is described; it uses five pieces of information picked up from the diamagnetic signals induced by the plasma in a set of four compensated magnetic loops. (authors)

  2. Corneal biomechanical properties from air-puff corneal deformation imaging

    Science.gov (United States)

    Marcos, Susana; Kling, Sabine; Bekesi, Nandor; Dorronsoro, Carlos

    2014-02-01

    The combination of air-puff systems with real-time corneal imaging (i.e. Optical Coherence Tomography (OCT), or Scheimpflug) is a promising approach to assess the dynamic biomechanical properties of the corneal tissue in vivo. In this study we present an experimental system which, together with finite element modeling, allows measurements of corneal biomechanical properties from corneal deformation imaging, both ex vivo and in vivo. A spectral OCT instrument combined with an air puff from a non-contact tonometer in a non-collinear configuration was used to image the corneal deformation over full corneal cross-sections, as well as to obtain high speed measurements of the temporal deformation of the corneal apex. Quantitative analysis allows direct extraction of several deformation parameters, such as apex indentation across time, maximal indentation depth, temporal symmetry and peak distance at maximal deformation. The potential of the technique is demonstrated and compared to air-puff imaging with Scheimpflug. Measurements ex vivo were performed on 14 freshly enucleated porcine eyes and five human donor eyes. Measurements in vivo were performed on nine human eyes. Corneal deformation was studied as a function of Intraocular Pressure (IOP, 15-45 mmHg), dehydration, changes in corneal rigidity (produced by UV corneal cross-linking, CXL), and different boundary conditions (sclera, ocular muscles). Geometrical deformation parameters were used as input for inverse finite element simulation to retrieve the corneal dynamic elastic and viscoelastic parameters. Temporal and spatial deformation profiles were very sensitive to the IOP. CXL produced a significant reduction of the cornea indentation (1.41x), and a change in the temporal symmetry of the corneal deformation profile (1.65x), indicating a change in the viscoelastic properties with treatment. Combining air-puff with dynamic imaging and finite element modeling allows characterizing the corneal biomechanics in-vivo.

  3. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    Science.gov (United States)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters for the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were therefore performed, where a single puff was released in each case. The method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model performed better for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated by slightly more than the acceptance criteria allow. Ensemble
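
    The three validation metrics are standard and compact to implement. The acceptance limits quoted in the comments are the commonly cited ones from the dispersion-model evaluation literature and may differ from the exact thresholds applied in the paper.

    ```python
    import numpy as np

    def fb(obs, pred):
        # Fractional bias: 0 is perfect; |FB| <= 0.3 is a common limit.
        return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

    def nmse(obs, pred):
        # Normalized mean square error; NMSE <= 4 is a common limit.
        return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

    def fac2(obs, pred):
        # Fraction of pairs within a factor of two; FAC2 >= 0.5 is typical.
        r = pred / obs
        return np.mean((r >= 0.5) & (r <= 2.0))
    ```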

  5. Huff 'n puff to revaporize liquid dropout in an Omani gas field

    Energy Technology Data Exchange (ETDEWEB)

    Al-Wadhahi, M.; Boukadi, F.H.; Al-Bemani, A.; Al-Maamari, R.; Al-Hadrami, H. [Department of Petroleum and Chemical Engineering, Sultan Qaboos University, P.O. Box 33, Al-Khod 123 (Oman)

    2007-01-15

    In this study, the Huff 'n Puff technique is used as a production mechanism to revaporize liquid dropout in the Saih Rawl retrograde condensate gas field, Oman. During the huff cycle, a number of wells were shut in to achieve revaporization; the same wells were put on stream during the puff cycle. Liquid dropout induced a mechanical skin around the wellbore and hampered gas production capabilities, but it was revaporized through pressurization. The pressure buildup in the rich-gas condensate reservoir was due to a cross flow originating from a deeper, highly pressurized lean-gas bearing formation; the pressure communication took place through the wellbore during shut-in cycles. A compositional simulation model was used to confirm the theory of condensate revaporization. Simulation results indicated that Huff 'n Puff is a viable production technique: it improved gas deliverability and enhanced gas-liquid production by minimizing the skin caused by gas-liquid dropout. (author)

  6. He Puff System For Dust Detector Upgrade

    International Nuclear Information System (INIS)

    Rais, B.; Skinner, C.H.; Roquemore, A.L.

    2010-01-01

    Local detection of surface dust is needed for the safe operation of next-step magnetic fusion devices such as ITER. An electrostatic dust detector, based on a 5 cm x 5 cm grid of interlocking circuit traces biased to 50 V, has been developed to detect dust on remote surfaces and was successfully tested for the first time on the National Spherical Torus Experiment (NSTX). We report on a helium puff system that clears residual dust from this detector and any incident debris or fibers that might cause a permanent short circuit. The entire surface of the detector was cleared of carbon particles by two consecutive helium puffs delivered by three nozzles of 0.45 mm inside diameter. The optimal configuration was found to be with the nozzles at an angle of 30° with respect to the surface of the detector and a helium backing pressure of 6 bar.

  7. High-resolution measurement, line identification, and spectral modeling of the Kβ spectrum of heliumlike argon emitted by a laser-produced plasma using a gas-puff target

    International Nuclear Information System (INIS)

    Skobelev, I.Y.; Faenov, A.Y.; Dyakin, V.M.; Fiedorowicz, H.; Bartnik, A.; Szczurek, M.; Beiersdorfer, P.; Nilsen, J.; Osterheld, A.L.

    1997-01-01

    We present an analysis of the spectrum of satellite transitions to the He-β line in Ar XVII. High-resolution measurements of the spectra from laser-heated Ar gas-puff targets are made with a spectral resolution of 10000 and a spatial resolution of better than 50 μm. These are compared with tokamak measurements. Several different lines are identified in the spectra, and the spectral analysis is used to determine the plasma parameters in the gas-puff laser-produced plasma. The data complement those from tokamak measurements to provide more complete information on the satellite spectra. copyright 1997 The American Physical Society

  8. Olive Oil Based Emulsions in Frozen Puff Pastry Production

    Science.gov (United States)

    Gabriele, D.; Migliori, M.; Lupi, F. R.; de Cindio, B.

    2008-07-01

    Puff pastry is an interesting food product with different industrial applications. It is obtained by laminating layers of dough and fats, mainly shortenings or margarine, having specific properties that provide the required spreading characteristics and the ability to retain moisture in the dough. To obtain these characteristics, pastry shortenings are usually saturated fats; however, the current trend in the food industry is oriented towards unsaturated fats such as olive oil, which are thought to be safer for human health. In the present work, a new product based on olive oil was studied as a shortening replacer in puff pastry production. To ensure the desired consistency, for the rheological matching between fat and dough, a water-in-oil emulsion was produced based on olive oil, an emulsifier, and a hydrophilic thickening agent able to increase the material structure. The obtained materials were characterized by rheological dynamic tests in linear viscoelastic conditions, aiming to set up process and material consistency, and the rheological data were analyzed using the weak gel model. Results obtained for the tested emulsions were compared to the rheological properties of a commercial margarine, adopted as the reference for texture and stability. The obtained emulsions are characterized by interesting rheological properties strongly dependent on the emulsifier characteristics and the water-phase composition. However, a change in process temperature during fat extrusion and dough lamination seems to be necessary to properly match the typical rheological properties of the dough.
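
    The weak gel model mentioned above describes the magnitude of the complex modulus as a power law in frequency, |G*(ω)| = A·ω^(1/z), with A a network strength and z a coordination number. A minimal fitting sketch in Python, with invented data points rather than the paper's measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        def weak_gel(omega, A, z):
            # |G*| = A * omega**(1/z)
            return A * omega ** (1.0 / z)

        omega = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])                  # rad/s
        g_star = np.array([850.0, 1020.0, 1250.0, 1500.0, 1830.0, 2200.0])  # Pa

        (A, z), _ = curve_fit(weak_gel, omega, g_star, p0=(1000.0, 5.0))
        print(f"A = {A:.0f} Pa s^(1/z), z = {z:.1f}")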

  9. Photovoltaic sources modeling

    CERN Document Server

    Petrone, Giovanni; Spagnuolo, Giovanni

    2016-01-01

    This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.

  10. Gas puff modulation experiments in Tore Supra

    International Nuclear Information System (INIS)

    Haas, J.C.M. de; Devynck, P.; Dudok de Wit, T.; Garbet, X.; Gil, C.; Harris, G.; Laviron, C.; Martin, G.

    1993-01-01

    Experiments with a modulation of the gas puff have been performed in Tore Supra with the aim of investigating the transport of particles and heat. The target plasma is ohmically heated and sawtoothing, with frequencies between 12 and 20 Hz, with deuterium for both the plasma and the injection, and with various densities rising over a series of shots. Both the diffusion coefficient and the pinch velocity for particle transport were determined using a harmonic modulation. The method gives reasonable results, even for small perturbations, and the obtained values are able to reproduce the stationary values. The heat flow carried by electrons also shows a modulation. The part of the modulation which is not caused by the density can in principle be used to discriminate diffusive and convective terms in the heat flux. An ion temperature profile calculated with empirically determined values of the heat diffusivity reproduces the slow evolution of the total kinetic energy. 6 figs., 7 refs
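
    A minimal sketch of the harmonic-analysis step behind such an experiment: project each measured density signal onto the modulation frequency to obtain amplitude and phase profiles (a standard lock-in style projection, not necessarily the authors' exact implementation). In a purely diffusive slab approximation the phase delay grows with distance x from the source roughly as x·sqrt(ω/2D), so the slope of the phase profile yields a first estimate of the diffusion coefficient.

        import numpy as np

        def amplitude_phase(t, n, f_mod):
            """Amplitude and phase of a density signal n(t), uniformly
            sampled at times t, at the gas-puff modulation frequency f_mod."""
            w = 2.0 * np.pi * f_mod
            dt = t[1] - t[0]
            c = np.sum(n * np.cos(w * t)) * dt
            s = np.sum(n * np.sin(w * t)) * dt
            span = t[-1] - t[0]
            return 2.0 * np.hypot(c, s) / span, np.arctan2(-s, c)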

  11. Neutral Transport Simulations of Gas Puff Imaging Experiments on Alcator C-Mod

    International Nuclear Information System (INIS)

    Stotler, D.P.; LaBombard, B.; Terry, J.L.; Zweben, S.J.

    2002-01-01

    Visible imaging of gas puffs has been used on the Alcator C-Mod tokamak to characterize edge plasma turbulence, yielding data that can be compared with plasma turbulence codes. Simulations of these experiments with the DEGAS 2 Monte Carlo neutral transport code have been carried out to explore the relationship between the plasma fluctuations and the observed light emission. By imposing two-dimensional modulations on the measured time-average plasma density and temperature profiles, we demonstrate that the spatial structure of the emission cloud reflects that of the underlying turbulence. However, the photon emission rate depends on the plasma density and temperature in a complicated way, and no simple scheme for inferring the plasma parameters directly from the light emission patterns is apparent. The simulations indicate that excited atoms generated by molecular dissociation are a significant source of photons, further complicating interpretation of the gas puff imaging results.

  12. Observation of the bremsstrahlung generation in the process of the Rayleigh–Taylor instability development at gas puff implosion

    International Nuclear Information System (INIS)

    Baksht, R.B.; Fedunin, A.V.; Labetsky, A.Y.; Rousskich, A.G.; Shishlov, A.V.

    1997-01-01

    The electron magnetohydrodynamic model predicts the appearance of an anode–cathode voltage in the process of Rayleigh–Taylor instability development during gas puff implosions. The appearance of the anode–cathode voltage should be accompanied by an accelerated electron flow and the generation of bremsstrahlung radiation. Experiments with neon and krypton gas puffs were performed on the GIT-4 [S. P. Bugaev, et al., Plasma Sci. 18, 115 (1990)] generator (1.6 MA, 120 ns) to observe the bremsstrahlung radiation during the gas puff implosion. Two spikes of bremsstrahlung radiation were observed in the experiments: the first spike is connected with the gas breakdown; the second one is connected with the final stage of the implosion. The development of the RT instabilities does not initiate the bremsstrahlung radiation; therefore, the absence of an anode–cathode voltage is demonstrated. copyright 1997 American Institute of Physics

  13. Influence of an external gas puff on the RI-mode confinement properties in TEXTOR

    International Nuclear Information System (INIS)

    Kalupin, D.

    2002-06-01

    A topical subject of experimental and theoretical studies in present-day fusion research is the development of an operational scenario combining simultaneously high confinement, with at least H-mode quality, and high densities, around or above the empirical Greenwald limit. Recently, this subject was studied in TEXTOR radiative improved (RI) mode discharges, in which the seeding of a small amount of impurities is helpful in the transition to the improved confinement stage. It was found that by careful tailoring of the external fuelling and optimisation of the wall conditions it is possible to maintain H-mode or even higher quality confinement at densities well above the Greenwald density limit. However, more intense fuelling, aimed at extending the maximal achievable densities, led to progressive confinement deterioration. The theory explains the transition to the RI-mode as a bifurcation into a stage where the transport governed by the ion temperature gradient (ITG) instability is significantly reduced due to a high density gradient and a high value of the effective charge. Numerical studies of the influence of the gas puff intensity on the confinement properties of the plasma, performed with the 1-D transport code RITM, show that the same theory can be used to explain the confinement rollover triggered by a strong gas puff. The code was modified in order to simulate the effect of the gas puff on the confinement properties. The anomalous transport coefficients in the plasma core include contributions from the ITG and dissipative trapped electron (DTE) instabilities. The transport at the plasma edge under RI-mode conditions might be described by the electrostatic turbulence caused by electric currents in the scrape-off layer of the limiter. The present computations show that this assumption for the edge transport does not allow modeling the effect of the gas puff intensity on the profile evolution in agreement with experimental observations. The

  14. Experimental study on gas-puff Z-pinch load characteristics on yang accelerator

    International Nuclear Information System (INIS)

    Ren Xiaodong; Huang Xianbin; Yang Libing; Dan Jiakun; Duan Shuchao; Zhang Zhaohui; Zhou Shaotong

    2010-01-01

    A supersonic single-shell gas-puff load has been developed for Z-pinch experiments on the 'Yang' accelerator. Using a fast-responding pressure probe to measure the supersonic gas flow, impact pressures at different positions and plenum pressures were acquired and combined with gas-dynamics formulas to determine gas pressures and densities. The radial density profiles show that the position of the gas shell varies with axial position and that the on-axis gas density increases with distance from the nozzle. Integrated radial densities indicate that the linear mass density peaks at the nozzle exit and decreases with increasing distance from the nozzle. Using the single-shell supersonic gas-puff load, Z-pinch implosion experiments were performed on the 'Yang' accelerator. A preliminary analysis of the implosion process is presented, and computed trajectories of the imploding plasma shell using the snowplow model are in agreement with the experimental results. (authors)
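
    The snowplow model referred to above treats the imploding shell as a thin piston that accumulates the mass it sweeps up while being driven inward by the magnetic pressure of the drive current. A minimal sketch under illustrative assumptions (quarter-sine current, uniform initial fill, parameters invented for scale, and a 1% seed mass to regularize the start):

        import numpy as np
        from scipy.integrate import solve_ivp

        MU0 = 4e-7 * np.pi
        I0, T_RISE = 1.0e6, 100e-9   # peak current (A), rise time (s): assumed
        R0, M_TOT = 2.0e-2, 2.0e-6   # initial radius (m), linear mass (kg/m): assumed

        def current(t):
            return I0 * np.sin(0.5 * np.pi * t / T_RISE)   # quarter-sine drive

        def swept_mass(r):
            # uniform fill: everything between r and R0 rides on the piston;
            # a 1% seed mass avoids dividing by zero at t = 0
            return M_TOT * (1.0 - (r / R0) ** 2) + 0.01 * M_TOT

        def rhs(t, y):
            r, p = y                                       # radius, momentum/length
            dpdt = -MU0 * current(t) ** 2 / (4.0 * np.pi * r)
            return [p / swept_mass(r), dpdt]

        def pinched(t, y):                                 # stop at 10:1 compression
            return y[0] - 0.1 * R0
        pinched.terminal = True

        sol = solve_ivp(rhs, (0.0, 2.0 * T_RISE), [R0, 0.0],
                        max_step=0.5e-9, events=pinched)
        print(f"implosion time ~ {sol.t[-1] * 1e9:.0f} ns")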

  15. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites. This in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive.

  16. Particle fuelling for long pulse with standard gas puff and supersonic pulsed gas injection

    International Nuclear Information System (INIS)

    Bucalossi, J.; Tsitrone, E.; Martin, G.

    2003-01-01

    In addition to the standard gas puff and the technically complex pellet injection, a novel intermediate method, based on the injection of a supersonic high-density cloud of neutrals, has recently been implemented on the Tore Supra tokamak. Fuelling efficiencies in the 30-50% range are found, whereas they lie in the 10-20% range for the gas puff; the efficiency is not sensitive to the plasma density or to the additional heating. According to modelling, the increased efficiency is attributed to the very short injection duration compared to the particle confinement time and to the strong cooling of the plasma edge resulting from the massive injection of matter. A feedback loop on the frequency of the injector has been successfully implemented to control the plasma density. In long pulse experiments (>200 s), wall saturation has not been reached. The gas puffing rate was typically around 1 Pa·m³ s⁻¹, while dynamic wall retention was around 0.6 Pa·m³ s⁻¹. Co-deposited carbon layers could trap such large amounts of gas. A discharge fuelled by supersonic pulsed gas injections exhibits lower wall retention than a gas-puff fuelled discharge. (author)

  17. The Influence of Puff Characteristics, Nicotine Dependence, and Rate of Nicotine Metabolism on Daily Nicotine Exposure in African American Smokers.

    Science.gov (United States)

    Ross, Kathryn C; Dempsey, Delia A; St Helen, Gideon; Delucchi, Kevin; Benowitz, Neal L

    2016-06-01

    African American (AA) smokers experience greater tobacco-related disease burden than Whites, despite smoking fewer cigarettes per day (CPD). Understanding factors that influence daily nicotine intake in AA smokers is an important step toward decreasing tobacco-related health disparities. One factor of interest is smoking topography, or the study of puffing behavior. Our objectives were (i) to create a model using puff characteristics, nicotine dependence, and nicotine metabolism to predict daily nicotine exposure, and (ii) to compare puff characteristics and nicotine intake from two cigarettes smoked at different times to ensure the reliability of the puff characteristics included in our model. Sixty AA smokers smoked their preferred brand of cigarette at two time points through a topography device. Plasma nicotine, expired CO, and changes in subjective measures were measured before and after each cigarette. Total nicotine equivalents (TNE) were measured from 24-hour urine collected during ad libitum smoking. In a model predicting daily nicotine exposure, total puff volume, CPD, sex, and menthol status were significant predictors (R² = 0.44, P smokers. Cancer Epidemiol Biomarkers Prev; 25(6); 936-43. ©2016 AACR. ©2016 American Association for Cancer Research.

  18. Marijuana smoking: effects of varying puff volume and breathhold duration.

    Science.gov (United States)

    Azorlosa, J L; Greenwald, M K; Stitzer, M L

    1995-02-01

    Two studies were conducted to quantify biological and behavioral effects resulting from exposure to controlled doses of marijuana smoke. In one study, puff volume (30, 60 and 90 ml) and, in a second study, breathhold duration (0, 10 and 20 sec) were systematically varied while holding constant other smoking topography parameters (number of puffs = 10, interpuff interval = 60 sec and inhalation volume = 25% of vital capacity). Each study also varied levels of delta-9-tetrahydrocannabinol marijuana cigarette content (1.75% and 3.55%). Regular marijuana users served as subjects (n = 7 in each experiment). Subjects smoked 10 puffs in each of six sessions; a seventh, nonsmoking session (all measures recorded at the same times as in active smoking sessions) served as a control. Variations in puff volume produced significant dose-related changes in postsmoking plasma delta-9-tetrahydrocannabinol levels, carbon monoxide boost and subjective effects (e.g., "high"). In contrast, breathholding for 10 or 20 sec versus 0 sec increased plasma delta-9-tetrahydrocannabinol levels but not CO boost or subjective effects. Task performance measures were not reliably influenced by marijuana smoke exposure within the dosing ranges examined. These findings confirm the utility of the controlled smoking technology, support the notion that cumulative puff volume systematically influences biological exposure and subjective effects, but cast doubt on the common belief that prolonged breathholding of marijuana smoke enhances classical subjective effects associated with its reinforcing value in humans.

  19. Study of gas-puff Z-pinches on COBRA

    Energy Technology Data Exchange (ETDEWEB)

    Qi, N.; Rosenberg, E. W.; Gourdain, P. A.; Grouchy, P. W. L. de; Kusse, B. R.; Hammer, D. A.; Bell, K. S.; Shelkovenko, T. A.; Potter, W. M.; Atoyan, L.; Cahill, A. D.; Evans, M.; Greenly, J. B.; Hoyt, C. L.; Pikuz, S. A.; Schrafel, P. C. [Laboratory of Plasma Studies, Cornell University, Ithaca, New York 14853 (United States); Kroupp, E.; Fisher, A.; Maron, Y. [Weizmann Institute of Science, Rehovot 76100 (Israel)

    2014-11-15

    Gas-puff Z-pinch experiments were conducted on the 1 MA, 200 ns pulse duration Cornell Beam Research Accelerator (COBRA) pulsed power generator in order to achieve an understanding of the dynamics and instability development in the imploding and stagnating plasma. The triple-nozzle gas-puff valve, pre-ionizer, and load hardware are described. Specific diagnostics for the gas-puff experiments, including a Planar Laser Induced Fluorescence system for measuring the radial neutral density profiles along with a Laser Shearing Interferometer and Laser Wavefront Analyzer for electron density measurements, are also described. The results of a series of experiments using two annular argon (Ar) and/or neon (Ne) gas shells (puff-on-puff) with or without an on- (or near-) axis wire are presented. For all of these experiments, plenum pressures were adjusted to hold the radial mass density profile as similar as possible. Initial implosion stability studies were performed using various combinations of the heavier (Ar) and lighter (Ne) gases. Implosions with Ne in the outer shell and Ar in the inner were more stable than the opposite arrangement. Current waveforms can be adjusted on COBRA, and it was found that the particular shape of the 200 ns current pulse affected the duration and diameter of the stagnated pinch column and the x-ray yield.

  20. Study on preparation the egg yolk puff with chitosan

    Directory of Open Access Journals (Sweden)

    LI Hui

    2014-12-01

    This paper studied chitosans with different degrees of deacetylation (70%, 80%, 90%, 95%) and different amounts of added chitosan to investigate their effect on functional indexes of the egg yolk puff, such as calcium content and cholesterol content, and preliminarily explored the application of chitosan in the egg yolk puff. Test results showed that when the deacetylation degree of the chitosan was 90% and its usage 1%, the functional indexes and sensory quality of the egg yolk puff reached a balance: the calcium content was 76.2 mg/100 g, an increase of 44.3%, and the cholesterol content was 290 mg/100 g, a decrease of 35.1%.

  1. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, applied to a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches, and how they affect emission estimation uncertainty, were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, most markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
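
    The mass-balance step is conceptually simple: integrate the wind-transported CO2 excess over a crosswind flight screen downwind of the stack. A minimal sketch (the grid geometry and variable names are assumptions, not the authors' code):

        import numpy as np

        def mass_balance_emission(conc, c_bg, u_normal, dy, dz):
            """Point-source emission rate Q (kg/s) from a crosswind screen.
            conc: 2D CO2 density on the screen (kg/m^3); c_bg: background
            density; u_normal: wind speed normal to the screen (m/s);
            dy, dz: horizontal and vertical grid spacing (m)."""
            excess = np.clip(conc - c_bg, 0.0, None)   # discard negative excess
            return float(np.sum(u_normal * excess) * dy * dz)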

  2. Numerical analysis of gas puff modulation experiment on JT-60U

    International Nuclear Information System (INIS)

    Nagashima, Keisuke; Sakasai, Akira

    1992-03-01

    In tokamak transport physics, source modulation experiments are one of the most effective methods. For the analysis of these modulation experiments, a simple numerical method was developed to solve the general transport equations. This method was applied to gas puff modulation experiments on JT-60U. From the comparison between the measured and calculated density perturbations, it was found that the particle diffusion coefficient is about 0.8 m²/s in the edge region and 0.1-0.2 m²/s in the central region. (author)

  3. First Argon Gas Puff Experiments With 500 ns Implosion Time On Sphinx Driver

    Science.gov (United States)

    Zucchini, F.; Calamy, H.; Lassalle, F.; Loyen, A.; Maury, P.; Grunenwald, J.; Georges, A.; Morell, A.; Bedoch, J.-P.; Ritter, S.; Combes, P.; Smaniotto, O.; Lample, R.; Coleman, P. L.; Krishnan, M.

    2009-01-01

    Experiments have been performed at the SPHINX driver to study the potential of an argon gas-puff load designed by AASC. We present here the gas-puff hardware and results of the latest shot series. The argon gas-puff load is injected through a 20 cm diameter nozzle. The nozzle has two annuli and a central jet. The pressure and gas type in each of the nozzle plena can be independently adjusted to tailor the initial gas density distribution. The latter is chosen so as to obtain a radially increasing density from the outer shell towards the pinch axis, in order to mitigate the RT instabilities and to increase the radiating mass on axis. A flashboard unit produces a high-intensity UV source to pre-ionize the argon gas. Typical dimensions of the load are 200 mm in diameter and 40 mm in height. Pressures are adjusted to obtain an implosion time around 550 ns with a peak current of 3.5 MA. With the goal of improving K-shell yield, a mass scan of the central jet was performed while the implosion time, mainly set by the outer and middle plena settings, was kept constant. Tests were also done to reduce the implosion time for two configurations of the central jet. Strong zippering of the radiation production was observed, mainly due to the divergence of the central jet over the 40 mm of the load height; because of this feature, K-shell radiation is mainly obtained near the cathode. Tests were therefore done to mitigate this effect, first by adjusting the local pressure of the middle and central jets and second by shortening the pinch length. At the end of this series, the best shot gave 5 kJ of Ar K-shell yield. PCD detectors showed that the K-shell x-ray power was 670 GW with a FWHM of less than 10 ns.

  4. First Argon Gas Puff Experiments With 500 ns Implosion Time On Sphinx Driver

    International Nuclear Information System (INIS)

    Zucchini, F.; Calamy, H.; Lassalle, F.; Loyen, A.; Maury, P.; Grunenwald, J.; Georges, A.; Morell, A.; Bedoch, J.-P.; Ritter, S.; Combes, P.; Smaniotto, O.; Lample, R.; Coleman, P. L.; Krishnan, M.

    2009-01-01

    Experiments have been performed at the SPHINX driver to study the potential of an argon gas-puff load designed by AASC. We present here the gas-puff hardware and results of the latest shot series. The argon gas-puff load is injected through a 20 cm diameter nozzle. The nozzle has two annuli and a central jet. The pressure and gas type in each of the nozzle plena can be independently adjusted to tailor the initial gas density distribution. The latter is chosen so as to obtain a radially increasing density from the outer shell towards the pinch axis, in order to mitigate the RT instabilities and to increase the radiating mass on axis. A flashboard unit produces a high-intensity UV source to pre-ionize the argon gas. Typical dimensions of the load are 200 mm in diameter and 40 mm in height. Pressures are adjusted to obtain an implosion time around 550 ns with a peak current of 3.5 MA. With the goal of improving K-shell yield, a mass scan of the central jet was performed while the implosion time, mainly set by the outer and middle plena settings, was kept constant. Tests were also done to reduce the implosion time for two configurations of the central jet. Strong zippering of the radiation production was observed, mainly due to the divergence of the central jet over the 40 mm of the load height; because of this feature, K-shell radiation is mainly obtained near the cathode. Tests were therefore done to mitigate this effect, first by adjusting the local pressure of the middle and central jets and second by shortening the pinch length. At the end of this series, the best shot gave 5 kJ of Ar K-shell yield. PCD detectors showed that the K-shell x-ray power was 670 GW with a FWHM of less than 10 ns.

  5. Spectroscopic determination of the magnetic field distribution in a gas-puff Z-pinch plasma

    Energy Technology Data Exchange (ETDEWEB)

    Gregorian, L; Davara, G; Kroupp, E; Maron, Y [Weizmann Institute of Science, Rehovot (Israel). Dept. of Particle Physics

    1997-12-31

    The time-dependent radial distribution of the magnetic field in a gas-puff Z-pinch plasma has been determined by observing the Zeeman effect on emission lines, made possible by polarization spectroscopy and high-accuracy line-profile measurements. A modeling scheme, based on a 1-D magnetic diffusion equation, is used to fit the experimental data. The plasma conductivity inferred from the field distribution was found to be consistent with the Spitzer conductivity. The current density distribution and the time-dependent plasma region in which the entire circuit current flows were determined. (author). 3 figs., 6 refs.
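
    For context, a 1-D magnetic diffusion equation of the kind referred to above can be written in cylindrical geometry for the azimuthal field as (one standard form; the authors' exact scheme may differ):

        \frac{\partial B_\theta}{\partial t}
          = \frac{\partial}{\partial r}\!\left[\frac{\eta(r,t)}{\mu_0}\,
            \frac{1}{r}\,\frac{\partial (r B_\theta)}{\partial r}\right]
          - \frac{\partial (v_r B_\theta)}{\partial r},
        \qquad
        j_z = \frac{1}{\mu_0 r}\,\frac{\partial (r B_\theta)}{\partial r}

    Fitting the measured B_θ(r, t) then constrains the resistivity η(r, t), which underlies the conductivity comparison mentioned in the abstract.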

  6. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal-fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single-source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows patterns similar to ambient-based estimates of an increasing fraction of PM2.5 sulfate ion relative to the sum of SO2 and PM2.5 sulfate ion with distance from the source. The model was less consistent in capturing downwind ambient-based trends in the conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single-source O3 impacts. The model estimated O3 production from these sources but was often lower than ambient-based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources, which makes direct comparison with modelled source contribution challenging. Model source attribution results suggest contributions to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  7. A statistical theory on the turbulent diffusion of Gaussian puffs

    International Nuclear Information System (INIS)

    Mikkelsen, T.; Larsen, S.E.; Pecseli, H.L.

    1982-12-01

    The relative diffusion of a one-dimensional Gaussian cloud of particles is related to a two-particle covariance function in a homogeneous and stationary field of turbulence. A simple working approximation is suggested for the determination of this covariance function in terms of entirely Eulerian fields. Simple expressions are derived for the growth of the puff's standard deviation for diffusion times that are small compared to the integral time scale of the turbulence. (Auth.)
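
    The classical small-time (ballistic) limit consistent with this picture is (quoted here as the generic two-particle dispersion result, not the paper's exact expression):

        \sigma^2(t) \approx \sigma^2(0) + \langle \delta v^2 \rangle\, t^2,
        \qquad t \ll T_L

    where ⟨δv²⟩ is the variance of the relative (two-particle) velocity and T_L the integral time scale of the turbulence.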

  8. 'Carcinogens in a puff': smoking in Hong Kong movies.

    Science.gov (United States)

    Ho, Sai-Yin; Wang, Man-Ping; Lai, Hak-Kan; Hedley, Anthony J; Lam, Tai-Hing

    2010-12-01

    Smoking scenes in movies, exploited by the tobacco industry to circumvent advertisement bans, are linked to adolescent smoking. Recently, a Hong Kong romantic comedy, Love in a Puff, put smoking at centre stage, with numerous smoking scenes and words that glamourise smoking. Although the WHO has issued guidelines on reducing the exposure of children to smoking in movies, none has been adopted in Hong Kong. Comprehensive tobacco control strategies are urgently needed to protect young people in Hong Kong from cigarette promotion in movies.

  9. Impact of neutral density fluctuations on gas puff imaging diagnostics

    Science.gov (United States)

    Wersal, C.; Ricci, P.

    2017-11-01

    A three-dimensional turbulence simulation of the SOL and edge regions of a toroidally limited tokamak is carried out. The simulation couples self-consistently the drift-reduced two-fluid Braginskii equations to a kinetic equation for neutral atoms. A diagnostic neutral gas puff on the low-field side midplane is included and the impact of neutral density fluctuations on D_α light emission investigated. We find that neutral density fluctuations affect the D_α emission. In particular, at a radial distance from the gas puff smaller than the neutral mean free path, neutral density fluctuations are anti-correlated with plasma density, electron temperature, and D_α fluctuations. It follows that the neutral fluctuations reduce the D_α emission in most of the observed region and, therefore, have to be taken into account when interpreting the amplitude of the D_α emission. On the other hand, higher order statistical moments (skewness, kurtosis) and turbulence characteristics (such as correlation length, or the autocorrelation time) are not significantly affected by the neutral fluctuations. At distances from the gas puff larger than the neutral mean free path, a non-local shadowing effect influences the neutral density fluctuations. There, the D_α fluctuations are correlated with the neutral density fluctuations, and the high-order statistical moments and measurements of other turbulence properties are strongly affected by the neutral density fluctuations.

  10. Probability density function of a puff dispersing from the wall of a turbulent channel

    Science.gov (United States)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    Study of dispersion of passive contaminants in turbulence has proved to be helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental works have been carried out to locate and track motions of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record locations of markers. While this has proved to be useful, high computational cost remains a concern. In this study, we develop a model that could reproduce results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a frictional Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) that represents distribution of markers in the flow field. The PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove to be helpful where DNS and LST are not always available.
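
    One plausible way to carry out the PDF search described above is to fit several candidate families to the marker positions and rank them by a goodness-of-fit statistic; the candidate set below is an assumption for illustration.

        import numpy as np
        from scipy import stats

        def best_pdf(samples,
                     candidates=(stats.gamma, stats.lognorm, stats.weibull_min)):
            """Rank candidate distributions for wall-normal marker positions
            by their Kolmogorov-Smirnov statistic (smaller is better)."""
            scored = []
            for dist in candidates:
                params = dist.fit(samples, floc=0.0)  # support starts at the wall
                ks = stats.kstest(samples, dist.cdf, args=params).statistic
                scored.append((ks, dist.name, params))
            return min(scored)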

  11. Can one puff really make an adolescent addicted to nicotine? A critical review of the literature

    Directory of Open Access Journals (Sweden)

    Frenk Hanan

    2010-11-01

    Rationale: In the past decade, there have been various attempts to understand the initiation and progression of tobacco smoking among adolescents. One line of research on these issues has made strong claims regarding the speed with which adolescents can become physically and mentally addicted to smoking. According to these claims, and in contrast to other models of smoking progression, adolescents can lose autonomy over their smoking behavior after having smoked one puff in their lifetime and never having smoked again, and can become mentally and physically "hooked on nicotine" even if they have never smoked a puff. Objectives: To critically examine the conceptual and empirical basis for the claims made by the "hooked on nicotine" thesis. Method: We reviewed the major studies on which the claims of the "hooked on nicotine" research program are based. Results: The studies we reviewed contained substantive conceptual and methodological flaws. These include an untenable and idiosyncratic definition of addiction, use of single items or of very lenient criteria for diagnosing nicotine dependence, reliance on responders' causal attributions in determining physical and mental addiction to nicotine, and biased coding and interpretation of the data. Discussion: The conceptual and methodological problems detailed in this review invalidate many of the claims made by the "hooked on nicotine" research program and undermine its contribution to the understanding of the nature and development of tobacco smoking in adolescents.

  12. Learning models for multi-source integration

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, S.; Knoblock, C.A.; Minton, S. [Univ. of Southern California/ISI, Marina del Rey, CA (United States)

    1996-12-31

    Because of the growing number of information sources available through the internet there are many cases in which information needed to solve a problem or answer a question is spread across several information sources. For example, when given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model, providing the user a single interface to multiple sources.

  13. Numerical Simulation and Optimization of Enhanced Oil Recovery by the In Situ Generated CO2 Huff-n-Puff Process with Compound Surfactant

    Directory of Open Access Journals (Sweden)

    Yong Tang

    2016-01-01

    This paper presents a numerical investigation and optimization of the operating parameters of the in situ generated CO2 Huff-n-Puff method with compound surfactant for enhanced oil recovery. First, we conducted experiments on in situ generated CO2 and surfactant flooding. Next, we constructed a single-well radial 3D numerical model using a thermal recovery chemical flooding simulator to simulate the CO2 Huff-n-Puff process. The activation energy and reaction enthalpy were calculated based on the reaction kinetics and thermodynamic models. The interpolation parameters were determined through history matching a series of surfactant core flooding results with the simulation model. The effect of compound surfactant on the Huff-n-Puff CO2 process was demonstrated via a series of sensitivity studies quantifying the effects of a number of operating parameters, including the injection volume and molar concentration of the reagent, the injection rate, the well shut-in time, and the oil withdrawal rate. Based on the daily production rate during the Huff-n-Puff period, a desirable agreement was shown between the field applications and the simulated results.
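
    The activation energy and reaction enthalpy mentioned above enter such a simulator through a standard Arrhenius rate law and its associated heat release, shown generically below; the actual gas-generating chemistry and rate orders are specific to the reagent system used.

        k(T) = A \exp\!\left(-\frac{E_a}{R\,T}\right),
        \qquad
        \dot{q} = -\Delta H_r\, k(T) \prod_i C_i^{\,n_i}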

  14. Photovoltaic sources modeling and emulation

    CERN Document Server

    Piazza, Maria Carmela Di

    2012-01-01

    This book offers an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters, which will aid in understanding and improving the design and setup of new PV plants.

  15. Amputation for a puff adder (Bitis arietans) envenomation in a child ...

    African Journals Online (AJOL)

    spreading halfway to the knee. The injury arose from a puff adder bite while walking on the bank of the Nile. The father took four days to transport the .... [Figure 2: The child after the operation with hospital orderlies and father (West family photograph)]

  16. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  17. An analytic uranium sources model

    International Nuclear Information System (INIS)

    Singer, C.E.

    2001-01-01

    This document presents a method for estimating uranium resources as a continuous function of extraction costs and describing the uncertainty in the resulting fit. The estimated functions provide convenient extrapolations of currently available data on uranium extraction cost and can be used to predict the effect of resource depletion on future uranium supply costs. As such, they are a useful input for economic models of the nuclear energy sector. The method described here pays careful attention to minimizing built-in biases in the fitting procedure and defines ways to describe the uncertainty in the resulting fits in order to render the procedure and its results useful to the widest possible variety of potential users. (author)

  18. Compression enhancement by current stepping in a multicascade liner gas-puff Z-pinch plasma

    Energy Technology Data Exchange (ETDEWEB)

    Khattak, N A D [Department of Physics, Gomal Unversity, D I Khan (Pakistan); Ahmad, Zahoor; Murtaza, G [National Tokamak Fusion Program, PAEC, Islamabad (Pakistan); Zakaullah, M [Department of Physics, Quaid-i-Azam University, Islamabad 45320 (Pakistan)], E-mail: ktk_nad@yahoo.com

    2008-04-15

    Plasma dynamics of a liner consisting of two or three annular cascade gas-puffs with entrained axial magnetic field is studied using the modified snow-plow model. The current stepping technique (Les 1984 J. Phys. D: Appl. Phys. 17 733) is employed to enhance compression of the imploding plasma. A small-diameter low-voltage-driven system of imploding plasma is considered in order to work out the possibility of the highest gain, in terms of plasma parameters and radiation yield, with a relatively simple and compact system. Our numerical results demonstrate that current stepping enhances the plasma compression, yielding high values of the plasma parameters and compressed magnetic field B_z (in magnitude), if the switching time for the additional current is properly synchronized.

  19. Compression enhancement by current stepping in a multicascade liner gas-puff Z-pinch plasma

    International Nuclear Information System (INIS)

    Khattak, N A D; Ahmad, Zahoor; Murtaza, G; Zakaullah, M

    2008-01-01

    Plasma dynamics of a liner consisting of two or three annular cascade gas-puffs with entrained axial magnetic field is studied using the modified snow-plow model. The current stepping technique (Les 1984 J. Phys. D: Appl. Phys. 17 733) is employed to enhance compression of the imploding plasma. A small-diameter low-voltage-driven system of imploding plasma is considered in order to work out the possibility of the highest gain, in terms of plasma parameters and radiation yield, with a relatively simple and compact system. Our numerical results demonstrate that current stepping enhances the plasma compression, yielding high values of the plasma parameters and compressed magnetic field B_z (in magnitude), if the switching time for the additional current is properly synchronized.

  20. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. Understandings of both the physics and the mathematical formulation of these sources are essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
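
    A frequently used simplified description of an arc heat source, of the kind surveyed in this section, is a Gaussian surface flux (one common parameterization among several):

        q(x, y) = \frac{3\,\eta V I}{\pi r_0^{2}}
                  \exp\!\left(-\frac{3\,(x^{2} + y^{2})}{r_0^{2}}\right)

    Here η is the arc efficiency, VI the electrical power, and r_0 the effective radius within which about 95% of the power is deposited; integrating q over the plane returns the total absorbed power ηVI.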

  1. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes design, development and applications of the open source energy system model Balmorel, which is a result of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications of Balmorel as well as to inspire to new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse...

  2. Faster universal modeling for two source classes

    NARCIS (Netherlands)

    Nowbakht, A.; Willems, F.M.J.; Macq, B.; Quisquater, J.-J.

    2002-01-01

    The Universal Modeling algorithms proposed in [2] for two general classes of finite-context sources are reviewed. The above methods were constructed by viewing a model structure as a partition of the context space and realizing that a partition can be reached through successive splits. Here we start

  3. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    , called ForSyDe. ForSyDe is available under the open Source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  4. Inhaled smoke volume and puff indices with cigarettes of different tar and nicotine levels

    International Nuclear Information System (INIS)

    Woodman, G.; Newman, S.P.; Pavia, D.; Clarke, S.W.

    1987-01-01

    Ten asymptomatic smokers each smoked a low, low-to-middle and a middle tar cigarette with approximately the same tar-to-nicotine ratio, in a randomised order. The inhaled smoke volume was measured by tracing the smoke with the inert gas 81mKr. Puffing indices were recorded using an electronic smoking analyser and flowhead/cigarette holder. Throughout the study neither the mean inhaled smoke volume per puff nor the total inhaled smoke volume per cigarette changed significantly; however, the mean and total puff volumes were largest with the low tar cigarette and decreased with the higher tar brands. Puff volume was related to puff work (r_s = 0.83; P_s = 0.10, P > 0.1). It is concluded that when switched between brands with the same tar-to-nicotine ratio, smokers increase their puff volumes with a lower tar cigarette but do not change the volume of smoke inhaled. Puff work and puff resistance were significantly correlated (r_s = 0.45, P < 0.02). (author)

  5. Temperature Evolution of a 1 MA Triple-Nozzle Gas-Puff Z-Pinch

    Science.gov (United States)

    de Grouchy, Philip; Banasek, Jacob; Engelbrecht, Joey; Qi, Niansheng; Atoyan, Levon; Byvank, Tom; Cahill, Adam; Moore, Hannah; Potter, William; Ransohoff, Lauren; Hammer, David; Kusse, Bruce; Laboratory of Plasma Studies Team

    2015-11-01

    Mitigation of the Rayleigh-Taylor instability (RTI) plays a critical role in optimizing x-ray output at high photon energies (~13 keV) using the triple-nozzle krypton gas puff at Sandia National Laboratories. RTI mitigation by gas-puff density profiling using a triple-nozzle gas-puff valve has recently been demonstrated on the COBRA 1 MA z-pinch at Cornell University. In support of this work we investigate the role of shell cooling in the growth of the RTI during gas-puff implosions. Temperature measurements within the imploding plasma shell are recorded using a 527 nm, 10 GW Thomson scattering diagnostic for neon, argon and krypton puffs. The mass-density profile is held constant at 22 μg/cm for all three puffs and the temperature evolution of the imploding material is recorded. In the case of argon puffs we find that the shell ion and electron effective temperatures remain in equilibrium at around 1 keV for the majority of the implosion phase. In contrast, scattered spectra from krypton are dominated by effective ion temperatures of order 10 keV. Supported by the NNSA Stewardship Sciences Academic Programs.

  6. Atmospheres of Two Super-Puffs: Transmission Spectra of Kepler 51b and Kepler 51d

    Science.gov (United States)

    Roberts, Jessica; Berta-Thompson, Zachory K.; Desert, Jean-Michel; Deck, Katherine; Fabrycky, Daniel; Fortney, Jonathan J.; Line, Michael R.; Lopez, Eric; Masuda, Kento; Morley, Caroline; Sanchis Ojeda, Roberto; Winn, Joshua N.

    2018-06-01

    The Kepler 51 system hosts three transiting, extremely low-mass, low-density exoplanets. These planets orbit a young G type star at periods of 45, 85 and 130 days, placing them outside of the regime for the inflated hot-Jupiters. Instead, the Kepler 51 planets are part of a rare class of exoplanets: the super-puffs. Models suggest these H/He-rich planets formed outside of the snow-line and migrated inwards, which might imply abundant water in their atmospheres. Because Kepler 51b and 51d have low surface gravities, they also have scale heights 10x larger than a typical hot-Jupiter, making them prime targets for atmospheric investigation. Kepler 51c, while also possessing a large scale height, only grazes its star during transit. We are also presented with a unique opportunity to study two super-puffs in very different temperature regimes around the same star. Therefore, we observed two transits each of both Kepler 51b and 51d with the Hubble Space Telescope’s Wide Field Camera 3 G141 grism spectroscopy. Using these data we created spectroscopic light curves that allow us to compute a transmission spectrum for each planet. We conclude that both planets have a flat transmission spectrum with a precision better than 0.6 scale heights between 1.1 and 1.7 microns. We also analyzed the transit timing variations of each planet by combining re-fitted Kepler mid-transit times with our measured HST times. From these additional timing points, we are able to better constrain the planetary masses and the dynamics of the system. With these updated masses and revisited stellar parameters, we determine precise measurements on the densities of these planets. We will present these results as well as discuss the implications for high altitude aerosols in both Kepler 51b and 51d.

  7. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates

  8. PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices

    International Nuclear Information System (INIS)

    Dunn, M.E.

    2000-01-01

    PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI
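
    The positive-definiteness test mentioned above is straightforward to sketch: compute the (real) spectrum of each symmetric correlation matrix and inspect the smallest eigenvalue. The snippet below is a minimal illustration in Python, not PUFF-III's actual implementation.

        import numpy as np

        def is_positive_semidefinite(corr, tol=-1e-10):
            """Eigenvalue test applied to a correlation matrix, mirroring
            the check PUFF-III performs after building each matrix."""
            eigvals = np.linalg.eigvalsh(corr)   # symmetric -> real spectrum
            return eigvals.min() >= tol, eigvals

        corr = np.array([[1.0, 0.6, 0.2],
                         [0.6, 1.0, 0.5],
                         [0.2, 0.5, 1.0]])
        ok, ev = is_positive_semidefinite(corr)
        print(ok, ev.min())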

  9. Effects of design parameters and puff topography on heating coil temperature and mainstream aerosols in electronic cigarettes

    Science.gov (United States)

    Zhao, Tongke; Shu, Shi; Guo, Qiuju; Zhu, Yifang

    2016-06-01

    Emissions from electronic cigarettes (ECs) may contribute to both indoor and outdoor air pollution and the number of users is increasing rapidly. ECs operate based on the evaporation of e-liquid by a high-temperature heating coil. Both puff topography and design parameters can affect this evaporation process. In this study, both mainstream aerosols and heating coil temperature were measured concurrently to study the effects of design parameters and puff topography. The heating coil temperatures and mainstream aerosols varied over a wide range across different brands and within the same brand. The peak heating coil temperature and the count median diameter (CMD) of EC aerosols increased with a longer puff duration and a lower puff flow rate. The particle number concentration was positively associated with the puff duration and puff flow rate. These results provide a better understanding of how EC emissions are affected by design parameters and puff topography and emphasize the urgent need to better regulate EC products.

  10. A model for superluminal radio sources

    International Nuclear Information System (INIS)

    Milgrom, M.; Bahcall, J.N.

    1977-01-01

    A geometrical model for superluminal radio sources is described. Six predictions that can be tested by observations are summarized. The results are in agreement with all the available observations. In this model, the Hubble constant is the only numerical parameter that is important in interpreting the observed rates of change of angular separations for small redshifts. The available observations imply that H₀ is less than 55 km/s/Mpc if the model is correct. (author)
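
    Although the abstract does not state the model's equations, the apparent transverse speed in geometrical/kinematic models of this kind is conventionally governed by the relation beta_app = beta sin(theta) / (1 - beta cos(theta)). The snippet below evaluates this standard relation as an illustration; it is not claimed to be the specific model of the paper:

        import numpy as np

        def beta_apparent(beta, theta):
            """Apparent transverse speed (units of c) of a blob moving at
            speed beta (units of c) at angle theta (radians) to the line
            of sight."""
            return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

        # A blob at beta = 0.95 viewed 10 degrees off the line of sight:
        print(beta_apparent(0.95, np.radians(10.0)))  # ~2.6, superluminal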

  11. Air quality dispersion models from energy sources

    International Nuclear Information System (INIS)

    Lazarevska, Ana

    1996-01-01

    Along with the continuing development of new air quality models that cover more complex problems, in the Clean Air Act, legislated by the US Congress, a consistency and standardization of air quality model applications were encouraged. As a result, the Guidelines on Air Quality Models were published, which are regularly reviewed by the Office of Air Quality Planning and Standards, EPA. These guidelines provide a basis for estimating the air quality concentrations used in assessing control strategies as well as defining emission limits. This paper presents a review and analysis of the recent versions of the models: Simple Terrain Stationary Source Model; Complex Terrain Dispersion Model; Ozone, Carbon Monoxide and Nitrogen Dioxide Models; Long Range Transport Model; Other Phenomena Models: Fugitive Dust/Fugitive Emissions, Particulate Matter, Lead, Air Pathway Analyses - Air Toxics as well as Hazardous Waste. 8 refs., 4 tabs., 2 ills

  12. Effects of quantity and layers number of low trans margarines on puff pastry quality

    Directory of Open Access Journals (Sweden)

    Zahorec Jana J.

    2017-01-01

    The aim of this study was to investigate the effect of puff pastry margarine with reduced content of trans isomers in the production of puff pastry with enhanced nutritional value. Experiments were carried out on the basis of a 3² factorial design, wherein the independent variables were the amount of puff pastry margarine (30, 40 and 50% on flour weight) and the number of margarine layers formed during dough processing (108, 144, and 256). In order to determine the optimum values of the independent parameters, the study focused on defining the relevant qualitative indicators of the final product. By investigating the influence of the type of puff pastry margarine (ML1 and ML2) on the quality of puff pastry, it was determined that the physico-chemical properties of margarine ML1 were not optimal for puff pastry production. Margarine ML1 had lower hardness by 50-60%, lower SFC by 20-35% and worse thermal characteristics compared to margarine ML2. Only by application of the maximum amount of margarine ML1 and 144 margarine layers was a satisfactory quality of puff pastry obtained: a lift of 2.89, hardness of 17.7 kgs, volume of 83.6 cm³ and a total score of 14.8 points. Because of its better technological characteristics, margarine ML2 is favorable for making puff pastry. Significantly better physical properties and excellent pastry quality were obtained in samples with 50% margarine ML2 and 256 layers: higher lift by 45%, volume by 25% and total score by about 20% compared to the best-quality ML1 sample.

  13. Development of the gas puff charge exchange recombination spectroscopy (GP-CXRS) technique for ion measurements in the plasma edge

    International Nuclear Information System (INIS)

    Churchill, R. M.; Theiler, C.; Lipschultz, B.; Dux, R.; Pütterich, T.; Viezzer, E.

    2013-01-01

    A novel charge-exchange recombination spectroscopy (CXRS) diagnostic method is presented, which uses a simple thermal gas puff for its donor neutral source, instead of the typical high-energy neutral beam. This diagnostic, named gas puff CXRS (GP-CXRS), is used to measure ion density, velocity, and temperature in the tokamak edge/pedestal region with excellent signal-background ratios, and has a number of advantages over conventional beam-based CXRS systems. Here we develop the physics basis for GP-CXRS, including the neutral transport, the charge-exchange process at low energies, and effects of energy-dependent rate coefficients on the measurements. The GP-CXRS hardware setup is described on two separate tokamaks, Alcator C-Mod and ASDEX Upgrade. Measured spectra and profiles are also presented. Profile comparisons of GP-CXRS and a beam based CXRS system show good agreement. Emphasis is given throughout to describing guiding principles for users interested in applying the GP-CXRS diagnostic technique.

  14. Development of the gas puff charge exchange recombination spectroscopy (GP-CXRS) technique for ion measurements in the plasma edge

    Energy Technology Data Exchange (ETDEWEB)

    Churchill, R. M.; Theiler, C.; Lipschultz, B. [MIT Plasma Science and Fusion Center, Cambridge, Massachusetts 02139 (United States); Dux, R.; Pütterich, T.; Viezzer, E. [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Collaboration: Alcator C-Mod Team; ASDEX Upgrade Team

    2013-09-15

    A novel charge-exchange recombination spectroscopy (CXRS) diagnostic method is presented, which uses a simple thermal gas puff for its donor neutral source, instead of the typical high-energy neutral beam. This diagnostic, named gas puff CXRS (GP-CXRS), is used to measure ion density, velocity, and temperature in the tokamak edge/pedestal region with excellent signal-background ratios, and has a number of advantages over conventional beam-based CXRS systems. Here we develop the physics basis for GP-CXRS, including the neutral transport, the charge-exchange process at low energies, and effects of energy-dependent rate coefficients on the measurements. The GP-CXRS hardware setup is described on two separate tokamaks, Alcator C-Mod and ASDEX Upgrade. Measured spectra and profiles are also presented. Profile comparisons of GP-CXRS and a beam based CXRS system show good agreement. Emphasis is given throughout to describing guiding principles for users interested in applying the GP-CXRS diagnostic technique.

  15. Optimization of Fat-Reduced Puff Pastry Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Christoph Silow

    2017-02-01

    Puff pastry is a high-fat bakery product with fat playing a key role, both during the production process and in the final pastry. In this study, response surface methodology (RSM) was successfully used to evaluate puff pastry quality for the development of a fat-reduced version. The technological parameters modified included the level of roll-in fat, the number of fat layers (50–200) and the final thickness (1.0–3.5 mm) of the laminated dough. Quality characteristics of puff pastry were measured using the Texture Analyzer with an attached Extended Craft Knife (ECK) and Multiple Puncture Probe (MPP), the VolScan and the C-Cell imaging system. The number of fat layers and final dough thickness, in combination with the amount of roll-in fat, had a significant impact on the internal and external structural quality parameters. With technological changes alone, a fat-reduced (≥30%) puff pastry was developed. The qualities of fat-reduced puff pastries were comparable to conventional full-fat (33 wt %) products. A sensory acceptance test revealed no significant differences in taste of fatness or 'liking of mouthfeel'. Additionally, the fat-reduced puff pastry resulted in a significant (p < 0.05) positive correlation to 'liking of flavor' and overall acceptance by the assessors.

  16. Optimization of Fat-Reduced Puff Pastry Using Response Surface Methodology.

    Science.gov (United States)

    Silow, Christoph; Zannini, Emanuele; Axel, Claudia; Belz, Markus C E; Arendt, Elke K

    2017-02-22

    Puff pastry is a high-fat bakery product with fat playing a key role, both during the production process and in the final pastry. In this study, response surface methodology (RSM) was successfully used to evaluate puff pastry quality for the development of a fat-reduced version. The technological parameters modified included the level of roll-in fat, the number of fat layers (50-200) and the final thickness (1.0-3.5 mm) of the laminated dough. Quality characteristics of puff pastry were measured using the Texture Analyzer with an attached Extended Craft Knife (ECK) and Multiple Puncture Probe (MPP), the VolScan and the C-Cell imaging system. The number of fat layers and final dough thickness, in combination with the amount of roll-in fat, had a significant impact on the internal and external structural quality parameters. With technological changes alone, a fat-reduced (≥30%) puff pastry was developed. The qualities of fat-reduced puff pastries were comparable to conventional full-fat (33 wt %) products. A sensory acceptance test revealed no significant differences in taste of fatness or 'liking of mouthfeel'. Additionally, the fat-reduced puff pastry resulted in a significant (p < 0.05) positive correlation to 'liking of flavor' and overall acceptance by the assessors.
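
    A second-order response surface of the kind fitted in RSM studies such as this can be sketched in a few lines. In the following minimal example the two coded factors and the synthetic responses are hypothetical stand-ins for the study's real factors (roll-in fat level, layer count, dough thickness):

        import numpy as np

        def fit_quadratic_surface(x1, x2, y):
            """Least-squares fit of
            y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2."""
            X = np.column_stack([np.ones_like(x1), x1, x2,
                                 x1 * x2, x1**2, x2**2])
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs

        # Toy 3x3 factorial design at coded levels -1, 0, +1
        x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
        x1, x2 = x1.ravel(), x2.ravel()
        rng = np.random.default_rng(0)
        y = 2.0 + 0.5 * x1 - 0.3 * x2 + 0.1 * x1 * x2 + rng.normal(0, 0.05, 9)
        print(fit_quadratic_surface(x1, x2, y))

    The fitted coefficients are then examined (or the fitted surface optimized) to locate the factor settings giving the best predicted quality, which is how optimum layer counts and thicknesses are identified in studies of this type.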

  17. Interactions and "puff clustering" close to the critical point in pipe flow

    Science.gov (United States)

    Vasudevan, Mukund; Hof, Björn

    2017-11-01

    The first turbulent structures to arise in pipe flow are puffs. Albeit transient in nature, their spreading determines whether turbulence eventually becomes sustained. Due to the extremely long time scales involved in these processes, it is virtually impossible to directly observe the transition and the flow patterns that are eventually assumed in the long-time limit. We present a new experimental approach where, based on the memoryless nature of turbulent puffs, we continuously recreate the flow pattern exiting the pipe. These periodic boundary conditions enable us to show that the flow pattern eventually settles to a statistically steady state. While our study confirms the value of the critical point, Re_c ≈ 2040, the flow fields show that puffs interact over longer ranges than previously suspected. As a consequence, puffs tend to cluster, and these regions of large puff density travel across the puff pattern in a wave-like fashion. While the transition in Couette flow has been shown to fall into the 'directed percolation' universality class, pipe flow may be more complicated, since long-range interactions are prohibited for that type of percolation transition. Extensive measurements at the critical point will be presented to clarify the nature of the transition.

  18. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  19. Open source integrated modeling environment Delta Shell

    Science.gov (United States)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies these tasks. The software components of Delta Shell are easy to reuse separately from each other as well as part of an integrated environment that can run in command-line or graphical user interface mode. Most components of Delta Shell are developed in the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.

  20. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  1. Simulation study of huff-n-puff air injection for enhanced oil recovery in shale oil reservoirs

    Directory of Open Access Journals (Sweden)

    Hu Jia

    2018-03-01

    This paper is the first attempt to evaluate huff-n-puff air injection in a shale oil reservoir using a simulation approach. The recovery mechanisms and physical processes of huff-n-puff air injection in a shale oil reservoir are investigated by examining production performance, thermal behavior, reservoir pressure and fluid saturation features. Air flooding is used as the base case for a comparative study. The simulation study suggests that thermal drive is the main recovery mechanism for huff-n-puff air injection in the shale oil reservoir, but not for simple air flooding. The synergic recovery mechanism of air flooding in conventional light oil reservoirs can be replicated in shale oil reservoirs by using an air huff-n-puff injection strategy. Reducing the huff-n-puff cycle time better exploits this synergic recovery mechanism. O2 diffusion plays an important role in huff-n-puff air injection in shale oil reservoirs. Pressure transmissibility, as well as the ability to maintain reservoir pressure, is more pronounced in huff-n-puff air injection than in simple air flooding after the primary depletion stage. No obvious gas override is exhibited in either the air flooding or the air huff-n-puff injection scenarios in shale reservoirs. Huff-n-puff air injection has great potential for developing shale oil reservoirs. The results from this work may stimulate further investigations.

  2. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  3. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energy are attracting more and more attention. The present paper presents different mathematical models related to different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in the geographical and meteorological conditions specific to the central part of the Transylvania region are also presented. Conclusions based on the validation of these models are also given.

  4. "Polycyclische aromatische koolwaterstoffen. (PAK), nikkel en vanadium in luchtstof uit Bahrein (Perzische Golf): metingen en Puff-modelberekeningen voor dit gebied ten tijde van het branden van de oliebronnen in Kuwayt"

    NARCIS (Netherlands)

    Vaessen HAMG; Wilbers AAMM; Jekel AA; van Pul WAJ; van der Meulen A; Bloemen HJT; de Boer JLM

    1993-01-01

    In 1991, air particulate matter was sampled in Bahrain when soot clouds were over that region. In the same period, Puff-model calculations were carried out for the Persian Gulf region to forecast the dispersion of the combustion products and the environmental impact of the burning oil wells.

  5. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (author)

  6. Modeling a neutron rich nuclei source

    International Nuclear Information System (INIS)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J.; Mirea, M.

    2000-01-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (authors)

  7. Distribution of IOP measured with an air puff tonometer in a young population.

    Science.gov (United States)

    Hashemi, Hassan; Khabazkhoob, Mehdi; Nabovati, Payam; Yazdani, Negareh; Ostadimoghaddam, Hadi; Shiralivand, Ehsan; Derakhshan, Akbar; Yekta, AbbasAli

    2018-03-01

    To determine the normal range of intraocular pressure (IOP) in the young and its association with certain corneal parameters using a non-contact device. Subjects were selected from students of Mashhad University of Medical Sciences through stratified sampling. All participants had visual acuity testing, corneal imaging, a comprehensive slit-lamp examination by an ophthalmologist, and IOP measurement using a non-contact air-puff tonometer. Of the 1280 invitees, 1073 (83.8%) participated, and 1027 were eligible. Mean IOP was 16.38 mmHg [95% confidence interval (CI): 16.22-16.53] in the total sample, 16.14 mmHg (95% CI: 15.84-16.45) in men, and 16.48 mmHg (95% CI: 16.31-16.66) in women. There was a significant IOP difference between myopes and emmetropes (P = 0.031). Based on the multiple linear regression model, IOP associated directly with age and central corneal thickness (CCT), and inversely with corneal diameter, spherical equivalent (SE), and keratoconus. Based on standardized coefficients of the regression model, CCT and SE had the strongest association with IOP. In the present study, we demonstrated the IOP distribution in a young population using a non-contact method. CCT and SE were strongly associated with IOP.
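
    For context, standardized coefficients of the kind used above to rank predictors can be obtained by z-scoring each predictor and the response before an ordinary least-squares fit; the variable names in this minimal sketch are hypothetical:

        import numpy as np

        def standardized_coefficients(X, y):
            """OLS coefficients after z-scoring predictors and response,
            so that coefficient magnitudes are directly comparable."""
            Xz = (X - X.mean(axis=0)) / X.std(axis=0)
            yz = (y - y.mean()) / y.std()
            design = np.column_stack([np.ones(len(yz)), Xz])
            coeffs, *_ = np.linalg.lstsq(design, yz, rcond=None)
            return coeffs[1:]  # drop the intercept (zero by construction)

        # Columns might be age, CCT, corneal diameter, spherical equivalent:
        # betas = standardized_coefficients(X, iop)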

  8. Data analysis and source modelling for LISA

    International Nuclear Information System (INIS)

    Shang, Yu

    2014-01-01

    Gravitational waves (GWs) are among the most important predictions of general relativity. Besides indirect proof of the existence of GWs, there are already several ground-based detectors (such as LIGO and GEO) and a planned future space mission (LISA) which aim to detect GWs directly. A GW carries a large amount of information about its source; extracting this information can help us uncover the physical properties of the source and may even open a new window for understanding the Universe. Hence, GW data analysis is a challenging task in the search for GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge. We found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable, if not superior, to existing algorithms. In the second work, we introduce a new phenomenological waveform model for the extreme mass ratio inspiral (EMRI) system. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase, which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in the simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform is well suited to the analysis of EMRI signals.

  9. Sustaining an Online, Shared Community Resource for Models, Robust Open source Software Tools and Data for Volcanology - the Vhub Experience

    Science.gov (United States)

    Patra, A. K.; Valentine, G. A.; Bursik, M. I.; Connor, C.; Connor, L.; Jones, M.; Simakov, N.; Aghakhani, H.; Jones-Ivey, R.; Kosar, T.; Zhang, B.

    2015-12-01

    Over the last 5 years we have created a community collaboratory, Vhub.org [Palma et al., J. App. Volc. 3:2 doi:10.1186/2191-5040-3-2], as a place to find volcanology-related resources, a venue for users to disseminate tools, teaching resources and data, and an online platform to support collaborative efforts. As the community (currently > 6000 active users, from an estimated community of comparable size) embeds the tools in the collaboratory into educational and research workflows, it has become imperative to: a) redesign tools into robust, open source, reusable software for online and offline usage/enhancement; b) share large datasets with remote collaborators and other users seamlessly and securely; c) support complex workflows for uncertainty analysis, validation and verification, and data assimilation with large data. The focus on tool development/redevelopment has been twofold: first, to use best practices in software engineering and new hardware like multi-core and graphics processing units; second, to enhance capabilities to support inverse modeling, uncertainty quantification using large ensembles and design of experiments, calibration and validation. Among the software engineering practices we follow are open sourcing (facilitating community contributions), modularity and reusability. Our initial targets are four popular tools on Vhub: TITAN2D, TEPHRA2, PUFF and LAVA. Use of tools like these requires many observation-driven data sets, e.g. digital elevation models of topography, satellite imagery, field observations on deposits, etc. These data are often maintained in private repositories that are privately shared by 'sneaker-net'. As a partial solution to this we tested mechanisms using iRODS software for online sharing of private data with public metadata and access limits. Finally, we adapted the use of workflow engines (e.g. Pegasus) to support the complex data and computing workflows needed for usage like uncertainty quantification for hazard analysis using physical

  10. Injector design for liner-on-target gas-puff experiments

    Science.gov (United States)

    Valenzuela, J. C.; Krasheninnikov, I.; Conti, F.; Wessel, F.; Fadeev, V.; Narkis, J.; Ross, M. P.; Rahman, H. U.; Ruskov, E.; Beg, F. N.

    2017-11-01

    We present the design of a gas-puff injector for liner-on-target experiments. The injector is composed of an annular high atomic number (e.g., Ar and Kr) gas and an on-axis plasma gun that delivers an ionized deuterium target. The annular supersonic nozzle injector has been studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated (M > 5), ~1 cm radius gas profile that satisfies the theoretical requirement for best performance on ~1-MA current generators. The CFD simulations allowed us to study output density profiles as a function of the nozzle shape, gas pressure, and gas composition. We have performed line-integrated density measurements using a continuous wave (CW) He-Ne laser to characterize the liner gas density. The measurements agree well with the CFD values. We have used a simple snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector.
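
    The snowplow picture mentioned above treats the current sheath as a piston that sweeps up all the gas it encounters, so the equation of motion is d(mv)/dt = F with F set by the magnetic pressure. The sketch below is a minimal illustration with explicit time stepping; the geometry, fill density and drive current are assumed values, not the authors' parameters:

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

        def snowplow(current, rho0, r_in=0.005, r_out=0.02,
                     dt=1e-9, t_end=2e-6):
            """Integrate d(m v)/dt = F for a coaxial-gun current sheath.

            current: callable returning drive current I(t) in amperes.
            rho0   : fill-gas mass density (kg/m^3) swept up by the sheath.
            """
            area = np.pi * (r_out**2 - r_in**2)
            geom = MU0 * np.log(r_out / r_in) / (4.0 * np.pi)  # F = geom*I^2
            m, v, z = 1e-9, 0.0, 0.0  # small seed mass avoids divide-by-zero
            for t in np.arange(0.0, t_end, dt):
                force = geom * current(t)**2
                m_new = m + rho0 * area * v * dt  # mass swept up this step
                v = (m * v + force * dt) / m_new  # momentum conservation
                m, z = m_new, z + v * dt
            return z, v, m

        # e.g. a 200 kA quarter-sine drive over 2 microseconds:
        z, v, m = snowplow(lambda t: 2e5 * np.sin(np.pi * t / 4e-6), rho0=1e-4)
        print(f"sheath at z = {z*100:.1f} cm, v = {v/1e3:.0f} km/s")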

  11. Theoretical and experimental comparisons of Gamble 2 argon gas puff experiments

    International Nuclear Information System (INIS)

    Thornhill, J.W.; Young, F.C.; Whitney, K.G.; Davis, J.; Stephanakis, S.J.

    1990-01-01

    A one-dimensional radiative MHD analysis of an imploding argon gas puff plasma is performed. The calculations are set up to approximate the conditions of a series of argon gas puff experiments that were carried out on the NRL Gamble II generator. Annular gas puffs (2.5 cm diameter) are imploded with a 1.2-MA peak driving current for different initial argon mass loadings. Comparisons are made with the experimental results for implosion times, K- and L-shell x-ray emission, and energy coupled from the generator to the plasma load. The purpose of these calculations is to provide a foundation from which a variety of physical phenomena that influence the power and total energy of the x-ray emission can be analyzed. Comparisons with similar experimental and theoretical results for aluminum plasmas are discussed.

  12. Experimental studies of the argon-puff Z-pinch implosion process

    International Nuclear Information System (INIS)

    Huang Xianbin; Yang Libing; Gu Yuanchao; Deng Jianjun; Zhou Rongguo; Zou Jie; Zhou Shaotong; Zhang Siqun; Chen Guanghua; Chang Lihua; Li Fengping; Ouyang Kai; Li Jun; Yang Liang; Wang Xiong; Zhang Zhaohui

    2006-01-01

    A preliminary experiment for studying the argon-puff Z-pinch implosion process has been performed on the Yang accelerator. A ten-frame nanosecond temporally and spatially gated camera, a visible high-speed scanning camera, a differential laser interferometer, an X-ray time-integrated pinhole camera and an X-ray power system have been used to investigate the evolution of the argon-puff Z-pinch. Some typical results for the argon-puff Z-pinch during the implosion and pinch phases, including the 'zipper' effect, the necking phenomenon, the sausage instability, temperature changes and the effect of the load current rise time, are given and analyzed as examples, and some relevant conclusions are drawn. (authors)

  13. CO2 Huff-n-Puff Process in a Light Oil Shallow Shelf Carbonate Reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Boomer, R.J.; Cole, R.; Kovar, M.; Prieditis, J.; Vogt, J.; Wehner, S.

    1999-02-24

    The application of cyclic CO2, often referred to as the CO2 Huff-n-Puff process, may find its niche in the maturing waterfloods of the Permian Basin. Coupling the CO2 Huff-n-Puff process to miscible flooding applications could provide the needed revenue to sufficiently mitigate near-term negative cash flow concerns in capital-intensive miscible projects. Texaco Exploration and Production Inc. and the US Department of Energy have teamed up in an attempt to develop the CO2 Huff-n-Puff process in the Grayburg and San Andres formations, which are light oil, shallow shelf carbonate reservoirs that exist throughout the Permian Basin. This cost-shared effort is intended to demonstrate the viability of this underutilized technology in a specific class of domestic reservoir.

  14. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  15. An experimental study on Kr gas-puff Z-pinch

    International Nuclear Information System (INIS)

    Kuai Bin; Cong Peitian; Zeng Zhengzhong; Qiu Aici; Qiu Mengtong; Chen Hong; Liang Tianxue; He Wenlai; Wang Liangping; Zhang Zhong

    2002-01-01

    A Kr gas-puff Z-pinch experiment performed recently on the Qiang-guang I pulsed power generator is reported. The generator delivers a 1.5 MA current with a pulse width of 100 ns. The total X-ray energy as well as its spectrum has been obtained, and the average power of X-ray radiation in the 50-700 eV range, measured by XRDs, is 2 TW. The generator configuration, gas-puff load assembly and diagnostic system for the experiments are described.

  16. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing the effects of various policy options and to enable optimization of countermeasures applied to different parts of the source-risk chain. There are several advantages to developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of processes and sources to be included in the source-risk chain, the models presently available in the Netherlands are investigated. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing for each part of the source-risk chain the most suitable model, a source-risk chain model for radon may be realized. However, the calculation of dose from the radon concentrations and the status of the validation of most models should be improved. At the moment, calculations with the proposed source-risk model will give estimates with a large uncertainty. For further development of the source-risk model, an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners. The model owners execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs

  17. Mainstream Smoke Gas Phase Filtration Performance of Adsorption Materials Evaluated With A Puff-by-Puff Multiplex GC-MS Method

    Directory of Open Access Journals (Sweden)

    Xue L

    2014-12-01

    The mainstream smoke filtration performance of activated carbon, silica gel and polymeric aromatic resins for gas-phase components was evaluated using a puff-by-puff multiplex gas chromatography-mass spectrometry (GC-MS) analysis method (1). The 1R4F Kentucky reference cigarette samples were modified by placing the adsorbents in a plug/space/plug filter configuration. Due to differences in surface area and structural characteristics, the adsorbent materials studied showed different levels of filtration activity for the twenty-six constituents monitored. Activated carbon had significant adsorption activity for all the gas-phase smoke constituents observed except ethane and carbon dioxide, while silica gel had significant activity for polar components such as aldehydes, acrolein, ketones, and diacetyl. XAD-16 polyaromatic resins showed varied levels of activity for aromatic compounds, cyclic dienes and ketones.

  18. Developmental ecdysteroid titers and DNA puffs in larvae of two sciarid species, Rhynchosciara americana and Rhynchosciara milleri (Diptera: Sciaridae).

    Science.gov (United States)

    Soares, M A M; Hartfelder, K; Tesserolli de Souza, J M; Stocker, A J

    2015-10-01

    Ecdysteroid titers, developmental landmarks and the presence of prominent amplifying regions (DNA puffs) have been compared during late larval to pupal development in four groups of Rhynchosciara americana larvae and in R. americana and Rhynchosciara milleri. Three prominent DNA puffs (B2, C3 and C8) expand and regress sequentially on the rising phase of the 20-hydroxyecdysone (20E) titer in R. americana as a firm, cellular cocoon is being constructed. A sharp rise in 20E coincides with the regression of these puffs. The shape of the 20E curve is similar in R. milleri, a species that does not construct a massive cocoon, but the behavior of certain DNA puffs and their temporal relationship to the curve differs. Regions corresponding to B2 and C3 can be identified in R. milleri by banding pattern similarity with R. americana chromosomes and, in the case of B2, by hybridization to an R. americana probe. A B2 puff appears in R. milleri as the 20E titer rises but remains small in all gland regions. A puff similar to the R. americana C3 puff occurs in posterior gland cells of R. milleri (C3(Rm)) after the B2 puff, but this site did not hybridize to R. americana C3 probes. C3(Rm) incorporated ³H-thymidine above background, but showed less post-puff DNA accumulation than C3 of R. americana. R. americana C8 probes hybridized to a more distal region of the R. milleri C chromosome that did not appear to amplify or form a large puff. These differences can be related to developmental differences, in particular differences in cocoon construction between the two species.

  19. An open source business model for malaria.

    Science.gov (United States)

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense, since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken by making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  20. An open source business model for malaria.

    Directory of Open Access Journals (Sweden)

    Christine Årdal

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense, since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken by making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related

  1. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

    One may note the absence of progress in earthquake prediction research. Short-term prediction (on a diurnal scale, with localisation also predicted) has practical meaning. Failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in the faults. Geological and geophysical monitoring gives the basis for the notion of the geological medium as an open block dissipative system with limit energy saturation. The variations of the volume stressed state close to critical states are associated with the interaction of the inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more pronounced in the faults. In the background state, small blocks of the fault medium produce the sliding of great blocks in the faults. But under considerable variations of the ascending gas streams, the formation of bound chains of small blocks is possible, so that a bound state of great blocks may result (an earthquake source). Recently, using these notions, we proposed a dynamical earthquake source model based on a generalized chain of non-linear bound oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns its inhomogeneity and different external actions, imitating physical processes in the real source. Earlier, a weak inhomogeneous approximation without dissipation was considered. The latter permitted study of the FPU return (return to the initial state). Probabilistic properties in quasi-periodic movement were found. The problem of chain decay due to non-linearity and external perturbations was posed. The thresholds and the dependence of the lifetime of the chain are studied. Great fluctuations of lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong dissipation case, when oscillation movements are suppressed, specific effects are discovered. For noise action and constantly arising
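
    A chain of this general kind is easy to put on a computer. The sketch below integrates a damped beta-FPU chain (cubic nonlinear coupling) with an external action on one block; the parameters, the cubic form of the nonlinearity and the semi-implicit Euler scheme are illustrative assumptions, not the authors' formulation:

        import numpy as np

        def fpu_step(x, v, k=1.0, beta=0.5, gamma=0.05, drive=0.0, dt=1e-3):
            """One semi-implicit Euler step of a damped beta-FPU chain with
            fixed ends and unit masses; bond force k*d + beta*d**3."""
            d = np.diff(np.concatenate(([0.0], x, [0.0])))  # bond extensions
            bond = k * d + beta * d**3
            force = bond[1:] - bond[:-1] - gamma * v
            force[0] += drive            # external action on the first block
            v = v + dt * force
            x = x + dt * v
            return x, v

        n = 32
        x, v = np.zeros(n), np.zeros(n)
        x[n // 2] = 0.5                  # localized initial displacement
        for _ in range(20000):
            x, v = fpu_step(x, v)
        print(np.max(np.abs(x)))         # amplitude decays as dissipation acts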

  2. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on this model, the performance measures are analyzed under different output service schemes.

  3. Heat source model for welding process

    International Nuclear Information System (INIS)

    Doan, D.D.

    2006-10-01

    One of the major industrial stakes of welding simulation relates to the control of the mechanical effects of the process (residual stresses, distortions, fatigue strength...). These effects are directly dependent on the temperature evolutions imposed during the welding process. To model this thermal loading, an original method is proposed instead of the usual methods such as the equivalent heat source approach or the multi-physical approach. This method is based on the estimation of the weld pool shape together with the heat flux crossing the liquid/solid interface, from experimental data measured in the solid part. Its originality consists in solving an inverse Stefan problem specific to the welding process, and it is shown how to estimate the parameters of the weld pool shape. To solve the heat transfer problem, the liquid/solid interface is modeled by a Bezier curve (in 2D) or a Bezier surface (in 3D). This approach is well adapted to the wide diversity of weld pool shapes met in the majority of current welding processes (TIG, MIG-MAG, laser, FE, hybrid). The number of parameters to be estimated is small, from 2 to 5 in 2D and 7 to 16 in 3D, depending on the case considered. A sensitivity study leads to specifying the location of the sensors, their number, and the set of measurements required for a good estimate. Application of the method to TIG welding tests on thin stainless steel sheets, in fully penetrating and non-penetrating configurations, shows that a single measurement point is enough to estimate the various weld pool shapes in 2D, and two points in 3D, whether the penetration is full or not. In the last part of the work, a methodology is developed for the transient analysis. It is based on Duvaut's transformation, which overcomes the discontinuity at the liquid metal interface and therefore gives a continuous variable over the whole spatial domain. Moreover, it allows working on a fixed mesh grid, and the new inverse problem is equivalent to identifying a source
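
    The Bezier parameterization of the interface is the key to keeping the number of unknowns small: a 2D weld pool boundary described by a handful of control points leaves only a few coordinates to estimate. A minimal sketch (the control points below are hypothetical):

        import numpy as np
        from math import comb

        def bezier(control_points, n_samples=100):
            """Evaluate a Bezier curve from (m+1) control points, shape (m+1, 2)."""
            pts = np.asarray(control_points, dtype=float)
            m = len(pts) - 1
            t = np.linspace(0.0, 1.0, n_samples)
            # Bernstein basis: B_{i,m}(t) = C(m, i) * t**i * (1 - t)**(m - i)
            basis = np.stack([comb(m, i) * t**i * (1.0 - t)**(m - i)
                              for i in range(m + 1)], axis=1)
            return basis @ pts  # (n_samples, 2) points on the interface

        # A cubic half-profile of a weld pool: 2 mm deep on axis, 2 mm half-width
        interface = bezier([[0.0, -2.0], [1.5, -2.0], [2.0, -0.8], [2.0, 0.0]])

    The inverse problem then searches over these control-point coordinates until the temperatures predicted in the solid part match the sensor measurements.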

  4. Gas puff radiation performance as a function of radial mass distribution

    International Nuclear Information System (INIS)

    Coleman, Philip L.; Krishnan, Mahadevan; Prasad, Rahul; Qi, Niansheng; Waisman, Eduardo; Failor, B.H.; Levine, J.S.; Sze, H.

    2002-01-01

    The basic concept of a z-pinch, that JxB forces implode a shell of mass, creating a hot dense plasma on-axis, is coming under closer scrutiny. Wire arrays may start with an initial cold mass in a near 'ideal' shell, but in fact they appear to develop complex radial mass distributions well before the final x-ray output. We consider here the situation for gas puff z-pinches. While the ideal of a gas 'shell' has been the nominal objective for many years, detailed measurements of gas flow show that nozzles used for plasma radiation sources (PRS) also have complex radial distributions. In particular, there are significant data showing that the best x-ray yield comes from the least shell-like distributions. Recent experiments on the Double Eagle generator with argon have further enhanced this view. For those tests with a double 'shell' nozzle, there was a factor of almost 4 increase in yield when the relative mass (outer:inner) in the two shells was changed from 2:1 to less than 1:1. We suggest the following explanation. A configuration with most of its mass at large radii is subject to severe disruption by instabilities during the implosion. A more continuous radial mass distribution with dρ/dr < 0 may mitigate instability development (via the 'snowplow stabilization' mechanism) and thus enhance the thermalization of the kinetic energy of the imploding mass. In addition, the appropriate balance of outer to inner mass maximizes the formation of a strong shock in the core of the pinch that heats the plasma and leads to x-ray emission

  5. On The Development of One-way Nesting of Air-pollution Model Smog Into Numerical Weather Prediction Model Eta

    Science.gov (United States)

    Halenka, T.; Bednar, J.; Brechler, J.

    The spatial distribution of air pollution on the regional scale (the Bohemian region) is simulated by means of the Charles University puff model SMOG. The results are used for the assessment of the concentration fields of ozone, nitrogen oxides and other ozone precursors. The current improved version of the model covers up to 16 groups of basic compounds and is based on trajectory computation and puff interaction, both by means of Gaussian diffusion mixing and chemical reactions of the basic species. Generally, the method used for trajectory computation is valuable mainly for episode simulation; nevertheless, climatological studies can be addressed as well by means of an average wind rose. For the study presented here, a huge database of real emission sources was incorporated, with all kinds of sources included. Some problems with the background values of concentrations were removed. The model SMOG has been nested into the forecast model ETA to obtain appropriate meteorological input data. We can estimate air pollution characteristics both for episode analysis and for the prediction of future air quality conditions. The necessary prognostic variables from the numerical weather prediction model are taken for the region of central Bohemia, where the original puff model was tested. We used mainly the 850 hPa wind field for the computation of prognostic trajectories; the influence of surface temperature as a parameter of the photochemistry reactions, as well as the effect of cloudiness, has been tested.

  6. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real network testing. The model is derived from known recorded traffic sources that are analysed and statistically processed. As the results show, when used in a simulated network, the proposed model produces network traffic parameters very similar to those of the known traffic source.
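
    One common way to realize a statistically derived packet source is a two-state on-off Markov generator whose transition probabilities are fitted to the recorded traffic. The sketch below shows this generic construction as an illustration only; it is not necessarily the derivation used in the paper:

        import random

        def onoff_source(p_on, p_off, pkts_per_slot, n_slots, seed=0):
            """Two-state Markov (on-off) packet source: in each time slot
            the source enters 'on' with probability p_on or leaves it with
            probability p_off, emitting pkts_per_slot packets while 'on'."""
            rng = random.Random(seed)
            on, trace = False, []
            for _ in range(n_slots):
                if on:
                    on = rng.random() >= p_off
                else:
                    on = rng.random() < p_on
                trace.append(pkts_per_slot if on else 0)
            return trace

        # Burstiness is set by the transition probabilities; the long-run
        # mean rate is pkts_per_slot * p_on / (p_on + p_off):
        print(sum(onoff_source(0.1, 0.3, 5, 100000)) / 100000.0)  # ~1.25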

  7. One-dimensional magnetohydrodynamic calculations of a hydrogen-gas puff

    International Nuclear Information System (INIS)

    Maxon, S.; Nielsen, P.D.

    1981-01-01

    A one-dimensional Lagrangian calculation of the implosion of a hydrogen gas puff is presented. At maximum compression, 60% of the mass is located in a density spike 0.5 mm off the axis with a half width of 40 μm. The temperature on axis reaches 200 eV.

  8. Dynamics of sausage instabilities of a gas-puff Z-pinch

    International Nuclear Information System (INIS)

    Sopkin, Yu.V.; Dorokhin, L.A.; Koshelev, K.N.; Sidelnikov, Yu.V.

    1991-01-01

    The early stage of the sausage instability in a gas-puff Z-pinch has been registered in VUV and soft X-rays with a 10 ns framing camera. We hypothesize that the rings of plasma expanding from the sausage instability enable an alternative current path to dominate the formation of 'micropinches'. (orig.)

  9. High-energy electron acceleration in the gas-puff Z-pinch plasma

    Energy Technology Data Exchange (ETDEWEB)

    Takasugi, Keiichi, E-mail: takasugi@phys.cst.nihon-u.ac.jp [Institute of Quantum Science, Nihon University, 1-8 Kanda-Surugadai, Chiyoda, Tokyo 101-8308 (Japan); Miyazaki, Takanori [Institute of Quantum Science, Nihon University, 1-8 Kanda-Surugadai, Chiyoda, Tokyo 101-8308, Japan and Dept. Innovation Systems Eng., Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585 (Japan); Nishio, Mineyuki [Anan National College of Technology, 265 Aoki, Minobayashi, Anan, Tokushima 774-0017 (Japan)

    2014-12-15

    The characteristics of hard x-ray generation were examined in a gas-puff z-pinch experiment. An experiment with reversed voltage polarity was also conducted. In both the positive and negative discharges, x-rays were generated only from the anode surface, so it is considered that the electrons were accelerated by the induced electromagnetic force at the pinch time.

  10. Computerized dosimetry of I-125 sources model 6711

    International Nuclear Information System (INIS)

    Isturiz, J.

    2001-01-01

    This work covers: the physical presentation of the sources; radiation protection; the mathematical model of the I-125 source model 6711; data considered for the calculation program; experimental verification of the dose distribution; exposure rate and apparent activity; techniques for the use of the I-125 sources; and the calculation planning systems.

  11. Utilizing natural gas huff and puff to enhance production in heavy oil reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Wenlong, G.; Shuhong, W.; Jian, Z.; Xialin, Z. [Society of Petroleum Engineers, Kuala Lumpur (Malaysia)]|[PetroChina Co. Ltd., Beijing (China); Jinzhong, L.; Xiao, M. [China Univ. of Petroleum, Beijing (China)

    2008-10-15

    The L Block in the north structural belt of China's Tuha Basin is a super deep heavy oil reservoir. The gas to oil ratio (GOR) is 12 m³/m³ and the initial bubble point pressure is only 4 MPa. The low production can be attributed to high oil viscosity and low flowability. Although steam injection is the most widely used method for heavy oil production in China, it is not suitable for the L Block because of its depth. This paper reviewed pilot tests in which the natural gas huff and puff process was used to enhance production in the L Block. Laboratory experiments that included both conventional and unconventional PVT were conducted to determine the physical properties of heavy oil saturated by natural gas. The experiments revealed that the heavy oil can entrap the gas for more than several hours because of its high viscosity. A pseudo bubble point pressure, which depends on the depressurization rate, exists in man-made foamy oils at a level much lower than the bubble point pressure. Elastic energy could thus be maintained over a wider pressure range than in natural depletion without gas injection. A special experimental apparatus that can simulate the process of gas huff and puff in the reservoir was also introduced. The foamy oil could be seen during the huff and puff experiment. Most of the oil flowed to the producer in a pseudo single phase, which is among the most important mechanisms for enhancing production. A pilot test of a single well demonstrated that oil production increased from 1-2 cubic metres per day to 5-6 cubic metres per day via the natural gas huff and puff process. The stable production period, which was 5 to 10 days prior to huff and puff, was prolonged to 91 days in the first cycle and 245 days in the second cycle. 10 refs., 1 tab., 12 figs.

  12. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  13. Effects of ascorbic acid, transglutaminase and margarine amounts on the quality of puff pastry made from spelt flour

    OpenAIRE

    Šimurina, Olivera D.; Filipčev, Bojana V.; Bodroža-Solarov, Marija I.; Šoronja-Simović, Dragana M.

    2015-01-01

    Puff pastry has a delicate, flaky texture that comes from a unique combination of fat and dough. These bakery products are made from many thin layers of dough separated by alternating fat layers, which is why they are considered a high-fat food. The properties of puff pastry depend mostly on the quality of the flour, which must be specifically tailored for this purpose. The flour most commonly used in the production of puff pastry is refined wheat flour. Lately, the requirements of con...

  14. Gamma-ray imaging and holdup assays of 235-F PuFF cells 1 & 2

    Energy Technology Data Exchange (ETDEWEB)

    Aucott, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-12-20

    Savannah River National Laboratory (SRNL) Nuclear Measurements (L4120) was tasked with performing enhanced characterization of the holdup in the PuFF shielded cells. Assays were performed in accordance with L16.1-ADS-2460 using two high-resolution gamma-ray detectors. The first detector, an In Situ Object Counting System (ISOCS)-characterized detector, was used in conjunction with the ISOCS Geometry Composer software to quantify grams of holdup. The second detector, a Germanium Gamma-ray Imager (GeGI), was used to visualize the location and relative intensity of the holdup in the cells. Carts and collimators were specially designed to perform optimum assays of the cells. Thick, pencil-beam tungsten collimators were fabricated to allow for extremely precise targeting of items of interest inside the cells. Carts were designed with a wide range of motion to position and align the detectors. A total of 24 measurements were made, each typically 24 hours or longer to provide sufficient statistical precision. This report presents the results of the enhanced characterization for cells 1 and 2. The measured gram values agree very well with results from the 2014 study. In addition, images were created using both the 2014 data and the new GeGI data. The GeGI images of the cell walls reveal significant Pu-238 holdup on the surface of the walls in cells 1 and 2. Additionally, holdup is visible in the two pass-throughs from cell 1 to the wing cabinets. This report documents the final element (exterior measurements coupled with gamma-ray imaging and modeling) of the enhanced characterization of cells 1-5 (East Cell Line).

  15. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
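
    To make the covariance construction step concrete, the following sketch generates a 1-D stochastic slip realization whose 1-point statistics (mean, standard deviation) and 2-point statistics (here an assumed exponential auto-correlation with a chosen correlation length) match prescribed targets via a Cholesky factor of the covariance matrix. All parameter values and the exponential correlation form are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical sketch: draw a slip distribution with target 1-point
    # statistics (mean, std) and 2-point statistics (assumed exponential
    # auto-correlation with correlation length a_x). Illustrative numbers.
    n, dx = 200, 0.5                  # along-strike cells and spacing (km)
    a_x = 10.0                        # assumed correlation length (km)
    mean_slip, std_slip = 1.0, 0.5    # target 1-point statistics (m)

    x = np.arange(n) * dx
    lags = np.abs(x[:, None] - x[None, :])
    cov = std_slip**2 * np.exp(-lags / a_x)      # target auto-covariance

    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    slip = mean_slip + L @ np.random.standard_normal(n)
    slip = np.clip(slip, 0.0, None)   # slip cannot be negative
    ```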

  16. Computational model of Amersham I-125 source model 6711 and Prospera Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy treats cancer at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the Amersham Health I-125 source model 6711 and the Prospera Pd-103 source model MED3633 in order to obtain the dosimetric parameter known as the dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in the treatment planning of LDR prostate brachytherapy. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The dose rate constants obtained were 0.941 and 0.65 for the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)
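
    For reference, the TG-43 formalism defines the dose rate constant as the dose rate at the reference point (1 cm from the source on the transverse axis) per unit air-kerma strength:

    ```latex
    \Lambda = \frac{\dot{D}(r_0, \theta_0)}{S_K}, \qquad r_0 = 1~\mathrm{cm}, \quad \theta_0 = \pi/2 .
    ```

    A Monte Carlo estimate of Λ therefore follows from tallying the dose rate at the reference point and normalizing by the simulated air-kerma strength of the same source model.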

  17. Studies and modeling of cold neutron sources

    International Nuclear Information System (INIS)

    Campioni, G.

    2004-11-01

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was organized along the following three axes. First, the gathering of the specific information that forms the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing-down, the different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte-Carlo parametric studies with CPU-time savings reaching a factor of 50. A coupling module simulating neutron guides has also been developed and implemented in the Monte-Carlo code McStas. Thirdly, a complete study carried out to validate the installed calculation chain. These studies focus on 3 cold sources currently in operation: SP1 of the Orphee reactor and 2 other sources (SFH and SFV) of the HFR at the Laue-Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  18. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic (MCG) signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component MCG data. The results show that a distributed source model gives the better accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.

  19. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...
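
    The abstract does not give the control law itself; as a rough illustration of finite-control-set MPC of the kind commonly applied to such converters, the sketch below enumerates candidate output voltage levels, predicts the load current one step ahead with a simple R-L model, and applies the candidate minimizing a quadratic tracking cost. The plant model, parameter values and candidate set are assumptions, not the authors' Z-source formulation.

    ```python
    import numpy as np

    # Assumed load/filter parameters, sampling time and DC-link voltage.
    R, L_f, Ts, Vdc = 0.5, 2e-3, 50e-6, 400.0
    candidates = np.array([-Vdc, 0.0, Vdc])   # reachable output voltage levels

    def mpc_step(i_meas, i_ref, v_grid=0.0):
        """Pick the switch state whose one-step prediction best tracks i_ref."""
        # Forward-Euler discretization of L_f di/dt = v - R*i - v_grid
        i_pred = i_meas + (Ts / L_f) * (candidates - R * i_meas - v_grid)
        cost = (i_pred - i_ref) ** 2              # quadratic tracking cost
        best = np.argmin(cost)
        return candidates[best], i_pred[best]

    v_apply, i_next = mpc_step(i_meas=4.8, i_ref=5.0)
    ```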

  20. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
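
    As a minimal illustration of such a mixing model: with two isotope tracers and three sources, the two mass-balance equations plus the sum-to-one constraint determine the source fractions uniquely; with more sources than that ("too many sources"), the system becomes underdetermined. The signature values below are invented for illustration.

    ```python
    import numpy as np

    # Two tracers (delta-13C, delta-15N) and three sources give a square
    # linear system in the three unknown source fractions.
    d13C_sources = [-28.0, -20.0, -12.0]   # per-source delta-13C signatures
    d15N_sources = [  2.0,   8.0,   5.0]   # per-source delta-15N signatures
    mix = np.array([-21.0, 5.5, 1.0])      # observed mixture values, plus "1"

    A = np.array([d13C_sources,
                  d15N_sources,
                  [1.0, 1.0, 1.0]])        # fractions must sum to one
    fractions = np.linalg.solve(A, mix)    # unique when A is nonsingular
    ```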

  1. PUFF-IV, Code System to Generate Multigroup Covariance Matrices from ENDF/B-VI Uncertainty Files

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: The PUFF-IV code system processes ENDF/B-VI formatted nuclear cross section covariance data into multigroup covariance matrices. PUFF-IV is the newest release in this series of codes used to process ENDF uncertainty information and to generate the desired multi-group correlation matrix for the evaluation of interest. This version includes corrections and enhancements over previous versions. It is written in Fortran 90 and allows for a more modular design, thus facilitating future upgrades. PUFF-IV enhances support for resonance parameter covariance formats described in the ENDF standard and now handles almost all resonance parameter covariance information in the resolved region, with the exception of the long range covariance sub-subsections. PUFF-IV is normally used in conjunction with an AMPX master library containing group averaged cross section data. Two utility modules are included in this package to facilitate the data interface. The module SMILER allows one to use NJOY generated GENDF files containing group averaged cross section data in conjunction with PUFF-IV. The module COVCOMP allows one to compare two files written in COVERX format. 2 - Methods: Cross section and flux values on a 'super energy grid,' consisting of the union of the required energy group structure and the energy data points in the ENDF/B-V file, are interpolated from the input cross sections and fluxes. Covariance matrices are calculated for this grid and then collapsed to the required group structure. 3 - Restrictions on the complexity of the problem: PUFF-IV cannot process covariance information for energy and angular distributions of secondary particles. PUFF-IV does not process covariance information in Files 34 and 35; nor does it process covariance information in File 40. These new formats will be addressed in a future version of PUFF
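
    The collapse from the 'super energy grid' to the user's group structure is, in essence, a flux-weighted sandwich product. The sketch below shows that operation schematically; it is an illustration of the linear algebra involved, not PUFF-IV's actual algorithm, data layout or weighting details.

    ```python
    import numpy as np

    def collapse_covariance(cov_fine, flux_fine, groups):
        """cov_fine: (n,n) covariance on the fine 'super grid';
        flux_fine: (n,) weights; groups: one index array per coarse group."""
        S = np.zeros((len(groups), len(flux_fine)))
        for g, idx in enumerate(groups):
            S[g, idx] = flux_fine[idx] / flux_fine[idx].sum()  # flux weights
        return S @ cov_fine @ S.T          # sandwich rule: C_G = S C S^T

    # Example: 6 fine points collapsed into 2 coarse groups
    cov = 0.01 * np.eye(6)
    flux = np.ones(6)
    print(collapse_covariance(cov, flux, [np.arange(3), np.arange(3, 6)]))
    ```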

  2. Three-Dimensional Neutral Transport Simulations of Gas Puff Imaging Experiments

    International Nuclear Information System (INIS)

    Stotler, D.P.; DIppolito, D.A.; LeBlanc, B.; Maqueda, R.J.; Myra, J.R.; Sabbagh, S.A.; Zweben, S.J.

    2003-01-01

    Gas Puff Imaging (GPI) experiments are designed to isolate the structure of plasma turbulence in the plane perpendicular to the magnetic field. Three-dimensional aspects of this diagnostic technique as used on the National Spherical Torus eXperiment (NSTX) are examined via Monte Carlo neutral transport simulations. The radial width of the simulated GPI images is in rough agreement with observations. However, the simulated emission clouds are angled approximately 15 degrees with respect to the experimental images. The simulations indicate that the finite extent of the gas puff along the viewing direction does not significantly degrade the radial resolution of the diagnostic. These simulations also yield effective neutral density data that can be used in an approximate attempt to infer two-dimensional electron density and temperature profiles from the experimental images

  3. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in the ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  4. Puff and bite: the relationship between the glucocorticoid stress response and anti-predator performance in checkered puffer (Sphoeroides testudineus).

    Science.gov (United States)

    Cull, Felicia; O'Connor, Constance M; Suski, Cory D; Shultz, Aaron D; Danylchuk, Andy J; Cooke, Steven J

    2015-04-01

    Individual variation in the endocrine stress response has been linked to survival and performance in a variety of species. Here, we evaluate the relationship between the endocrine stress response and anti-predator behaviors in wild checkered puffers (Sphoeroides testudineus) captured at Eleuthera Island, Bahamas. The checkered puffer has a unique and easily measurable predator avoidance strategy, which is to inflate or 'puff' to deter potential predators. In this study, we measured baseline and stress-induced circulating glucocorticoid levels, as well as bite force, a performance measure that is relevant to both feeding and predator defence, and 'puff' performance. We found that puff performance and bite force were consistent within individuals, but generally decreased following a standardized stressor. Larger puffers were able to generate a higher bite force, and larger puffers were able to maintain a more robust puff performance following a standardized stressor relative to smaller puffers. In terms of the relationship between the glucocorticoid stress response and performance metrics, we found no relationship between post-stress glucocorticoid levels and either puff performance or bite force. However, we did find that baseline glucocorticoid levels predicted the ability of a puffer to maintain a robust puff response following a repeated stressor, and this relationship was more pronounced in larger individuals. Our work provides a novel example of how baseline glucocorticoids can predict a fitness-related anti-predator behavior. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Flow behavior of N2 huff and puff process for enhanced oil recovery in tight oil reservoirs.

    Science.gov (United States)

    Lu, Teng; Li, Zhaomin; Li, Jian; Hou, Dawei; Zhang, Dingyong

    2017-11-16

    In the present work, the potential of the N2 huff and puff process to enhance the recovery of tight oil reservoirs was evaluated. N2 huff and puff experiments were performed in micromodels and cores to investigate the flow behaviors in different cycles. The results showed that, in the first cycle, N2 was dispersed in the oil, forming foamy oil flow. In the second cycle, the dispersed gas bubbles gradually coalesced into a continuous gas phase. In the third cycle, N2 was produced in the form of a continuous gas phase. The coreflood tests showed that the primary recovery was only 5.32%, while the recoveries for the three N2 huff and puff cycles were 15.1%, 8.53% and 3.22%, respectively. The recovery and the pressure gradient in the first cycle were high; as the number of huff and puff cycles increased, the oil recovery and the pressure gradient rapidly decreased. The oil recovery of N2 huff and puff was found to increase as the N2 injection pressure and the soaking time increased. These results showed that a properly designed and controlled N2 huff and puff process can lead to enhanced recovery of tight oil reservoirs.

  6. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  7. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  8. Hedonic Predictors of Tobacco Dependence: A Puff Guide to Smoking Cessation

    Science.gov (United States)

    2015-04-07

    process may not always be the case. While Perkins et al. (31) showed the potential for a connection between puff volume, considered reinforcement...effect can be attributed to the effects of "acute tolerance" (31). Nicotine is thought to acutely desensitize nicotinic receptors ("acute tolerance...company that performs biological assays) for the cotinine assay. Salivary cotinine levels were measured through an enzyme immunoassay conducted by

  9. Smoking Topography among Korean Smokers: Intensive Smoking Behavior with Larger Puff Volume and Shorter Interpuff Interval.

    Science.gov (United States)

    Kim, Sungroul; Yu, Sol

    2018-05-18

    The difference in smokers' topography has been found to be a function of many factors, including sex, personality, nicotine yield, cigarette type (i.e., flavored versus non-flavored) and ethnicity. We evaluated the puffing behaviors of Korean smokers and their association with smoking-related biomarker levels. A sample of 300 participants was randomly recruited from metropolitan areas in South Korea. Topography measures during a 24-hour period were obtained using a CReSS pocket device. Korean male smokers smoked two puffs less per cigarette compared to female smokers (15.0 (13.0–19.0) vs. 17.5 (15.0–21.0), median (interquartile range)), but had a significantly larger puff volume (62.7 (52.7–75.5) mL vs. 53.5 (42.0–64.2) mL; p = 0.012). The interpuff interval was similar between men and women (8.9 (6.5–11.2) s vs. 8.3 (6.2–11.0) s; p = 0.122) but much shorter than in other studies. A dose-response association (p = 0.0011) was observed between daily total puff volume and urinary cotinine concentration, after controlling for sex, age, household income level and nicotine addiction level. An understanding of the differences in topography measures, particularly the larger puff volume and shorter interpuff interval of Korean smokers, may help to overcome a potential underestimation of internal doses of hazardous byproducts of smoking.

  10. Current distribution measurements inside an electromagnetic plasma gun operated in a gas-puff mode

    OpenAIRE

    Poehlmann, Flavio R.; Cappelli, Mark A.; Rieker, Gregory B.

    2010-01-01

    Measurements are presented of the time-dependent current distribution inside a coaxial electromagnetic plasma gun. The measurements are carried out using an array of six axially distributed dual-Rogowski coils in a balanced circuit configuration. The radial current distributions indicate that operation in the gas-puff mode, i.e., the mode in which the electrode voltage is applied before injection of the gas, results in a stationary ionization front consistent with the presence of a plasma def...

  11. Gas-puff Z-pinch experiment on the LIMAY-I

    International Nuclear Information System (INIS)

    Takasugi, K.; Miyamoto, T.; Akiyama, H.; Shimomura, N.; Sato, M.; Tazima, T.

    1989-01-01

    A gas-puff z-pinch plasma has been produced on the pulsed power generator LIMAY-I at IPP, Nagoya University. The stored energy of the generator is 13 kJ, and it generates a 600 kV, 70 ns, 3 Ω power pulse. Ar or He gas is puffed from a hollow nozzle of 18 mm diameter, and a z-pinch plasma is produced by a discharge between electrodes with a 3 mm gap

  12. Effects of puff times on intraocular pressure agreement between non-contact and Goldmann applanation tonometers

    Directory of Open Access Journals (Sweden)

    Ibrahim Toprak

    2014-07-01

    AIM: To compare intraocular pressure (IOP) values obtained from two different puff modes of the Canon TX-F non-contact tonometer (NCT) and the Goldmann applanation tonometer (GAT) in patients with primary open angle glaucoma (POAG). METHODS: The study group comprised 55 right eyes of 55 patients with a confirmed diagnosis of POAG, which were under treatment. All patients underwent detailed ophthalmological examinations, optical coherence tomography imaging and automated perimetry. Intraocular pressure measurements were performed using the 1-puff mode of the NCT (NCT1), the 3-puffs mode of the NCT (NCT3) and the GAT, in that order, with 5-minute intervals. RESULTS: Fifty-five eyes of 55 patients with POAG (mean age 64.1±8.1 years) were enrolled in the study. NCT1 and NCT3 gave similar IOP values when compared with GAT measurements (14.22±3.42, 14.28±3.29 and 14.66±3.49 mmHg, respectively; P=0.291). Intertonometer agreement was assessed using the Bland-Altman method. The 95% limits of agreement (LoA) for the NCT1-GAT, NCT3-GAT and NCT1-NCT3 comparisons were -4.9 to +4.4 mmHg, -4.1 to +3.4 mmHg, and -3.4 to +3.3 mmHg, respectively. CONCLUSION: Although IOP measurements obtained from the two puff modes of the NCT and the GAT showed similar values, the wide range of the LoA might restrict the interchangeable use of NCT1, NCT3 and GAT in POAG patients.
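
    For readers unfamiliar with the Bland-Altman method used above, the 95% limits of agreement are simply the mean paired difference plus or minus 1.96 standard deviations of the differences; a minimal sketch with invented readings:

    ```python
    import numpy as np

    def limits_of_agreement(a, b):
        """Bland-Altman 95% limits of agreement for paired measurements."""
        d = np.asarray(a, float) - np.asarray(b, float)
        bias = d.mean()                      # mean difference (bias)
        half_width = 1.96 * d.std(ddof=1)    # 1.96 SD of the differences
        return bias - half_width, bias + half_width

    nct1 = [14.0, 15.5, 12.0, 16.0, 13.5]    # hypothetical IOP readings (mmHg)
    gat  = [14.5, 15.0, 13.0, 15.5, 14.0]
    print(limits_of_agreement(nct1, gat))
    ```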

  13. Quasistatic modelling of the coaxial slow source

    International Nuclear Information System (INIS)

    Hahn, K.D.; Pietrzyk, Z.A.; Vlases, G.C.

    1986-01-01

    A new 1-D Lagrangian MHD numerical code in flux coordinates has been developed for the Coaxial Slow Source (CSS) geometry. It utilizes the quasistatic approximation, so that the plasma evolves as a succession of equilibria. The P = P(ψ) equilibrium constraint, along with the assumption of infinitely fast axial temperature relaxation on closed field lines, is incorporated. An axially elongated, rectangular plasma is assumed. The axial length is adjusted by the global average condition, or assumed to be fixed. In this paper, predictions obtained with the code and a limited comparison with experimental data are presented

  14. Hydromagnetic Rayleigh–Taylor instability in high-velocity gas-puff implosions

    International Nuclear Information System (INIS)

    Roderick, N.F.; Peterkin, R.E. Jr.; Hussey, T.W.; Spielman, R.B.; Douglas, M.R.; Deeney, C.

    1998-01-01

    Experiments using the Saturn pulsed power generator have produced high-velocity z-pinch plasma implosions with velocities over 100 cm/μs using both annular and uniform-fill gas injection initial conditions. Both types of implosion show evidence of the hydromagnetic Rayleigh–Taylor instability, with the uniform-fill plasmas producing a more spatially uniform pinch. Two-dimensional magnetohydrodynamic simulations including unsteady flow of gas from a nozzle into the diode region have been used to investigate these implosions. The instability develops from the nonuniform gas flow field that forms as the gas expands from the injection nozzle. Instability growth is limited to the narrow unstable region of the current sheath. For the annular puff, the unstable region breaks through the inner edge of the annulus, increasing nonlinear growth as mass ejected from the bubble regions is not replenished by accretion. This higher growth leads to bubble thinning and disruption, producing greater nonuniformity at pinch for the annular puff. The uniform puff provides gas to replenish bubble mass loss until just before pinch, resulting in less bubble thinning and a more uniform pinch. copyright 1998 American Institute of Physics

  15. A gas puff experiment for partial simulation of compact toroid formation on MARAUDER

    International Nuclear Information System (INIS)

    Englert, S.E.; Englert, T.J.; Degnan, J.H.; Gahl, J.M.

    1994-01-01

    Preliminary results are reported from a single-valve gas puff experiment to determine the spatial and spectral distribution of a gas during the early ionization stages. This experiment has been developed as a diagnostic test-bed for partial simulation of compact toroid formation on MARAUDER. The manner in which the experimental hardware has been designed allows for a wide range of diagnostic access to evaluate the early-time evolution of the ionization process. This evaluation will help contribute to a clearer understanding of the initial conditions for the formation stage of the compact toroid in the MARAUDER experiment, where 60 of the same puff valves are used. For the experiment, a small slice of the MARAUDER cylindrical gas injection and expansion region geometry has been re-created, but in Cartesian coordinates. All of the conditions in the experiment adhere as closely as possible to the MARAUDER experiment. The timing, current rise time, capacitance, resistance and inductance are appropriate both to the simulation of one of the 60 puff valves and to the current delivery to the load. Both time-resolved images and spectral data have been gathered for the visible light emission of the plasma. Processed images reveal characteristics of the spatial distribution of the current. Spectral data provide information on electron temperature and density, and on the entrainment of contaminants

  16. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum neighbor weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a bigger chance to rectify the local source location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
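
    The paper's neighbor-weighting rule is its key novelty; for orientation, the classical FOCUSS-style re-weighted minimum-norm iteration it builds on looks roughly like the sketch below (per-point weights only, toy random data; CMOSS would additionally pool each point's neighbors into the weight).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 20, 100
    A = rng.standard_normal((m, n))           # lead-field / gain matrix
    x_true = np.zeros(n); x_true[[10, 55]] = [1.0, -0.5]
    b = A @ x_true                            # simulated measurements

    x = np.ones(n)                            # flat initial estimate
    for _ in range(25):
        W = np.diag(np.abs(x) + 1e-12)        # weights from previous solution
        AW = A @ W
        x = W @ np.linalg.pinv(AW) @ b        # weighted minimum-norm update
    ```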

  17. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  18. Conception and realization of optical diagnosis to characterize gas puffs in Z-Pinch experiments. Comparison between experiment and computation. Study of a new nozzle

    International Nuclear Information System (INIS)

    Barnier, J.N.

    1998-01-01

    The CEA runs research programs on plasmas. A good way to generate such X-ray sources is to perform Z-pinch experiments, i.e. the radial implosion onto its axis of a conducting cylinder carrying a very high current. The AMBIORIX machine, which allows such experiments, requires the use of gaseous conductors. The gas puff emerging from the nozzle is ionised by a 2 MA current. The aim of this thesis is the characterisation of the gas source before the current pulse. For this purpose, several optical diagnostics have been tested. Interferometric measurements give the gas density profile. Various gases have been studied: neon, argon, helium and aluminium. For aluminium, resonant interferometric imaging was used. A new nozzle with an innovative injection technique has been designed, characterised and tested in a Z-pinch configuration. Finally, light-scattering (Rayleigh) measurements were performed to detect dust in the gas. (A.L.B.)

  19. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  20. Modeling Group Interactions via Open Data Sources

    Science.gov (United States)

    2011-08-30

    data. The state-of-the-art search engines are designed to support general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ

  1. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  2. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories and, together with other techniques, allow the interpretation of transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  3. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  4. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

    A phenomenological based seismic source model is important in quantifying the important physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transport seismic coupling experience at one test site to another. These same characterizations in a non-proliferation environment find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which when convolved with the propagation path effects produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions and replaces the source with a set of force moments, the first degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region are critical to a unique physical understanding of the equivalent elastic source function

  5. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for the two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration
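
    A schematic of the optimization part: pick the source location and release window that minimize the misfit between observed and simulated concentrations. The forward model below is a stand-in placeholder, not the paper's groundwater transport model, and the ANN-estimated lag time is omitted for brevity.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def simulate(xs, t_on, t_off, obs_times, obs_wells):
        # Placeholder forward model: concentration at each well/time for a
        # source at xs releasing between t_on and t_off (illustrative only).
        return np.exp(-((obs_wells[:, None] - xs) ** 2)) * (
            (obs_times[None, :] >= t_on) & (obs_times[None, :] <= t_off))

    obs_wells = np.array([1.0, 2.0, 3.0])
    obs_times = np.linspace(0, 10, 21)
    c_obs = simulate(2.2, 1.0, 4.0, obs_times, obs_wells)  # synthetic "data"

    def objective(p):
        c_sim = simulate(p[0], p[1], p[2], obs_times, obs_wells)
        return np.sum((c_obs - c_sim) ** 2)    # spatio-temporal misfit

    res = minimize(objective, x0=[1.5, 0.5, 5.0], method="Nelder-Mead")
    ```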

  6. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  7. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  8. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method allows significantly improved algorithm testing over a large test set.

  9. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used for characterizing earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra for many earthquakes, especially small earthquakes, at once and compare them with each other. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, separated by a corner frequency. The corner frequency has often been converted to the stress drop under the assumption of circular crack models. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016] thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which will affect the seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where the shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at a short distance from the source, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of the deviation from the standard omega-square model, the update of the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
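
    For reference, the omega-square spectrum the study takes as its baseline can be written with an adjustable high-frequency falloff exponent n (n = 2 recovers the standard model), which is the quantity whose departures the abstract discusses:

    ```python
    import numpy as np

    def source_spectrum(f, omega0=1.0, fc=1.0, n=2.0):
        """Amplitude spectrum |u(f)| = omega0 / (1 + (f/fc)**n):
        flat below the corner frequency fc, falling off as f**(-n) above it."""
        f = np.asarray(f, dtype=float)
        return omega0 / (1.0 + (f / fc) ** n)

    freqs = np.logspace(-2, 2, 200)            # 0.01-100 Hz
    spec = source_spectrum(freqs, omega0=1e17, fc=0.5, n=2.0)
    ```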

  10. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  11. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since abrupt source introduction yields intense numerical transients in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposite demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is intensively repeated in the process of device parameter optimization. The optimized source models proposed here are realized and tested within an in-house FDTD simulation environment.
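
    A minimal sketch of the kind of temporal shaping discussed above: a sinusoidal excitation ramped on with a raised-cosine envelope to avoid the abrupt turn-on transient. The envelope choice and all parameter values are illustrative assumptions, not the paper's optimized shaping functions.

    ```python
    import numpy as np

    f0, dt = 10e9, 1e-12          # source frequency (Hz) and FDTD time step (s)
    n_steps, n_ramp = 4000, 500   # total steps and assumed ramp length

    def source_value(step):
        """Sinusoidal source with a raised-cosine (Hann-type) turn-on."""
        envelope = 1.0
        if step < n_ramp:
            envelope = 0.5 * (1.0 - np.cos(np.pi * step / n_ramp))
        return envelope * np.sin(2.0 * np.pi * f0 * step * dt)

    excitation = np.array([source_value(k) for k in range(n_steps)])
    ```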

  12. Impact of low-trans fat compositions on the quality of conventional and fat-reduced puff pastry.

    Science.gov (United States)

    Silow, Christoph; Zannini, Emanuele; Arendt, Elke K

    2016-04-01

    Four vegetable fat blends (FBs) with low trans-fatty acid content (TFA ≤ 0.6 %) and various ratios of palm stearin (PS) to rapeseed oil (RO) were characterised and examined for their application in puff pastry production. The amount of PS decreased from FB1 to FB4 while the RO content simultaneously increased. A range of analytical methods was used to characterise the FBs, including solid fat content (SFC), differential scanning calorimetry (DSC), cone penetrometry and rheological measurements. The internal and external structural quality parameters of the baked puff pastry were investigated using a texture analyser equipped with an Extended Craft Knife (ECK), a VolScan and a C-Cell image system. Puff pastry containing FB1 and FB2 achieved excellent baking results for full-fat and fat-reduced puff pastry; hence these FBs provided adequate shortening properties. A fat reduction of 40 % using FB2 and a reduction of saturated fatty acids (SAFA) by 49 %, compared to the control, did not lead to adverse effects on lift and specific volume. The higher amount of RO and the lower SAFA content compared to FB1, coupled with the satisfying baking results, make FB2 the fat of choice in this study. FB3 and FB4 were found to be unsuitable for puff pastry production because of their melting behaviour.

  13. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  14. Eye retraction and rotation during Corvis ST 'air puff' intraocular pressure measurement and its quantitative analysis.

    Science.gov (United States)

    Boszczyk, Agnieszka; Kasprzak, Henryk; Jóźwik, Agnieszka

    2017-05-01

    The aim of this study was to analyse the indentation and deformation of the corneal surface, as well as the eye retraction, which occur during air puff intraocular pressure (IOP) measurement. A group of 10 subjects was examined using a non-contact Corvis ST tonometer, which records image sequences of corneas deformed by an air puff. The obtained images were processed numerically in order to extract information about corneal deformation, indentation and eyeball retraction. The time dependency of the apex deformation/eye retraction ratio and the curve of the dependency between apex indentation and eye retraction take characteristic shapes for individual subjects. It was noticed that the eye globes tend to rotate towards the nose in response to the air blast during measurement. This means that the eye globe not only displaces but also rotates during retraction. Some new parameters describing the shape of this curve are introduced. Our data show that intraocular pressure and the amplitude of corneal indentation are inversely related (r(8) = -0.83, P = 0.0029), but the correlation between intraocular pressure and the amplitude of eye retraction is low and not significant (r(8) = -0.24, P = 0.51). The curves describing corneal behaviour during air puff tonometry were determined and show that the eye globe rotates towards the nose during measurement. In addition, eye retraction amplitudes may be related to elastic or viscoelastic properties of deeper structures in the eye or behind the eye, and this should be further investigated. Many of the proposed new parameters present comparable or even higher repeatability than the standard parameters provided by the Corvis ST. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.

  15. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the 'digital future' and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data were triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ's online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  16. Characteristics of x-ray radiation from a gas-puff z-pinch plasma

    International Nuclear Information System (INIS)

    Akiyama, N.; Takasugi, K.

    2002-01-01

    Characteristics of the x-ray radiation from an Ar gas-puff z-pinch plasma have been investigated by changing the delay time between gas puffing and the discharge. An intense cloud structure was observed in the x-ray image at small delay times, but the total x-ray signal was not very intense. The x-ray signal increased with increasing delay time, and the hot spots in the x-ray image also became more intense. The electron temperature was evaluated from x-ray spectroscopic data, and no significant difference in temperature was observed. (author)

  17. Contributed Review: The novel gas puff targets for laser-matter interaction experiments

    Energy Technology Data Exchange (ETDEWEB)

    Wachulak, Przemyslaw W., E-mail: wachulak@gmail.com [Institute of Optoelectronics, Military University of Technology, Ul. Gen. S. Kaliskiego 2, 00-908 Warsaw (Poland)

    2016-09-15

    Various types of targetry are used nowadays in laser matter interaction experiments. Such targets are characterized using different methods capable of acquiring information about the targets such as density, spatial distribution, and temporal behavior. In this mini-review paper, a particular type of target will be presented. The targets under consideration are gas puff targets of various and novel geometries. Those targets were investigated using extreme ultraviolet (EUV) and soft X-ray (SXR) imaging techniques, such as shadowgraphy, tomography, and pinhole camera imaging. Details about characterization of those targets in the EUV and SXR spectral regions will be presented.

  18. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  19. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  20. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  1. MCNP model for the many KE-Basin radiation sources

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1997-01-01

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with

  2. Open source data assimilation framework for hydrological modeling

    Science.gov (United States)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed data retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated in hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break DA down into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models capable of all these tasks already exists: OpenMI. OpenMI is an open-source standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data at runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough that models can interact even if they are coded in different languages.
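
    The create/propagate/get-set/free cycle described above is easy to sketch. The Model class, its method names, and the scalar Kalman update below are hypothetical stand-ins chosen for illustration, not the actual OpenDA or OpenMI API:

        # Minimal sketch of an OpenDA/OpenMI-style assimilation loop; the toy
        # reservoir model and all names are invented for illustration.
        import numpy as np

        class Model:
            """Toy linear reservoir: storage decays and receives inflow."""
            def __init__(self, storage=10.0, recession=0.9):
                self.state = np.array([storage])
                self.recession = recession

            def propagate(self, inflow):
                # advance the model one time step
                self.state = self.recession * self.state + inflow

            def get_state(self):
                return self.state.copy()

            def set_state(self, state):
                # the DA framework writes the updated state back
                self.state = state.copy()

        def assimilate(forecast, obs, model_var=1.0, obs_var=0.5):
            """Scalar Kalman update: blend forecast and observation by variance."""
            gain = model_var / (model_var + obs_var)
            return forecast + gain * (obs - forecast)

        model = Model()
        observations = {3: 7.2, 7: 5.1}          # sparse gauge data by time step
        for t in range(10):
            model.propagate(inflow=1.0)
            if t in observations:                # update only when data exist
                model.set_state(assimilate(model.get_state(), observations[t]))
        print("final storage:", model.get_state())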

  3. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves.
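
    For readers unfamiliar with the parameterization, one common three-parameter member of the modified-Haskell family of reduced displacement potentials (the Helmberger-Hadley form) is sketched below; whether this exact variant is the one used in the paper is an assumption:

        # One common modified-Haskell RDP controlled by psi_inf, K, and B:
        # psi(t) = psi_inf * (1 - exp(-Kt) * (1 + Kt + (Kt)^2/2 - B*(Kt)^3)).
        # psi -> psi_inf as t grows; K sets the corner, B the overshoot.
        import numpy as np

        def rdp(t, psi_inf=1.0, K=8.0, B=1.0):
            kt = K * t
            return psi_inf * (1.0 - np.exp(-kt) * (1.0 + kt + 0.5 * kt**2 - B * kt**3))

        t = np.linspace(0.0, 2.0, 500)
        psi = rdp(t)
        print("steady level: %.3f, peak overshoot: %.3f" % (psi[-1], psi.max()))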

  4. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  5. Effects of ascorbic acid, transglutaminase and margarine amounts on the quality of puff pastry made from spelt flour

    Directory of Open Access Journals (Sweden)

    Šimurina Olivera D.

    2015-01-01

    Puff pastry has a delicate and flaky texture which comes from a unique combination of fat and dough. These bakery products are made from many thin layers of dough separated by alternating fat layers, because of which they are considered a high-fat food. The properties of puff pastry depend mostly on the quality of the flour, which must be specifically tailored for this purpose. The most commonly used flour in the production of puff pastry is refined wheat flour. Lately, consumer demand for healthier bakery products has met with a strong response in the baking industry, and new products made with ingredients of high nutritional value are appearing on the market. This paper presents an optimization of the composition of puff pastry made from spelt flour by varying the amounts of ingredients such as margarine, ascorbic acid and the enzyme transglutaminase. The optimal ratio of these ingredients was based on consideration of their main and interaction effects. During the optimization of spelt puff pastry quality, the following goals were set: maximum volume, minimum firmness and maximum overall acceptability. The optimal solutions were in the concentration range from 3.60 mg/kg to 10 mg/kg for ascorbic acid, from 0.03 mg/kg to 3 mg/kg for transglutaminase and from 29.84 to 30% for margarine on dough basis. It is recommended that the composition of spelt puff pastry include 10 mg/kg ascorbic acid, 0.03 mg/kg transglutaminase and 30% margarine on dough basis to provide the desired product characteristics.

  6. Open Sourcing Social Change: Inside the Constellation Model

    OpenAIRE

    Tonya Surman; Mark Surman

    2008-01-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a ...

  8. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    Science.gov (United States)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  9. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
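
    The core of the harmonically constrained approach can be illustrated with a stripped-down factorisation: a fixed dictionary of strictly harmonic spectra on a linear frequency axis, with only the activations fitted by multiplicative updates. The pitch set, peak widths, and synthetic mixture below are invented for the example; real shift-invariant tensor models are considerably richer.

        # Harmonic-dictionary NMF sketch: W is fixed (strict harmonicity),
        # only the activations H are updated (Euclidean multiplicative rule).
        import numpy as np

        n_bins, n_frames, n_harm = 512, 100, 10
        fs, f0s = 16000.0, np.array([220.0, 330.0, 440.0])
        freqs = np.linspace(0, fs / 2, n_bins)

        W = np.zeros((n_bins, f0s.size))        # strictly harmonic basis
        for j, f0 in enumerate(f0s):
            for h in range(1, n_harm + 1):      # Gaussian peak at each partial
                W[:, j] += np.exp(-((freqs - h * f0) ** 2) / (2 * 20.0 ** 2)) / h

        rng = np.random.default_rng(5)
        H_true = rng.random((f0s.size, n_frames)) * (rng.random((f0s.size, n_frames)) > 0.5)
        V = W @ H_true + 1e-3                   # synthetic magnitude spectrogram

        H = np.full_like(H_true, 0.5)
        for _ in range(200):                    # multiplicative update, W fixed
            H *= (W.T @ V) / (W.T @ (W @ H) + 1e-12)
        print("activation error:", np.abs(H - H_true).mean())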

  11. Characterization of a plasma produced using a high power laser with a gas puff target for x-ray laser experiments

    International Nuclear Information System (INIS)

    Fiedorowicz, H.; Bartnik, A.; Gac, K.; Parys, P.; Szczurek, M.; Tyl, J.

    1995-01-01

    A high temperature, high density plasma can be produced by using a nanosecond, high-power laser with a gas puff target. The gas puff target is formed by puffing a small amount of gas from a high-pressure reservoir through a nozzle into a vacuum chamber. In this paper we present a gas puff target specially designed for x-ray laser experiments. The solenoid valve, with a nozzle in the form of a slit 0.3 mm wide and up to 40 mm long, allows the formation of an elongated gas puff suitable for creating an x-ray laser active medium by perpendicular irradiation with a laser beam focused to a line. Preliminary results of experiments on laser irradiation of the gas puff targets produced by the new valve show that a hot plasma suitable for x-ray lasers is created.

  12. The dynamics of gas-puff imploding plasmas on the NRL Gamble II Generator

    International Nuclear Information System (INIS)

    Stephanakis, S.J.; Boller, J.R.; Hinshelwood, D.D.; McDonald, S.W.; Mehlman, C.G.; Ottinger, P.F.; Young, F.C.

    1985-01-01

    The experimental study of imploding plasma loads on the NRL Gamble II generator was initiated more than a year ago. Preliminary results, including scaling laws for K-line radiation output from neon puffs and the effect of plasma erosion opening switches (PEOS's) on the x-ray yields and the pinch quality, were reported during the past year. In order to better understand the implosion dynamics of such plasmas, time-resolved photographs have been taken of the implosion history. In contrast with time-integrated x-ray pinhole photographs, the time-resolved visible-light pictures indicate that the implosion phase is essentially instability-free, while pinching and flaring occur at late times during the blow-up phase. Furthermore, these visible-light framing photographs clearly show that the discharge is flared out toward the anode at early times and becomes cylindrical at implosion. This so-called "zipper effect" has been seen in previous argon-puff experiments and is due to the non-uniform initial distribution of gas across the anode-cathode gap. The authors present comparisons of time-resolved photographs taken both in visible and x-ray light, along with x-ray spectra taken with and without PEOS's. The implications of these data are discussed in view of the present theoretical understanding of the plasma implosion dynamics.

  14. Recent Advances with the AMPX Covariance Processing Capabilities in PUFF-IV

    International Nuclear Information System (INIS)

    Wiarda, Dorothea; Arbanas, Goran; Leal, Luiz C.; Dunn, Michael E.

    2008-01-01

    The program PUFF-IV is used to process resonance parameter covariance information given in ENDF/B File 32 and point-wise covariance matrices given in ENDF/B File 33 into group-averaged covariance matrices on a user-supplied group structure. For large resonance covariance matrices, found for example in 235U, the execution time of PUFF-IV can be quite long. Recently the code was modified to take advantage of Basic Linear Algebra Subprograms (BLAS) routines for the most time-consuming matrix multiplications, which led to a substantial decrease in execution time. This faster processing capability allowed us to investigate the conversion of File 32 data into File 33 data using a larger number of user-defined groups. While conversion substantially reduces the ENDF/B file size requirements for evaluations with a large number of resonances, a trade-off is made between the number of groups used to represent the resonance parameter covariance as a point-wise covariance matrix and the file size. We are also investigating a hybrid version of the conversion, in which the low-energy part of the File 32 resonance parameter covariance matrix is retained and the correlations with higher energies, as well as the high-energy part, are given in File 33.
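
    The expensive step the BLAS rework targets is, in essence, a sandwich product that collapses a fine covariance matrix onto a coarse group structure. The toy below reproduces that operation with numpy, whose matmul dispatches to BLAS; sizes, weights, and group boundaries are invented and do not reflect PUFF-IV internals.

        # Collapse a fine covariance matrix C to a group structure: C_g = S C S^T.
        import numpy as np

        n_fine, n_groups = 1200, 44
        rng = np.random.default_rng(0)
        A = rng.normal(size=(n_fine, n_fine))
        C_fine = A @ A.T / n_fine              # symmetric positive semi-definite

        edges = np.linspace(0, n_fine, n_groups + 1).astype(int)
        S = np.zeros((n_groups, n_fine))
        for g in range(n_groups):              # averaging operator per group
            lo, hi = edges[g], edges[g + 1]    # (uniform weights for simplicity)
            S[g, lo:hi] = 1.0 / (hi - lo)

        C_group = S @ C_fine @ S.T             # two BLAS gemm calls
        print(C_group.shape)                   # (44, 44)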

  15. Study of soft X-ray energy spectra from gas-puff Z-pinch plasma

    International Nuclear Information System (INIS)

    Zou Xiaobing; Wang Xinxin; Zhang Guixin; Han Min; Luo Chengmu

    2006-01-01

    A ROSS-FILTER-PIN spectrometer covering the spectral range 0.28 keV-1.56 keV was developed to study the soft X-ray radiation emitted from gas-puff Z-pinch plasma. It is composed of five channels covering the energy interval of interest without gaps. The soft X-ray spectral energy cuts were determined by the L absorption edges of the selected filter elements (K absorption edges being used for light filter elements), and the optimum thickness of the filter material was designed using a computer code. To minimize the residual sensitivity outside the sensitivity range of each channel, the element of the first filter was added into the second filter of each Ross pair. To reduce the area of each filter, a PIN detector with a small sensitive area of 1 mm² was adopted for the spectrometer. A filter with a small area is easy to fabricate and better withstands the Z-pinch discharge shock wave. With this ROSS-FILTER-PIN spectrometer, the energy spectra of soft X-rays from a small gas-puff Z-pinch were investigated, and the correlation between the soft X-ray yield and the plasma implosion state was also studied. (authors)
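
    The differencing principle of a Ross pair can be demonstrated with toy attenuation curves: two filters matched so their transmissions agree everywhere except between their absorption edges, so the difference signal isolates that band. The mu(E) power laws, edges, and thickness below are invented, not real cross-section data.

        # Ross pair concept: band = T_b - T_a is nonzero only between the edges.
        import numpy as np

        E = np.linspace(0.2, 2.0, 400)          # photon energy (keV)

        def mu(E, edge, jump=8.0):
            """Toy mass attenuation: E^-3 falloff with a step at the edge."""
            return E ** -3.0 * np.where(E >= edge, jump, 1.0)

        thickness = 0.08                        # equal for both filters
        T_a = np.exp(-thickness * mu(E, edge=0.8))   # filter A, edge at 0.8 keV
        T_b = np.exp(-thickness * mu(E, edge=1.0))   # filter B, edge at 1.0 keV
        band = T_b - T_a                        # channel response of the pair
        print("peak response at %.2f keV" % E[np.argmax(band)])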

  16. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models), and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an averaged area-under-curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states.
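
    The paper's model deviation index (MDI) quantifies how well a fixed ICA model explains new EEG windows; its exact formula is not reproduced in the abstract. As a hedged stand-in, the sketch below trains FastICA on an "alert" segment and scores later windows by the residual correlation among unmixed sources, since a well-fitting unmixing should leave them nearly uncorrelated.

        # Goodness-of-fit proxy for a fixed ICA model on synthetic "EEG" data.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(6)
        n_ch, n_s = 8, 4000
        S_alert = rng.laplace(size=(n_s, n_ch))      # super-Gaussian sources
        A_alert = rng.normal(size=(n_ch, n_ch))      # alert-state mixing
        A_drowsy = rng.normal(size=(n_ch, n_ch))     # a different state

        ica = FastICA(n_components=n_ch, random_state=0)
        ica.fit(S_alert @ A_alert.T)                 # train on alert data only

        def deviation(X):
            U = ica.transform(X)                     # unmix with the alert model
            C = np.corrcoef(U.T)
            off = C[~np.eye(n_ch, dtype=bool)]
            return np.abs(off).mean()                # residual source correlation

        X_alert = rng.laplace(size=(1000, n_ch)) @ A_alert.T
        X_drowsy = rng.laplace(size=(1000, n_ch)) @ A_drowsy.T
        print("alert window :", deviation(X_alert))  # expected: smaller
        print("drowsy window:", deviation(X_drowsy)) # expected: larger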

  17. Modelling of contaminant transfers in a ventilated room in the near-field of an accidental emission source

    Energy Technology Data Exchange (ETDEWEB)

    Guerra, D.

    2004-11-15

    Nowadays, predicting the space-time evolution of a pollutant released in a ventilated room that houses a process operation remains hard to achieve. However, this prediction is imperative in hazardous activities, such as nuclear ones. The study consists of predicting the space-time evolution of airborne contaminant dispersion in the near field of the emission source around a workplace, following an accidental rupture of a containment enclosure. The whole work is based on experiments of gas tracing, and on multidimensional simulations using CFD tools. The proposed model is written as a correlated function of various parameters: leak geometry (slot or circular opening), emission type (continuous or puff), initial velocity and emission duration. The influence of ventilation and obstructions (room walls) has also been studied in the case of continuous leaks. All final models, for gaseous pollutants, are written as correlations inspired by the theory of free turbulent jet flows. These models are easy to use within the framework of safety evaluations dealing with radioactive material containment and radiological protection inside nuclear facilities. (author)

  18. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth, as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
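
    Because surface displacement is linear in each buried point source's volume change (for a standard Mogi half-space kernel), the inversion reduces, per epoch, to linear least squares. The toy below recovers two invented sources from synthetic vertical displacements; the geometry, noise, and two-source layout are assumptions, not the paper's configuration.

        # Recover point-source volume changes from synthetic uplift profiles.
        import numpy as np

        nu = 0.25                               # Poisson's ratio

        def mogi_uz(x_obs, x_src, depth):
            """Vertical displacement per unit volume change (Mogi kernel)."""
            r2 = (x_obs - x_src) ** 2 + depth ** 2
            return (1.0 - nu) * depth / (np.pi * r2 ** 1.5)

        x = np.linspace(-10e3, 10e3, 80)        # surface observation points (m)
        sources = [(0.0, 1000.0), (-7000.0, 4500.0)]   # (x, depth): shallow + deep
        G = np.column_stack([mogi_uz(x, xs, d) for xs, d in sources])

        true_dV = np.array([4e5, 1e5])          # m^3, shallow source dominates
        uz = G @ true_dV + np.random.default_rng(1).normal(0, 1e-3, x.size)

        dV_hat, *_ = np.linalg.lstsq(G, uz, rcond=None)
        print("recovered volume changes:", dV_hat)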

  19. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  20. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

    Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful.
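
    A drastically reduced analogue of the multi-term least-squares retrieval is sketched below: a linear "transport" operator maps gridded emissions to observations, and a Tikhonov term plays the role of the a priori constraints. The operator, grid, and weights are all invented; the real algorithm works through adjoint operators on a global transport model.

        # Toy regularized emission inversion, starting from zero emissions.
        import numpy as np

        rng = np.random.default_rng(2)
        n_src, n_obs = 40, 60
        H = np.abs(rng.normal(size=(n_obs, n_src)))  # stand-in transport operator
        true_e = np.zeros(n_src); true_e[[8, 27]] = [3.0, 1.5]   # two hotspots
        y = H @ true_e + rng.normal(0, 0.05, n_obs)  # noisy "satellite" data

        gamma = 0.5                                  # a priori constraint weight
        L = np.eye(n_src)                            # identity: damp toward zero
        # min ||H e - y||^2 + gamma^2 ||L e||^2 via stacked least squares
        A = np.vstack([H, gamma * L])
        b = np.concatenate([y, np.zeros(n_src)])
        e_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("peak recovered sources at indices:", np.argsort(e_hat)[-2:])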

  1. Low-level radioactive waste performance assessments: Source term modeling

    International Nuclear Information System (INIS)

    Icenhour, A.S.; Godbee, H.W.; Miller, L.F.

    1995-01-01

    Low-level radioactive wastes (LLW) generated by government and commercial operations need to be isolated from the environment for at least 300 to 500 yr. Most existing sites for the storage or disposal of LLW employ the shallow-land burial approach. However, the U.S. Department of Energy currently emphasizes the use of engineered systems (e.g., packaging, concrete and metal barriers, and water collection systems). Future commercial LLW disposal sites may include such systems to mitigate radionuclide transport through the biosphere. Performance assessments must be conducted for LLW disposal facilities. These studies include comprehensive evaluations of radionuclide migration from the waste package, through the vadose zone, and within the water table. Atmospheric transport mechanisms are also studied. Estimates of the release of radionuclides from the waste packages (i.e., source terms) are used for subsequent hydrogeologic calculations required by a performance assessment. Computer models are typically used to describe the complex interactions of water with LLW and to determine the transport of radionuclides. Several commonly used computer programs for evaluating source terms include GWSCREEN, BLT (Breach-Leach-Transport), DUST (Disposal Unit Source Term), and BARRIER, as well as SOURCE1 and SOURCE2 (which are used in this study). The SOURCE1 and SOURCE2 codes were prepared by Rogers and Associates Engineering Corporation for the Oak Ridge National Laboratory (ORNL). SOURCE1 is designed for tumulus-type facilities, and SOURCE2 is tailored for silo, well-in-silo, and trench-type disposal facilities. This paper focuses on the source term for ORNL disposal facilities, and it describes improved computational methods for determining radionuclide transport from waste packages.

  2. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
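
    The reconstruction idea, posing the coded-source system as a linear model and solving it by iterative least squares, can be shown in one dimension. The mask pattern, sizes, noise, and the choice of a Landweber iteration below are illustrative assumptions, not the authors' actual system model.

        # 1-D coded-source toy: data = object convolved with the mask pattern;
        # recover the object with Landweber iterations x += step * A^T (y - A x).
        import numpy as np

        rng = np.random.default_rng(3)
        mask = rng.integers(0, 2, 15).astype(float)  # stand-in coded aperture
        obj = np.zeros(120); obj[40:45] = 1.0; obj[80] = 2.0

        def forward(x):                              # A x : convolve with mask
            return np.convolve(x, mask, mode="same")

        def adjoint(y):                              # A^T y : correlate with mask
            return np.convolve(y, mask[::-1], mode="same")

        data = forward(obj) + rng.normal(0, 0.05, obj.size)

        x = np.zeros_like(obj)
        step = 1.0 / (mask.sum() ** 2)               # <= 1/||A||^2 for stability
        for _ in range(500):
            x += step * adjoint(data - forward(x))
        print("reconstruction peak near index:", int(np.argmax(x)))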

  3. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
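
    A minimal sketch of the conditioning loop is given below, under the assumption of exponential transfer functions in distance and elevation drop; the paper's exact functional forms, calibration, and data are not reproduced, and every number here is invented.

        # Topofilter-style ensemble: draw transfer parameters, keep runs that
        # reproduce the observed load, and tally which pixels carry 90% of it.
        import numpy as np

        rng = np.random.default_rng(4)
        n_pix = 5000
        erosion = rng.lognormal(0.0, 1.0, n_pix)     # annual soil erosion per pixel
        dist = rng.uniform(10, 2000, n_pix)          # distance to channel (m)
        drop = rng.uniform(0.5, 80, n_pix)           # elevation change to channel (m)
        observed_load = 3000.0                       # gauged load at the outlet

        hits, kept = np.zeros(n_pix), 0
        for _ in range(2000):                        # the paper runs 10,000 draws
            a, b = rng.uniform(1e-4, 1e-2, 2)        # two transfer-function params
            load = erosion * np.exp(-a * dist - b * drop)
            if abs(load.sum() - observed_load) / observed_load < 0.1:
                kept += 1
                order = np.argsort(load)[::-1]       # pixels carrying 90% of load
                top = order[np.cumsum(load[order]) <= 0.9 * load.sum()]
                hits[top] += 1
        print(kept, "conditioned runs; most frequent source pixel:", int(hits.argmax()))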

  4. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of an LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility.

  5. Open source Modeling and optimization tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  6. Mathematical models for atmospheric pollutants. Appendix D. Available air quality models. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; McNaughton, D.J.; Huang, C.

    1979-08-01

    Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models
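
    Since local plume and puff models recur throughout these records, the textbook Gaussian puff kernel with ground reflection is worth writing out; the fixed sigma values below stand in for a stability-dependent parameterization such as Pasquill-Gifford and are purely illustrative.

        # Gaussian puff from a single instantaneous release Q, advected at
        # speed u, with an image source at -H to enforce ground reflection.
        import numpy as np

        def puff_concentration(x, y, z, t, Q=1.0, u=5.0, H=50.0,
                               sx=30.0, sy=30.0, sz=15.0):
            norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
            along = np.exp(-((x - u * t) ** 2) / (2 * sx ** 2))
            cross = np.exp(-(y ** 2) / (2 * sy ** 2))
            vert = (np.exp(-((z - H) ** 2) / (2 * sz ** 2))
                    + np.exp(-((z + H) ** 2) / (2 * sz ** 2)))  # mirror source
            return norm * along * cross * vert

        # ground-level concentration under the puff centre 60 s after release
        print(puff_concentration(x=300.0, y=0.0, z=0.0, t=60.0))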

  7. The Growth of open source: A look at how companies are utilizing open source software in their business models

    OpenAIRE

    Feare, David

    2009-01-01

    This paper examines how open source software is being incorporated into the business models of companies in the software industry. The goal is to answer the question of whether the open source model can help sustain economic growth. While some companies are able to maintain a "pure" open source approach with their business model, the reality is that most companies are relying on proprietary add-on value in order to generate revenue because open source itself is simply not big business. Ultima...

  8. Mitigating Spreadsheet Model Risk with Python Open Source Infrastructure

    OpenAIRE

    Beavers, Oliver

    2018-01-01

    Across an aggregation of EuSpRIG presentation papers, two maxims hold true: spreadsheet models are akin to software, yet spreadsheet developers are not software engineers. As such, the lack of traditional software engineering tools and protocols invites a higher rate of error in the end result. This paper lays the groundwork for spreadsheet modelling professionals to develop reproducible audit tools using freely available, open source packages built with the Python programming language, enabling...
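
    In the spirit of the paper, a few lines of openpyxl suffice to inventory every formula in a workbook for review outside the spreadsheet; the workbook contents below are invented, and this is a sketch of one possible audit primitive rather than the paper's tooling.

        # Build a tiny workbook in memory, then list every formula cell.
        from openpyxl import Workbook

        wb = Workbook()
        ws = wb.active
        ws["A1"] = 1200.0                      # input value
        ws["A2"] = 0.07                        # rate
        ws["A3"] = "=A1*(1+A2)"                # model formula
        ws["A4"] = "=A3-A1"                    # derived quantity

        def formula_inventory(sheet):
            """Yield (coordinate, formula) for every formula cell on the sheet."""
            for row in sheet.iter_rows():
                for cell in row:
                    if isinstance(cell.value, str) and cell.value.startswith("="):
                        yield cell.coordinate, cell.value

        for coord, formula in formula_inventory(ws):
            print(coord, formula)              # A3 =A1*(1+A2), A4 =A3-A1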

  9. OSeMOSYS: The Open Source Energy Modeling System

    International Nuclear Information System (INIS)

    Howells, Mark; Rogner, Holger; Strachan, Neil; Heaps, Charles; Huntington, Hillard; Kypreos, Socrates; Hughes, Alison; Silveira, Semida; DeCarolis, Joe; Bazillian, Morgan; Roehrl, Alexander

    2011-01-01

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, algebraic formulation, and implementation (in terms of its full source code), as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models that do not have this emphasis on compactness and openness make the barrier to entry for new users much higher, as well as making the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts, including adding functionality to the LEAP model. - Highlights: → OSeMOSYS is a new free and open source energy systems model. → The model is written in a simple, open, flexible and transparent manner to support teaching. → OSeMOSYS is based on free software and optimizes using a free solver. → The model replicates the results of many popular tools, such as MARKAL. → A link between OSeMOSYS and LEAP has been developed.
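
    To make the "plain English plus algebra" flavour concrete, here is a toy capacity-and-dispatch linear program of the kind OSeMOSYS formalizes, written with scipy; all costs, demands, and availability factors are invented, and the real model is formulated far more generally over sets of years, technologies, and time slices.

        # Choose capacity for two technologies and dispatch in two seasons at
        # minimum cost, subject to meeting demand and availability limits.
        from scipy.optimize import linprog

        # variables: [cap1, cap2, gen1_s1, gen2_s1, gen1_s2, gen2_s2]
        capex = [100.0, 60.0]                  # cost per unit capacity
        opex = [1.0, 8.0]                      # cost per unit generation
        demand = [90.0, 120.0]                 # per season
        c = capex + [opex[0], opex[1], opex[0], opex[1]]

        A_ub, b_ub = [], []
        for s, avail in enumerate([(1.0, 0.9), (1.0, 0.4)]):  # availability
            for tech in range(2):              # gen <= availability * capacity
                row = [0.0] * 6
                row[tech] = -avail[tech]
                row[2 + 2 * s + tech] = 1.0
                A_ub.append(row); b_ub.append(0.0)
        A_eq = [[0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1]]       # meet demand
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand,
                      bounds=[(0, None)] * 6)
        print("capacities:", res.x[:2].round(2), "total cost:", round(res.fun, 1))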

  10. MODEL OF A PERSON WALKING AS A STRUCTURE-BORNE SOUND SOURCE

    DEFF Research Database (Denmark)

    Lievens, Matthias; Brunskog, Jonas

    2007-01-01

    ... has to be considered and the contact history must be integrated in the model. This is complicated by the fact that nonlinearities occur at different stages in the system, either on the source or receiver side. Not only lightweight structures but also soft floor coverings would benefit from an accurate ...

  11. Modeling Noise Sources and Propagation in External Gear Pumps

    Directory of Open Access Journals (Sweden)

    Sangbeom Woo

    2017-07-01

    As a key component in power transfer, positive displacement machines often represent the major source of noise in hydraulic systems. Thus, investigation into the sources of noise and discovery of strategies to reduce noise is a key part of improving the performance of current hydraulic systems, as well as applying fluid power systems to a wider range of applications. The present work aims at developing modeling techniques on the topic of noise generation caused by external gear pumps for high pressure applications, which can be useful and effective in investigating the interaction between noise sources and radiated noise and in establishing design guides for a quiet pump. In particular, this study classifies the internal noise sources into four types of effective load functions and, in the proposed model, these load functions are applied to the corresponding areas of the pump case in a realistic way. Vibration and sound radiation can then be predicted using a combined finite element and boundary element vibro-acoustic model. The radiated sound power and sound pressure for different operating conditions are presented as the main outcomes of the acoustic model. The noise prediction was validated through comparison with experimentally measured sound power levels.

  12. Modeling of an autonomous microgrid for renewable energy sources integration

    DEFF Research Database (Denmark)

    Serban, I.; Teodorescu, Remus; Guerrero, Josep M.

    2009-01-01

    The frequency stability analysis in an autonomous microgrid (MG) with renewable energy sources (RES) is a continuously studied issue. This paper presents an original method for modeling an autonomous MG with a battery energy storage system (BESS) and a wind power plant (WPP), with the purpose...

  13. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuel, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel / jet fuel derived from natural gas, gasoline, and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested, along with existing models, using SOA data from smog chamber experiments conducted using evaporated fuel (this work: gasoline, Fischer-Tropsch fuels, jet fuel, diesel) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations.

  14. Comparison of beam emission spectroscopy and gas puff imaging edge fluctuation measurements in National Spherical Torus Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Sechrest, Y.; Munsat, T. [Department of Physics, University of Colorado, Boulder, Colorado 80309 (United States); Smith, D. [Department of Engineering Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Stotler, D. P.; Zweben, S. J. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540 (United States)

    2015-05-15

    In this study, the close physical proximity of the Gas Puff Imaging (GPI) and Beam Emission Spectroscopy (BES) diagnostics on the National Spherical torus Experiment (NSTX) is leveraged to directly compare fluctuation measurements, and to study the local effects of the GPI neutral deuterium puff during H-mode plasmas without large Edge Localized Modes. The GPI and BES views on NSTX provide partially overlapping coverage of the edge and scrape-off layer (SOL) regions above the outboard midplane. The separation in the toroidal direction is 16°, and field lines passing through diagnostic views are separated by ∼20 cm in the direction perpendicular to the magnetic field. Strong cross-correlation is observed, and strong cross-coherence is seen for frequencies between 5 and 15 kHz. Also, probability distribution functions of fluctuations measured ∼3 cm inside the separatrix exhibit only minor deviations from a normal distribution for both diagnostics, and good agreement between correlation length estimates, decorrelation times, and structure velocities is found at the ±40% level. While the two instruments agree closely in many respects, some discrepancies are observed. Most notably, GPI normalized fluctuation levels exceed BES fluctuations by a factor of ∼9. BES mean intensity is found to be sensitive to the GPI neutral gas puff, and BES normalized fluctuation levels for frequencies between 1 and 10 kHz are observed to increase during the GPI puff.

  16. Source modelling in seismic risk analysis for nuclear power plants

    International Nuclear Information System (INIS)

    Yucemen, M.S.

    1978-12-01

    The proposed probabilistic procedure provides a consistent method for the modelling, analysis and updating of uncertainties that are involved in the seismic risk analysis for nuclear power plants. The potential earthquake activity zones are idealized as point, line or area sources. For these seismic source types, expressions to evaluate their contribution to seismic risk are derived, considering all the possible site-source configurations. The seismic risk at a site is found to depend not only on the inherent randomness of the earthquake occurrences with respect to magnitude, time and space, but also on the uncertainties associated with the predicted values of the seismic and geometric parameters, as well as the uncertainty in the attenuation model. The uncertainty due to the attenuation equation is incorporated into the analysis through the use of random correction factors. The influence of the uncertainty resulting from insufficient information on the seismic parameters and source geometry is introduced into the analysis by computing a mean risk curve averaged over the various alternative assumptions on the parameters and source geometry. A seismic risk analysis is carried out for the city of Denizli, which is located in the seismically most active zone of Turkey. The second analysis is for Akkuyu.
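
    The structure of the risk computation for a single idealized point source can be written down directly: an activity rate times the probability, integrated over magnitude, that ground motion at the site exceeds a target level. Everything numerical below, including the attenuation relation, its scatter, and the recurrence parameters, is invented for illustration.

        # Annual exceedance rate for one point source at a fixed distance.
        import numpy as np
        from scipy.stats import norm

        rate, m_min, m_max, beta = 0.2, 4.0, 7.5, 2.0   # activity and magnitude law
        r_km = 30.0                                     # fixed source-site distance

        m = np.linspace(m_min, m_max, 400)
        dm = m[1] - m[0]
        pdf = beta * np.exp(-beta * (m - m_min))        # truncated exponential pdf
        pdf /= pdf.sum() * dm

        def atten_ln_mean(m, r):
            """Toy attenuation law: ln(acceleration in g) vs magnitude, distance."""
            return -3.5 + 1.2 * m - 1.3 * np.log(r + 10.0)

        def annual_exceedance(a_target, sigma_ln=0.6):
            # uncertainty in the attenuation model enters through sigma_ln
            p = 1.0 - norm.cdf(np.log(a_target), atten_ln_mean(m, r_km), sigma_ln)
            return rate * (p * pdf).sum() * dm

        print("annual rate of A > 0.1 g:", annual_exceedance(0.1))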

  17. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world, rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in Europe).

  18. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  19. BREEDING SUPER-EARTHS AND BIRTHING SUPER-PUFFS IN TRANSITIONAL DISKS

    International Nuclear Information System (INIS)

    Lee, Eve J.; Chiang, Eugene

    2016-01-01

    The riddle posed by super-Earths (1–4 R⊕, 2–20 M⊕) is that they are not Jupiters: their core masses are large enough to trigger runaway gas accretion, yet somehow super-Earths accreted atmospheres that weigh only a few percent of their total mass. We show that this puzzle is solved if super-Earths formed late, as the last vestiges of their parent gas disks were about to clear. This scenario would seem to present fine-tuning problems, but we show that there are none. Ambient gas densities can span many (in one case up to 9) orders of magnitude, and super-Earths can still robustly emerge after ∼0.1–1 Myr with percent-by-weight atmospheres. Super-Earth cores are naturally bred in gas-poor environments where gas dynamical friction has weakened sufficiently to allow constituent protocores to gravitationally stir one another and merge. So little gas is present at the time of core assembly that cores hardly migrate by disk torques: formation of super-Earths can be in situ. The basic picture—that close-in super-Earths form in a gas-poor (but not gas-empty) inner disk, fed continuously by gas that bleeds inward from a more massive outer disk—recalls the largely evacuated but still accreting inner cavities of transitional protoplanetary disks. We also address the inverse problem presented by super-puffs: an uncommon class of short-period planets seemingly too voluminous for their small masses (4–10 R⊕, 2–6 M⊕). Super-puffs most easily acquire their thick atmospheres as dust-free, rapidly cooling worlds outside ∼1 AU where nebular gas is colder, less dense, and therefore less opaque. Unlike super-Earths, which can form in situ, super-puffs probably migrated in to their current orbits; they are expected to form the outer links of mean-motion resonant chains, and to exhibit greater water content. We close by confronting observations and itemizing remaining questions.

  20. BREEDING SUPER-EARTHS AND BIRTHING SUPER-PUFFS IN TRANSITIONAL DISKS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eve J.; Chiang, Eugene, E-mail: evelee@berkeley.edu, E-mail: echiang@astro.berkeley.edu [Department of Astronomy, University of California Berkeley, Berkeley, CA 94720-3411 (United States)

    2016-02-01

    The riddle posed by super-Earths (1–4 R⊕, 2–20 M⊕) is that they are not Jupiters: their core masses are large enough to trigger runaway gas accretion, yet somehow super-Earths accreted atmospheres that weigh only a few percent of their total mass. We show that this puzzle is solved if super-Earths formed late, as the last vestiges of their parent gas disks were about to clear. This scenario would seem to present fine-tuning problems, but we show that there are none. Ambient gas densities can span many (in one case up to 9) orders of magnitude, and super-Earths can still robustly emerge after ∼0.1–1 Myr with percent-by-weight atmospheres. Super-Earth cores are naturally bred in gas-poor environments where gas dynamical friction has weakened sufficiently to allow constituent protocores to gravitationally stir one another and merge. So little gas is present at the time of core assembly that cores hardly migrate by disk torques: formation of super-Earths can be in situ. The basic picture—that close-in super-Earths form in a gas-poor (but not gas-empty) inner disk, fed continuously by gas that bleeds inward from a more massive outer disk—recalls the largely evacuated but still accreting inner cavities of transitional protoplanetary disks. We also address the inverse problem presented by super-puffs: an uncommon class of short-period planets seemingly too voluminous for their small masses (4–10 R⊕, 2–6 M⊕). Super-puffs most easily acquire their thick atmospheres as dust-free, rapidly cooling worlds outside ∼1 AU where nebular gas is colder, less dense, and therefore less opaque. Unlike super-Earths, which can form in situ, super-puffs probably migrated in to their current orbits; they are expected to form the outer links of mean-motion resonant chains, and to exhibit greater water content. We close by confronting observations and itemizing remaining questions.

  1. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
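
    As a generic illustration of the selection principle described above (held-out likelihood instead of the Akaike Information Criterion), the sketch below picks a model order by cross-validated log-likelihood using scikit-learn's probabilistic-PCA score; it is not the authors' mixed ICA/PCA implementation, and the data are synthetic:

        # Choose a model order by held-out log-likelihood rather than AIC.
        # Generic sketch only; not the mixed ICA/PCA algorithm itself.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # synthetic mixtures

        best_k, best_ll = None, -np.inf
        for k in range(1, 10):
            fold_ll = []
            for train, test in KFold(n_splits=5).split(X):
                model = PCA(n_components=k).fit(X[train])
                fold_ll.append(model.score(X[test]))  # mean held-out log-likelihood
            mean_ll = float(np.mean(fold_ll))
            if mean_ll > best_ll:
                best_k, best_ll = k, mean_ll
        print(f"cross-validation selects {best_k} components")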

  2. Absorptivity Measurements and Heat Source Modeling to Simulate Laser Cladding

    Science.gov (United States)

    Wirth, Florian; Eisenbarth, Daniel; Wegener, Konrad

    The laser cladding process is gaining importance, as it allows not only the application of surface coatings but also the additive manufacturing of three-dimensional parts. In both cases, process simulation can contribute to process optimization. Heat source modeling is one of the main issues for an accurate model and simulation of the laser cladding process. While the laser beam intensity distribution is readily known, the other two main influences on the process's heat input are non-trivial to determine: the absorptivity of the applied materials and the attenuation of the laser beam by the powder. Therefore, calorimetry measurements were carried out. The measurement method and the measurement results for laser cladding of Stellite 6 on structural steel S 235 and for the processing of Inconel 625 are presented, both using a CO2 laser and a high power diode laser (HPDL). Additionally, a heat source model is deduced.

  3. Diffusion theory model for optimization calculations of cold neutron sources

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Cold neutron sources are becoming increasingly important and common experimental facilities made available at many research reactors around the world due to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab LD₂ cold source. The simplicity of the model makes it possible to obtain an analytical solution from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. Also, a second, more sophisticated model is described and the results compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations.
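
    For orientation, the kind of two-group diffusion balance the abstract describes can be sketched for a slab moderator as follows; the notation is generic and not taken from the paper (group 1 feeds group 2 through the removal cross section):

        -D_1 \frac{d^2\phi_1}{dx^2} + \Sigma_R\,\phi_1(x) = S_1(x), \qquad
        -D_2 \frac{d^2\phi_2}{dx^2} + \Sigma_{a,2}\,\phi_2(x) = \Sigma_R\,\phi_1(x)

    Solving this pair on a finite slab with the appropriate boundary conditions gives the cold-group flux in closed form, so the thickness that maximizes cold-neutron leakage can be located analytically, which is presumably what makes the optimum accessible to a diffusion-only argument.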

  4. Residential radon in Finland: sources, variation, modelling and dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Arvela, H

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.).

  5. Residential radon in Finland: sources, variation, modelling and dose comparisons

    International Nuclear Information System (INIS)

    Arvela, H.

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.)

  6. Dynamic modeling of the advanced neutron source reactor

    International Nuclear Information System (INIS)

    March-Leuba, J.; Ibn-Khayat, M.

    1990-01-01

    The purpose of this paper is to provide a summary description and some applications of a computer model that has been developed to simulate the dynamic behavior of the advanced neutron source (ANS) reactor. The ANS dynamic model is coded in the advanced continuous simulation language (ACSL), and it represents the reactor core, vessel, primary cooling system, and secondary cooling systems. The use of a simple dynamic model in the early stages of the reactor design has proven very valuable, not only in the development of the control and plant protection system but also in the design of components, such as pumps and heat exchangers, that are usually sized based on steady-state calculations.

  7. Mean shear flow in recirculating turbulent urban convection and the plume-puff eddy structure below stably stratified inversion layers

    Science.gov (United States)

    Fan, Yifan; Hunt, Julian; Yin, Shi; Li, Yuguo

    2018-03-01

    The mean and random components of the velocity field at very low wind speeds in a convective boundary layer (CBL) over a wide urban area are dominated by large eddy structures—either turbulent plumes or puffs. In the mixed layer at either side of the edges of urban areas, local mean recirculating flows are generated by sharp horizontal temperature gradients. These recirculation regions also control the mean shear profile and the bent-over plumes across the mixed layer, extending from the edge to the center of the urban area. A simplified physical model was proposed to calculate the mean flow speed at the edges of urban areas. Water tank experiments were carried out to study the mean recirculating flow and turbulent plume structures. The mean speed at urban edges was measured by particle image velocimetry (PIV), and the plume structures were visualized by thermochromic liquid crystal (TLC) sheets. The horizontal velocity calculated by the physical model at the urban edge agrees well with that measured in the water tank experiments, with a root-mean-square deviation of 0.03. The experiments also show that the pattern of the mean flow over the urban area changes significantly if the shape of the heated area changes or if the heated urban area becomes sub-divided, for example by the creation of nearby but separated "satellite cities." The convective flow over the square urban area is characterized by diagonal inflow at the lower level and side outflow at the upper level. The outflow of the small city can be drawn into the inflow region of the large city in the "satellite city" case. A conceptual analysis shows how these changes significantly affect the patterns of dispersion of pollutants in different types of urban areas.

  8. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Important features of the operation of the electron cyclotron resonance ion source (ECRIS) are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to experiment for a few sources. Changes in the simulated extracted ion currents are obtained by varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  9. Mathematical modelling of electricity market with renewable energy sources

    International Nuclear Information System (INIS)

    Marchenko, O.V.

    2007-01-01

    The paper addresses the electricity market with conventional energy sources based on fossil fuel and non-conventional renewable energy sources (RESs) with stochastic operating conditions. A mathematical model of long-run (accounting for development of generation capacities) equilibrium in the market is constructed. The problem of determining the optimal parameters providing the maximum of a social efficiency criterion is also formulated. The calculations performed have shown that an adequate choice of price cap, environmental tax, subsidies to RESs and consumption tax makes it possible to take into account external effects (environmental damage) and to create incentives for investors to construct conventional and renewable energy sources in an optimal (from society's viewpoint) mix. (author)

  10. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    The geospatial industry is forecast to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  11. a Framework for AN Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  12. Numerical studies of neon gas-puff Z-pinch dynamic processes

    International Nuclear Information System (INIS)

    Ning Cheng; Yang Zhenhua; Ding Ning

    2003-01-01

    Dynamic processes of a neon gas-puff Z-pinch are studied numerically in this paper. A high-temperature, high-density plasma can be generated in the process. Based on physical analysis and assumptions, a set of one-dimensional Lagrangian radiation magnetohydrodynamic (MHD) equations and a corresponding code were developed to solve the problem. Spatio-temporal distributions of the plasma parameters in the process are obtained, and their dynamic variations show that the major results are self-consistent. The time for the plasma to pinch to the centre, as well as the width and the total energy of the x-ray pulse caused by the Z-pinch, are in reasonable agreement with experimental results of GAMBLE-II. A zipping effect is also clearly shown in the simulation.
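
    For context, a common form of the one-dimensional cylindrical Lagrangian radiation-MHD system used in such Z-pinch simulations is sketched below; the notation is generic and may differ from the authors' exact equation set:

        \frac{1}{\rho}\frac{d\rho}{dt} = -\frac{1}{r}\frac{\partial (ru)}{\partial r}, \qquad
        \rho\frac{du}{dt} = -\frac{\partial p}{\partial r} - j_z B_\theta,

        \rho\frac{d\varepsilon}{dt} = -\frac{p}{r}\frac{\partial (ru)}{\partial r} + \eta j_z^2 - Q_{\mathrm{rad}}, \qquad
        \mu_0 j_z = \frac{1}{r}\frac{\partial (r B_\theta)}{\partial r}

    Here \eta j_z^2 is the Joule heating and Q_{\mathrm{rad}} the radiation loss term that produces the x-ray pulse.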

  13. Case report: Amputation for a puff adder (Bitis arietans) envenomation in a child - 1954

    Directory of Open Access Journals (Sweden)

    Charles T West

    2014-02-01

    Diaries spanning three decades (1943-1964) have been discovered that tell the story of the life of missionary nurses, doctors and surgeons working at the Lui and Leer Hospitals in South Sudan (then known as Southern Sudan). The medical facility at Leer during this period covered a 300-mile radius, serving approximately 60,000 of the Nilotic Western Nuer tribe [1]. It was among these records that the following case description was found. The puff adder (Bitis arietans) is one of the commonest African snakes, causing more bites in animals and humans than all other species of snake put together in sub-Saharan regions. It commonly inhabits the banks of the Nile.

  14. Current distribution measurements inside an electromagnetic plasma gun operated in a gas-puff mode.

    Science.gov (United States)

    Poehlmann, Flavio R; Cappelli, Mark A; Rieker, Gregory B

    2010-12-01

    Measurements are presented of the time-dependent current distribution inside a coaxial electromagnetic plasma gun. The measurements are carried out using an array of six axially distributed dual-Rogowski coils in a balanced circuit configuration. The radial current distributions indicate that operation in the gas-puff mode, i.e., the mode in which the electrode voltage is applied before injection of the gas, results in a stationary ionization front consistent with the presence of a plasma deflagration. The effects of varying the bank capacitance, transmission line inductance, and applied electrode voltage were studied over the range from 14 to 112 μF, 50 to 200 nH, and 1 to 3 kV, respectively.

  15. Application of bio-huff-'n'-puff technology at Jilin oil field

    Energy Technology Data Exchange (ETDEWEB)

    Xiu-Yuan Wang; Yan-Fed Xue; Gang Dai; Ling Zhao [Institute of Microbiology, Beijing (China)] [and others]

    1995-12-31

    An enriched culture 48, capable of adapting to the reservoir conditions and fermenting molasses to produce gas and acid, was used as an inoculum for bio-huff-'n'-puff tests at the Fuyu oil area of Jilin oil field. The production well was injected with water containing 4-6% (v/v) molasses and inoculum, and then shut in. After 15-21 days, the well was placed back in operation. A total of 44 wells were treated, of which only two showed no effects. The daily oil production of the treated wells increased by 33.3-733.3%. Up to the end of 1994, oil production had increased by 204 tons per well on average. Results obtained from various types of production wells were discussed.

  16. Measurements of the initial density distribution of gas puff liners by using Rayleigh scattering

    Energy Technology Data Exchange (ETDEWEB)

    Kalinin, Yu G; Shashkov, A Yu [Kurchatov Institute, Moscow (Russian Federation)]

    1997-12-31

    Rayleigh scattering of a laser beam in a gas jet is proposed for measuring the initial density distribution of gas-puff liners. The scattering method has several advantages over interferometry. In particular, it provides information on the local gas density, it is more sensitive, and the output data can be absolutely calibrated. The theoretical background of the method is briefly discussed in the paper and the optical setup used in real experiments is described. Imaging of the scattering object makes it possible to detect detailed profiles of the investigated gas jet, as illustrated by several examples taken from the experiment. In some cases even gas jet stratification has been observed. (J.U.). 1 tab., 3 figs., 1 ref.

  17. Modeling a Hypothetical 170Tm Source for Brachytherapy Applications

    International Nuclear Information System (INIS)

    Enger, Shirin A.; D'Amours, Michel; Beaulieu, Luc

    2011-01-01

    Purpose: To perform absorbed dose calculations based on Monte Carlo simulations for a hypothetical 170Tm source and to investigate the influence of encapsulating material on the energy spectrum of the emitted electrons and photons. Methods: The GEANT4 Monte Carlo code, version 9.2 patch 2, was used to simulate the decay process of 170Tm and to calculate the absorbed dose distribution using the GEANT4 Penelope physics models. A hypothetical 170Tm source based on the Flexisource brachytherapy design, with the active core set as a pure thulium cylinder (length 3.5 mm and diameter 0.6 mm) and different cylindrical source encapsulations (length 5 mm and thickness 0.125 mm) constructed of titanium, stainless steel, gold, or platinum, was simulated. The radial dose function for the line source approximation was calculated following the TG-43U1 formalism for the stainless-steel encapsulation. Results: For the titanium and stainless-steel encapsulations, 94% of the total bremsstrahlung is produced inside the core, 4.8 and 5.5% in the titanium and stainless-steel capsules, respectively, and less than 1% in water. For the gold capsule, 85% is produced inside the core, 14.2% inside the gold capsule, and a negligible amount in water. The 170Tm source is primarily a bremsstrahlung source, with the majority of bremsstrahlung photons being generated in the source core and experiencing little attenuation in the source encapsulation. Electrons are efficiently absorbed by the gold and platinum encapsulations. However, for the stainless-steel capsule (or other lower-Z encapsulations) electrons will escape. The dose from these electrons is dominant over the photon dose in the first few millimeters but is not taken into account by current standard treatment planning systems. The total energy spectrum of photons emerging from the source depends on the encapsulation composition and results in mean photon energies well above 100 keV. This is higher than the main gamma-ray energy peak at 84 keV. Based on our
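
    The TG-43U1 quantities mentioned above take the standard form (with r_0 = 1 cm, \theta_0 = \pi/2, L the active length, and \beta the angle subtended by the source line at the calculation point):

        g_L(r) = \frac{\dot{D}(r,\theta_0)}{\dot{D}(r_0,\theta_0)} \cdot
                 \frac{G_L(r_0,\theta_0)}{G_L(r,\theta_0)}, \qquad
        G_L(r,\theta) = \frac{\beta}{L\, r \sin\theta}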

  18. Open Sourcing Social Change: Inside the Constellation Model

    Directory of Open Access Journals (Sweden)

    Tonya Surman

    2008-09-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a partnership. These constellations are outwardly focused, placing their attention on creating value for those in the external environment rather than on the partnership itself. While serious effort is invested into core partnership governance and management, most of the energy is devoted to the decision making, resources and collaborative effort required to create social value. The constellations drive and define the partnership. The constellation model emerged from a deep understanding of the power of networks and peer production. Leadership rotates fluidly amongst partners, with each partner having the freedom to head up a constellation and to participate in constellations that carry out activities that are of more peripheral interest. The Internet provided the platform, the partner network enabled the expertise to align itself, and the goal of reducing chemical exposure in children kept the energy flowing. Building on seven years of experience, this article provides an overview of the constellation model, discusses the results from the CPCHE, and identifies similarities and differences between the constellation and open source models.

  19. Model of the Sgr B2 radio source

    International Nuclear Information System (INIS)

    Gosachinskij, I.V.; Khersonskij, V.K.

    1981-01-01

    A dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. This model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by the turbulent motion of the gas; the turbulence energy dissipates due to magnetic viscosity. This process occurs more rapidly in the dense core, so the core begins to collapse while the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density and size) of the collapse are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established.

  20. CO₂ Huff-n-Puff process in a light oil shallow shelf carbonate reservoir. 1994 Annual report

    Energy Technology Data Exchange (ETDEWEB)

    Wehner, S.C.

    1995-05-01

    It is anticipated that this project will show that the application of the CO₂ Huff-n-Puff process in shallow shelf carbonates can be economically implemented to recover appreciable volumes of light oil. The goals of the project are the development of guidelines for cost-effective selection of candidate reservoirs and wells, along with estimating recovery potential. The selected site for the demonstration project is the Central Vacuum Unit waterflood in Lea County, New Mexico. Work is nearing completion on the reservoir characterization components of the project. The near-term emphasis is to (1) provide an accurate distribution of original oil-in-place on a waterflood pattern entity level, (2) evaluate past recovery efficiencies, (3) perform parametric simulations, and (4) forecast performance for a site-specific field demonstration of the proposed technology. Macro zonation now exists throughout the study area and cross-sections are available. The oil-water contact has been defined. Laboratory capillary pressure data were used to define the initial water saturations within the pay horizon. The reservoir's porosity distribution has been enhanced with the assistance of geostatistical software. Three-dimensional kriging created the spatial distributions of porosity at interwell locations. Artificial intelligence software was utilized to relate core permeability to core porosity, which in turn was applied to the 3-D geostatistical porosity gridding. An equation-of-state has been developed and refined for upcoming compositional simulation exercises. Options for local grid refinement in the model are under consideration. These tasks will be completed by mid-1995, prior to initiating the field demonstrations in the second budget period.

  1. Nitrate source apportionment in a subtropical watershed using Bayesian model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Shi, Jiachun, E-mail: jcshi@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Wu, Laosheng, E-mail: laowu@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Jiang, Yonghai [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012 (China)

    2013-10-01

    Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season; AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO₃⁻, better

  2. Nitrate source apportionment in a subtropical watershed using Bayesian model

    International Nuclear Information System (INIS)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao; Shi, Jiachun; Wu, Laosheng; Jiang, Yonghai

    2013-01-01

    Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season; AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO₃⁻, better agricultural management practices and sewage disposal programs can be implemented to sustain water quality in subtropical watersheds.
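
    To make the SIAR-style inference concrete, here is a minimal Bayesian mixing sketch with a random-walk Metropolis sampler; the four source signatures and the water samples are made-up placeholders, not values from this study:

        # Infer source proportions p (on the simplex) from dual-isotope data.
        # Hypothetical numbers throughout; a sketch of the mixing idea only.
        import numpy as np

        rng = np.random.default_rng(1)
        # (d15N, d18O) signatures for AD, SN, SF, M&S -- placeholders
        src = np.array([[2.0, 60.0], [5.0, 5.0], [0.0, -5.0], [12.0, 3.0]])
        obs = rng.normal([7.0, 8.0], 1.0, size=(20, 2))  # fake water samples
        sigma = 1.5                                      # assumed measurement sd

        def log_post(z):
            p = np.exp(z - z.max()); p /= p.sum()        # softmax -> proportions
            return -0.5 * np.sum((obs - p @ src) ** 2) / sigma**2

        z, samples = np.zeros(4), []
        lp = log_post(z)
        for i in range(20000):                           # random-walk Metropolis
            z_new = z + 0.1 * rng.normal(size=4)
            lp_new = log_post(z_new)
            if np.log(rng.uniform()) < lp_new - lp:
                z, lp = z_new, lp_new
            if i > 5000:
                p = np.exp(z - z.max()); samples.append(p / p.sum())
        print("posterior mean proportions:", np.mean(samples, axis=0).round(2))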

  3. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional as well as a system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense, to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

  4. Receptor models for source apportionment of remote aerosols in Brazil

    International Nuclear Information System (INIS)

    Artaxo Netto, P.E.

    1985-11-01

    The PIXE (particle-induced X-ray emission) and PESA (proton elastic scattering analysis) methods were used in conjunction with receptor models for source apportionment of remote aerosols in Brazil. PIXE, used to determine concentrations of elements with Z ≥ 11, has a detection limit of about 1 ng/m³. The concentrations of carbon, nitrogen and oxygen in the fine fraction of Amazon Basin aerosols were measured by PESA. We sampled in Jureia (SP), Fernando de Noronha, Arembepe (BA), Firminopolis (GO), Itaberai (GO) and the Amazon Basin. For collecting the airborne particles we used cascade impactors, stacked filter units, and streaker samplers. Three receptor models were used: chemical mass balance, stepwise multiple regression analysis and principal factor analysis. The elemental and gravimetric concentrations were explained by the models within the experimental errors. Three sources of aerosol were quantitatively distinguished: marine aerosol, soil dust and aerosols related to forests. The emission of aerosols by vegetation is very clear for all the sampling sites. In the Amazon Basin and Jureia it is the major source, responsible for 60 to 80% of airborne concentrations. (Author)

  5. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
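
    The inverse-transform-sampling step described above can be sketched as follows; the energy histogram is synthetic, standing in for one of the PSF-derived PDFs:

        # Turn an empirical histogram into new samples via its inverse CDF.
        # Synthetic stand-in data; not an actual phase space file.
        import numpy as np

        rng = np.random.default_rng(2)
        energies = rng.gamma(2.0, 1.0, size=100_000)  # stand-in for PSF energies

        counts, edges = np.histogram(energies, bins=200)
        cdf = np.cumsum(counts).astype(float)
        cdf /= cdf[-1]                                # normalized empirical CDF

        u = rng.uniform(size=10_000)                  # uniform deviates in [0, 1)
        idx = np.searchsorted(cdf, u)                 # invert the CDF
        samples = edges[idx] + rng.uniform(size=u.size) * np.diff(edges)[idx]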

  6. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Directory of Open Access Journals (Sweden)

    Obioma Nwankwo

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  7. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  8. Receptor Model Source Apportionment of Nonmethane Hydrocarbons in Mexico City

    Directory of Open Access Journals (Sweden)

    V. Mugica

    2002-01-01

    With the purpose of estimating the source contributions of nonmethane hydrocarbons (NMHC) to the atmosphere at three different sites in the Mexico City Metropolitan Area, 92 ambient air samples were measured from February 23 to March 22 of 1997. Light- and heavy-duty vehicular profiles were determined to differentiate the NMHC contribution of diesel and gasoline to the atmosphere. Food cooking source profiles were also determined for chemical mass balance receptor model application. Initial source contribution estimates were carried out to determine the adequate combination of source profiles and fitting species. Ambient samples of NMHC were apportioned to motor vehicle exhaust, gasoline vapor, handling and distribution of liquefied petroleum gas (LP gas), asphalt operations, painting operations, landfills, and food cooking. Both gasoline and diesel motor vehicle exhaust were the major NMHC contributors for all sites and times, with a percentage of up to 75%. The average motor vehicle exhaust contributions increased during the day. In contrast, LP gas contribution was higher during the morning than in the afternoon. Apportionment for the most abundant individual NMHC showed that the vehicular source is the major contributor to acetylene, ethylene, pentanes, n-hexane, toluene, and xylenes, while handling and distribution of LP gas was the major source contributor to propane and butanes. Comparison between CMB estimates of NMHC and the emission inventory showed a good agreement for vehicles, handling and distribution of LP gas, and painting operations; nevertheless, emissions from diesel exhaust and asphalt operations showed differences, and the results suggest that these emissions could be underestimated.
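
    For illustration, the chemical-mass-balance step behind such receptor models can be posed as a nonnegative least-squares fit of ambient concentrations c to a species-by-source profile matrix F; the numbers below are made-up placeholders, not the study's measured profiles:

        # Solve c ~= F @ s for nonnegative source strengths s (CMB idea).
        # Illustrative profile matrix; not the measured Mexico City profiles.
        import numpy as np
        from scipy.optimize import nnls

        F = np.array([[0.08, 0.00, 0.01],    # rows: species abundances
                      [0.02, 0.60, 0.00],    # cols: sources (e.g., exhaust,
                      [0.10, 0.01, 0.30]])   # LP gas, painting)
        s_true = np.array([40.0, 25.0, 10.0])
        c = F @ s_true + np.random.default_rng(3).normal(0, 0.05, size=3)

        s_hat, residual = nnls(F, c)          # nonnegative least squares
        print("estimated source contributions:", s_hat.round(1))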

  9. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and electrode positions.

  10. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japanese coastal areas are still environmentally friendly, though there are multiple air emission sources originating from several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m³. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with background concentration and one-year meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) that estimates the atmospheric

  11. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use a Eulerian chemical transport model, CMAQ, and a Lagrangian Particle Dispersion Model, FLEXPART-WRF. These two models share the same WRF
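
    A minimal Gaussian-linear sketch of the bias-enhanced inversion idea follows; the transport operator and observations are synthetic stand-ins, not the project's CMAQ or FLEXPART-WRF outputs:

        # Estimate fluxes x and a transport-model bias b from y = H x + b + noise,
        # with Gaussian priors; generic illustration of bias-enhanced inversion.
        import numpy as np

        rng = np.random.default_rng(4)
        n_obs, n_src = 50, 5
        H = rng.normal(size=(n_obs, n_src))          # synthetic footprint operator
        x_true = rng.normal(2.0, 0.5, size=n_src)
        y = H @ x_true + 0.3 + rng.normal(0, 0.1, size=n_obs)  # 0.3 = true bias

        A = np.hstack([H, np.ones((n_obs, 1))])      # augmented state theta = [x, b]
        prior_cov = np.diag([1.0] * n_src + [0.25])
        obs_var = 0.1 ** 2

        # Posterior mean for prior N(0, prior_cov) and Gaussian observation noise
        P = np.linalg.inv(A.T @ A / obs_var + np.linalg.inv(prior_cov))
        theta = P @ (A.T @ y) / obs_var
        print("fluxes:", theta[:-1].round(2), "bias:", round(float(theta[-1]), 2))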

  12. Modeling of low pressure plasma sources for microelectronics fabrication

    International Nuclear Information System (INIS)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Rauf, Shahid; Likhanskii, Alexandre

    2017-01-01

    Chemically reactive plasmas operating in the 1 mTorr–10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E  ×  B drift. (paper)

  13. Modeling of low pressure plasma sources for microelectronics fabrication

    Science.gov (United States)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Likhanskii, Alexandre; Rauf, Shahid

    2017-10-01

    Chemically reactive plasmas operating in the 1 mTorr-10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E  ×  B drift.

  14. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs
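
    For reference, the standard Gaussian puff kernel underlying this kind of downwind-concentration estimate (textbook form with ground reflection; not necessarily the exact ARAC formulation) is

        C(x,y,z,t) = \frac{Q}{(2\pi)^{3/2}\sigma_x\sigma_y\sigma_z}
                     \exp\left[-\frac{(x-\bar{u}t)^2}{2\sigma_x^2}\right]
                     \exp\left[-\frac{y^2}{2\sigma_y^2}\right]
                     \left\{\exp\left[-\frac{(z-H)^2}{2\sigma_z^2}\right]
                           +\exp\left[-\frac{(z+H)^2}{2\sigma_z^2}\right]\right\}

    where Q is the mass in the puff, \bar{u} the transport wind speed, and H the effective release height; an explosive source enters mainly through the initial sigma values and the aerosolized fraction assigned to Q, which is where the sensitivity discussed above arises.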

  15. Particle model of a cylindrical inductively coupled ion source

    Science.gov (United States)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF coupling of the plasma is still lacking, so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is currently about seven times larger than the average Debye length because of the large computational demand of the code; it will be scaled down in the next phase of development. The filling gas is xenon, in order to minimize the time lost in the MCC collision module during the first stage of development of the code. The results presented here are preliminary, with the code already showing good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.
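
    As an example of one PIC building block, here is the standard Boris velocity update in Python; it is a generic textbook step, not code from the model described above:

        # Boris push: advance a particle velocity through E and B over one step.
        # Generic PIC building block; illustrative parameters below.
        import numpy as np

        def boris_push(v, E, B, q, m, dt):
            qmdt2 = q * dt / (2.0 * m)
            v_minus = v + qmdt2 * E                  # first half electric kick
            t = qmdt2 * B                            # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
            return v_plus + qmdt2 * E                # second half electric kick

        v = np.array([1.0e5, 0.0, 0.0])
        E = np.array([0.0, 0.0, 1.0e3])
        B = np.array([0.0, 0.0, 0.01])
        for _ in range(100):
            v = boris_push(v, E, B, q=-1.602e-19, m=9.109e-31, dt=1e-12)
        print(v)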

  16. A theoretical model of a liquid metal ion source

    International Nuclear Information System (INIS)

    Kingham, D.R.; Swanson, L.W.

    1984-01-01

    A model of liquid metal ion source (LMIS) operation has been developed which gives a consistent picture of three different aspects of LMI sources: (i) the shape and size of the ion emitting region; (ii) the mechanism of ion formation; (iii) properties of the ion beam such as angular intensity and energy spread. It was found that the emitting region takes the shape of a jet-like protrusion on the end of a Taylor cone, with ion emission from an area only a few tens of Å across, in agreement with recent TEM pictures by Sudraud. This is consistent with ion formation predominantly by field evaporation. Calculated angular intensities and current-voltage characteristics based on our fluid-dynamic jet-like protrusion model agree well with experiment. The formation of doubly charged ions is attributed to post-ionization of field-evaporated singly charged ions, and an apex field strength of about 2.0 V Å⁻¹ was calculated for a Ga source. The ion energy spread is mainly due to space charge effects; it is known to be reduced for doubly charged ions, in agreement with this post-ionization mechanism. (author)

  17. Extended gamma sources modelling using multipole expansion: Application to the Tunisian gamma source load planning

    International Nuclear Information System (INIS)

    Loussaief, Abdelkader

    2007-01-01

    In this work we extend the use of multipole moments expansion to the case of inner radiation fields. A series expansion of the photon flux was established. The main advantage of this approach is that it offers the opportunity to treat both inner and external radiation field cases. We determined the expression of the inner multipole moments in both spherical harmonics and cartesian coordinates. As an application we applied the analytical model to a radiation facility used for small target irradiation. Theoretical, experimental and simulation studies were performed, in air and in a product, and good agreement was reached. Conventional dose distribution studies for gamma irradiation facilities involve the use of isodose maps. The establishment of these maps requires the measurement of the absorbed dose at many points, which makes the task experimentally expensive and very time-consuming in simulation. Moreover, a lack of measurement points can distort the dose distribution cartography. To overcome these problems, we present in this paper a mathematical method to describe the dose distribution in air. This method is based on the multipole expansion in spherical harmonics of the photon flux emitted by the gamma source. The determination of the multipole coefficients of this expansion allows the modeling of the radiation field around the gamma source. (Author)
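
    The expansion described above can be illustrated numerically: project a sampled angular photon flux onto spherical harmonics, then evaluate the truncated series anywhere. A sketch using scipy, where the smooth test flux merely stands in for measured or simulated dose-rate data:

```python
import numpy as np
from scipy.special import sph_harm

def flux(theta, phi):
    """Hypothetical angular flux, peaked toward the polar axis (phi = 0)."""
    return 1.0 + 0.5 * np.cos(phi)

L_MAX, n_phi, n_theta = 4, 90, 180
phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi            # polar angle samples
theta = np.arange(n_theta) * 2.0 * np.pi / n_theta        # azimuth samples
TH, PH = np.meshgrid(theta, phi)
dA = (np.pi / n_phi) * (2.0 * np.pi / n_theta) * np.sin(PH)  # solid-angle element

# Multipole coefficients a_lm = integral of f * conj(Y_lm) over the sphere.
f = flux(TH, PH)
coeffs = {(l, m): np.sum(f * np.conj(sph_harm(m, l, TH, PH)) * dA)
          for l in range(L_MAX + 1) for m in range(-l, l + 1)}

# Reconstruct the flux in one direction from the truncated multipole series.
th0, ph0 = 0.3, 0.8
approx = sum(c * sph_harm(m, l, th0, ph0) for (l, m), c in coeffs.items())
print(flux(th0, ph0), approx.real)
```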

  18. SOURCE 2.0 model development: UO2 thermal properties

    International Nuclear Information System (INIS)

    Reid, P.J.; Richards, M.J.; Iglesias, F.C.; Brito, A.C.

    1997-01-01

    During analysis of CANDU postulated accidents, the reactor fuel is estimated to experience large temperature variations and to be exposed to a variety of environments, from highly oxidizing to mildly reducing. The exposure of CANDU fuel to these environments and temperatures may affect fission product releases from the fuel and cause degradation of the fuel thermal properties. SOURCE 2.0 is a safety analysis code that models the mechanisms required to calculate fission product release for a variety of accident scenarios, including large break loss of coolant accidents (LOCAs) with or without emergency core cooling. The goal of the model development is to generate models which are consistent with each other and phenomenologically based, insofar as that is possible given the state of theoretical understanding.

  19. RF Plasma modeling of the Linac4 H− ion source

    CERN Document Server

    Mattei, S; Hatayama, A; Lettry, J; Kawamura, Y; Yasumoto, M; Schmitzer, C

    2013-01-01

    This study focuses on the modelling of the ICP RF-plasma in the Linac4 H− ion source currently being constructed at CERN. A self-consistent model of the plasma dynamics with the RF electromagnetic field has been developed using a PIC-MCC method. In this paper, the model is applied to the analysis of a low-density plasma discharge initiation, with particular interest in the effect of the external magnetic field on the plasma properties, such as wall loss, electron density and electron energy. The use of a multi-cusp magnetic field effectively limits the wall losses, particularly in the radial direction. Preliminary results, however, indicate that this configuration reduces the heating efficiency. The effect is possibly due to trapping of electrons in the multi-cusp magnetic field, which prevents their continuous acceleration in the azimuthal direction.

  20. How to Model Super-Soft X-ray Sources?

    Science.gov (United States)

    Rauch, Thomas

    2012-07-01

    During outbursts, the surface temperatures of white dwarfs in cataclysmic variables far exceed half a million Kelvin. In this phase, they may become the brightest super-soft sources (SSS) in the sky. Time series of high-resolution, high-S/N X-ray spectra taken during the rise, maximum, and decline of their X-ray luminosity provide insights into the processes following such outbursts as well as into the surface composition of the white dwarf. Their analysis requires adequate NLTE model atmospheres. The Tuebingen Non-LTE Model-Atmosphere Package (TMAP) is a powerful tool for their calculation. We present the application of TMAP models to SSS spectra and discuss their validity.

  1. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
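
    To make the structural choices concrete, here is a toy version of such a mixing model under one specific set of assumptions (i.i.d. Gaussian residuals and a softmax random walk over the proportion simplex), exactly the kind of choices the study shows the results are sensitive to. The tracer signatures and sample values are invented, not the River Blackwater data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tracer signatures (rows: sources, cols: geochemical tracers).
S = np.array([[10.0, 4.0, 1.0],    # arable topsoil
              [ 6.0, 8.0, 2.0],    # road verge
              [ 2.0, 3.0, 9.0]])   # subsurface material
y = np.array([4.2, 4.1, 6.4])      # one SPM sample
sigma = 0.5                        # assumed i.i.d. measurement error

def log_post(z):
    p = np.exp(z) / np.exp(z).sum()          # softmax maps z onto the simplex
    resid = y - p @ S
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Metropolis random walk over the unconstrained parameterization.
z, samples = np.zeros(3), []
for _ in range(20000):
    z_new = z + 0.2 * rng.standard_normal(3)
    if np.log(rng.random()) < log_post(z_new) - log_post(z):
        z = z_new
    samples.append(np.exp(z) / np.exp(z).sum())

print("posterior median proportions:", np.median(samples[5000:], axis=0))
```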

  2. CRUNCH, Dispersion Model for Continuous Dense Vapour Release in Atmosphere

    International Nuclear Information System (INIS)

    Jagger, S.F.

    1987-01-01

    1 - Description of program or function: The situation modelled is as follows. A dense gas emerges from a source such that it can be considered to emerge through a rectangular area, placed in the vertical plane and perpendicular to the plume direction, which assumes that of the ambient wind. The gas flux at the source, and in every plane perpendicular to the plume direction, is constant in time, and a stationary flow field has been attained. For this to apply, the characteristic time of release must be much larger than that for dispersal of the contaminant. The plume can be thought of as consisting of a number of rectangular elements or 'puffs' emerging from the source at regular time intervals. The model follows the development of these puffs at a series of downwind points. These puffs are assumed to advect immediately with the ambient wind at their half-height. The plume also slumps due to the action of gravity and is allowed to entrain air through its sides and top surface. Spreading of a fluid element is caused by pressure differences across this element, and since the pressure gradient in the wind direction is small, the resulting pressure differences and slumping velocities are also small, thus permitting this convenient approximation. Initially, as the plume slumps, its vertical dimension decreases and with it the slumping velocity and advection velocity. Thus the plume advection velocity varies as a function of downwind distance. With the present steady state modelling, and to satisfy continuity constraints, there must be a consequent adjustment of plume height; calculation of this parameter from the volume flux ensures this occurs. As the cloud begins to grow, the advection velocity increases and the plume height decreases accordingly. With advection downwind, the cloud gains buoyancy by entraining air and, if the cloud is cold, by absorbing heat from the ground. Eventually the plume begins to disperse as would a passive pollutant, through the action of atmospheric turbulence.
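
    The slumping-and-entrainment mechanics described above reduce, in their simplest transient analogue, to a box model for a single dense puff. An illustrative Euler integration follows; the entrainment and front-speed coefficients are assumed for the sketch, not CRUNCH's calibrated values:

```python
import numpy as np

# Minimal box model of a slumping, air-entraining dense-gas puff.
g, rho_a, dt = 9.81, 1.2, 0.1       # gravity, ambient air density, time step
r, h, rho = 5.0, 2.0, 2.0           # initial radius (m), height (m), density (kg/m^3)
alpha = 0.7                         # assumed top-entrainment coefficient

V = np.pi * r ** 2 * h              # cloud volume
m_tot = rho * V                     # cloud mass (contaminant + entrained air)
for _ in range(600):                # 60 s of spreading
    g_prime = g * (rho - rho_a) / rho_a
    u_front = 1.07 * np.sqrt(max(g_prime * h, 0.0))   # gravity-current front speed
    r += u_front * dt                                  # lateral slumping
    dV = alpha * u_front * np.pi * r ** 2 * dt         # air entrained at the top
    m_tot += rho_a * dV
    V += dV
    rho = m_tot / V                                    # dilution
    h = V / (np.pi * r ** 2)                           # continuity fixes the height

print(f"after 60 s: r = {r:.1f} m, h = {h:.2f} m, rho = {rho:.3f} kg/m^3")
```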

  3. Enhancing the x-ray output of a single-wire explosion with a gas-puff based plasma opening switch

    Science.gov (United States)

    Engelbrecht, Joseph T.; Ouart, Nicholas D.; Qi, Niansheng; de Grouchy, Philip W.; Shelkovenko, Tatiana A.; Pikuz, Sergey A.; Banasek, Jacob T.; Potter, William M.; Rocco, Sophia V.; Hammer, David A.; Kusse, Bruce R.; Giuliani, John L.

    2018-02-01

    We present experiments performed on the 1 MA COBRA generator using a low density, annular, gas-puff z-pinch implosion as an opening switch to rapidly transfer a current pulse into a single metal wire on axis. This gas-puff-on-axial-wire configuration was studied for its promise as an opening switch and as a means of enhancing the x-ray output of the wire. We demonstrate that current can be switched from the gas-puff plasma into the wire, and that the timing of the switch can be controlled by the gas-puff plenum backing pressure. X-ray detector measurements indicate that for low plenum pressure Kr or Xe shots with a copper wire, this configuration can offer a significant enhancement in the peak intensity and temporal distribution of radiation in the 1-10 keV range.

  4. Modeling Degradation Product Partitioning in Chlorinated-DNAPL Source Zones

    Science.gov (United States)

    Boroumand, A.; Ramsburg, A.; Christ, J.; Abriola, L.

    2009-12-01

    Metabolic reductive dechlorination reduces aqueous phase contaminant concentrations, increasing the driving force for DNAPL dissolution. Results from laboratory and field investigations suggest that accumulation of cis-dichloroethene (cis-DCE) and vinyl chloride (VC) may occur within DNAPL source zones. The lack of (or slow) degradation of cis-DCE and VC within bioactive DNAPL source zones may result in these dechlorination products becoming distributed among the solid, aqueous, and organic phases. Partitioning of cis-DCE and VC into the organic phase may reduce aqueous phase concentrations of these contaminants and result in the enrichment of these dechlorination products within the non-aqueous phase. Enrichment of degradation products within DNAPL may reduce some of the advantages associated with the application of bioremediation in DNAPL source zones. Thus, it is important to quantify how partitioning (between the aqueous and organic phases) influences the transport of cis-DCE and VC within bioactive DNAPL source zones. In this work, abiotic two-phase (PCE-water) one-dimensional column experiments are modeled using analytical and numerical methods to examine the rate of partitioning and the capacity of PCE-DNAPL to reversibly sequester cis-DCE. These models consider aqueous-phase, nonaqueous-phase, and combined aqueous plus nonaqueous phase mass transfer resistance using linear driving force and spherical diffusion expressions. Model parameters are examined and compared for different experimental conditions to evaluate the mechanisms controlling partitioning. The Biot number, a dimensionless index of the ratio of the aqueous phase mass transfer rate in the boundary layer to the mass transfer rate within the NAPL, is used to characterize conditions in which either or both processes are controlling. Results show that application of a single aqueous resistance can capture breakthrough curves when DNAPL is distributed in porous media as low

  5. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first order terms. The imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as the testing source, generates magnetic fields in the measuring plane, which serve as inputs to the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. The minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are then selected as the algorithms for inverse computation based on the current multipole model, and the imaging performance of these three inverse methods is compared. In addition, two reconstruction parameters, residual and mean residual, are discussed, and their trends under MNLS, OWPIM and OCLIM as functions of SNR are obtained and compared. (general)
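
    Of the three inverse algorithms compared, minimum-norm least-squares is the simplest to state: the source estimate is the Moore-Penrose pseudoinverse of the forward (lead-field) matrix applied to the measured field. A sketch in which a random matrix stands in for the heart-torso forward model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lead field L: maps n_src multipole components to n_sensors.
n_sensors, n_src = 64, 12
L = rng.standard_normal((n_sensors, n_src))
q_true = np.zeros(n_src)
q_true[3] = 1.0                                   # one active component
b = L @ q_true + 0.05 * rng.standard_normal(n_sensors)  # measured field + noise

# Minimum-norm least-squares: q = pinv(L) @ b.
q_mnls = np.linalg.pinv(L) @ b
residual = np.linalg.norm(b - L @ q_mnls) / np.linalg.norm(b)
print(q_mnls.round(2), f"relative residual {residual:.3f}")
```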

  6. A model for managing sources of groundwater pollution

    Science.gov (United States)

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. Linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
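
    The formulation maps directly onto a standard linear program: maximize total disposal subject to the concentration response matrix keeping water quality within limits at the observation points. A sketch with a hypothetical 3-well, 3-facility response matrix (scipy's linprog minimizes, hence the sign flip on the objective):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical concentration response matrix: A[i, j] is the concentration
# increase at observation well i per unit disposal rate at facility j.
A = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.9, 0.4],
              [0.1, 0.2, 0.7]])
c_max = np.array([50.0, 40.0, 60.0])   # water-quality limits at each well

# Maximize total disposal: minimize the negative of the sum of rates.
res = linprog(c=-np.ones(3), A_ub=A, b_ub=c_max, bounds=[(0, None)] * 3)
print("optimal disposal rates:", res.x.round(2))
```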

  7. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this purpose. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the PYTHON script language, consistent with the computer language of the plant control system; it is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.

  8. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this purpose. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the PYTHON script language, consistent with the computer language of the plant control system; it is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.

  9. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the limited resources of the Federal and State Health Agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community.

  10. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Open-source Software for Exoplanet Atmospheric Modeling

    Science.gov (United States)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.

  12. Demonstration of a neonlike argon soft-x-ray laser with a picosecond-laser-irradiated gas puff target.

    Science.gov (United States)

    Fiedorowicz, H; Bartnik, A; Dunn, J; Smith, R F; Hunter, J; Nilsen, J; Osterheld, A L; Shlyaptsev, V N

    2001-09-15

    We demonstrate a neonlike argon-ion x-ray laser using a short-pulse laser-irradiated gas puff target. The gas puff target was formed by pulsed injection of gas from a high-pressure solenoid valve through a nozzle in the form of a narrow slit, and was irradiated with a combination of long (600 ps) and short (6 ps) high-power laser pulses with a total of 10 J of energy in a traveling-wave excitation scheme. Lasing was observed on the 3p ¹S₀ → 3s ¹P₁ transition at 46.9 nm and the 3d ¹P₁ → 3p ¹P₁ transition at 45.1 nm. A gain of 11 cm⁻¹ was measured on these transitions for targets up to 0.9 cm long.

  13. Effect of salt reduction on wheat-dough properties and quality characteristics of puff pastry with full and reduced fat content.

    Science.gov (United States)

    Silow, Christoph; Zannini, Emanuele; Axel, Claudia; Lynch, Kieran M; Arendt, Elke K

    2016-11-01

    Puff pastry is a major contributor to fat and sodium intake in many countries. The objective of this research was to determine the impact of salt (0-8.4 g/100 g flour) on the structure and quality characteristics of puff pastry with full and reduced (-40%) fat content, as well as on the rheological properties of the resulting dough. Empirical rheological tests were carried out, including dough extensibility, dough stickiness and the GlutoPeak test. The quality of the puff pastry was characterized with the VolScan, Texture Analyzer and C-Cell. NaCl reduction significantly changed the rheological properties of the basic dough as well as a number of major quality characteristics of the puff pastry. Significant differences due to NaCl addition were found in particular for dough resistance, dough stickiness, Peak Maximum Time and Maximum Torque (p < 0.05) for puff pastry containing full fat. Likewise, maximal lift, specific volume, number of cells and slice brightness increased with increasing NaCl at both fat levels. Although a sensorial comparison of puff pastries revealed that salt reduction (30%) was perceptible, no significant differences were found for all other investigated attributes. Nevertheless, a reduction of 30% salt and 40% fat in puff pastry is achievable, as neither the perception and visual impression nor attributes such as volume, firmness and flavour of the final products were significantly affected. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Efficient generation of fast neutrons by magnetized deuterons in an optimized deuterium gas-puff z-pinch

    Czech Academy of Sciences Publication Activity Database

    Klir, D.; Shishlov, A. V.; Kokshenev, V. A.; Kubeš, P.; Labetsky, A. Yu.; Řezáč, K.; Cherdizov, R. K.; Cikhardt, J.; Cikhardtová, B.; Dudkin, G. N.; Fursov, F. I.; Garapatsky, A. A.; Kovalchuk, B. M.; Kravařík, J.; Kurmaev, N. E.; Orčíková, Hana; Padalko, V. N.; Ratakhin, N. A.; Šíla, O.; Turek, Karel; Varlachev, V. A.

    2015-01-01

    Roč. 57, č. 4 (2015), s. 044005 ISSN 0741-3335 R&D Projects: GA ČR GAP205/12/0454; GA MŠk(CZ) LD14089; GA MŠk(CZ) LG13029 Grant - others:GA MŠk(CZ) LH13283 Institutional support: RVO:61389005 Keywords : z-pinch * gas puff * deuterium * fast neutrons * plasma guns Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 2.404, year: 2015

  15. Impact of low-trans fat compositions on the quality of conventional and fat-reduced puff pastry

    OpenAIRE

    Silow, Christoph; Zannini, Emanuele; Arendt, Elke K.

    2016-01-01

    Four vegetable fat blends (FBs) with low trans-fatty acid (TFA ≤ 0.6 %) content and various ratios of palm stearin (PS) and rapeseed oil (RO) were characterised and examined for their application in puff pastry production. The amount of PS decreased from FB1 to FB4 while the RO content simultaneously increased. A range of analytical methods were used to characterise the FBs, including solid fat content (SFC), differential scanning calorimetry (DSC), cone penetrometry and rheological measurements...

  16. Neutron energy distribution function reconstructed from time-of-flight signals in deuterium gas-puff Z-pinch

    Czech Academy of Sciences Publication Activity Database

    Klír, D.; Kravárik, J.; Kubeš, J.; Rezac, K.; Ananev, S.S.; Bakshaev, Y. L.; Blinov, P. I.; Chernenko, A. S.; Kazakov, E.D.; Korolev, V. D.; Ustroev, G. I.; Juha, Libor; Krása, Josef; Velyhan, Andriy

    2009-01-01

    Roč. 37, č. 3 (2009), s. 425-432 ISSN 0093-3813 R&D Projects: GA MŠk(CZ) LC528; GA MŠk LA08024 Grant - others:IAEA(XE) RC 14817 Institutional research plan: CEZ:AV0Z10100523 Keywords: deuterium * fusion reaction * gas puff * Monte Carlo reconstruction * neutron energy spectra * neutrons * Z-pinch Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.043, year: 2009

  17. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented, and the prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  18. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented, and the prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
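
    The Gaussian-MLA idea, feeding the physical model's prediction to the learner as an input feature instead of training on raw monitoring parameters alone, can be sketched as follows. The synthetic data, plume coefficients and SVR settings below are stand-ins, not the authors' setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

def gaussian_plume(q, u, x, y, z, h, sy, sz):
    """Steady-state point-source Gaussian plume with ground reflection."""
    return (q / (2 * np.pi * u * sy * sz) * np.exp(-y ** 2 / (2 * sy ** 2))
            * (np.exp(-(z - h) ** 2 / (2 * sz ** 2))
               + np.exp(-(z + h) ** 2 / (2 * sz ** 2))))

# Synthetic 'observations': the plume prediction distorted by unmodeled effects.
n, q, u, h = 200, 1000.0, 3.0, 10.0
pts = rng.uniform([50.0, -40.0, 0.0], [500.0, 40.0, 5.0], size=(n, 3))
g = np.array([gaussian_plume(q, u, x, y, z, h, sy=0.10 * x, sz=0.06 * x)
              for x, y, z in pts])
c_obs = g * rng.lognormal(0.0, 0.3, n)

# Hybrid model: receptor geometry plus the Gaussian prediction as a feature.
X = np.column_stack([pts, g])
svr = SVR(C=10.0, epsilon=0.01).fit(X, c_obs)
print("training R^2:", round(svr.score(X, c_obs), 3))
```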

  19. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices have led to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages to the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which overcomes the processing-capacity restriction of a single controller and reduces the computational complexity. 2) The vector switches (VS) developed for the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We implemented the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.

  20. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. To effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which at present remains elusive. Furthermore, an improved understanding of induced earthquake physics is pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
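
    The rate-and-state framework underlying such fault models reduces, in its simplest form, to a spring-slider with one state variable. Below is a quasi-dynamic sketch using the aging law; the parameters are illustrative lab-scale values, not the study's VW-patch model. With a < b and a spring softer than the critical stiffness, the slider spontaneously produces stick-slip cycles:

```python
import numpy as np

a, b, Dc = 0.010, 0.015, 1e-5        # rate-and-state constants, state distance (m)
sigma, mu0, v0 = 50e6, 0.6, 1e-6     # normal stress (Pa), reference friction/speed
v_pl, eta = 1e-9, 5e6                # loading rate (m/s), radiation damping (Pa s/m)
k = 0.8 * sigma * (b - a) / Dc       # stiffness just below critical -> stick-slip

def slip_speed(tau, theta):
    """Solve tau - eta*v = sigma*(mu0 + a ln(v/v0) + b ln(v0 theta/Dc)) for v."""
    lo, hi = 1e-20, 10.0
    for _ in range(80):              # bisection in log space over many decades
        mid = np.sqrt(lo * hi)
        strength = sigma * (mu0 + a * np.log(mid / v0)
                            + b * np.log(v0 * theta / Dc))
        lo, hi = (mid, hi) if tau - eta * mid > strength else (lo, mid)
    return mid

theta = 0.9 * Dc / v_pl              # slight perturbation off steady state
tau = sigma * (mu0 + (a - b) * np.log(v_pl / v0))
t, v_peak = 0.0, 0.0
while t < 1.0e5:                     # covers several cycles at these scales
    v = slip_speed(tau, theta)
    dt = min(0.2 * Dc / v, 100.0)    # time step adapts to the slip speed
    theta += (1.0 - v * theta / Dc) * dt   # aging law for the state variable
    tau += k * (v_pl - v) * dt             # elastic loading minus slip release
    t += dt
    v_peak = max(v_peak, v)

print(f"peak slip speed: {v_peak:.2e} m/s")
```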

  1. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices have led to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages to the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which overcomes the processing-capacity restriction of a single controller and reduces the computational complexity. 2) The vector switches (VS) developed for the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We implemented the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925
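
    The vector address concept is easy to illustrate: the controller encodes the whole path as a list of output ports, and each switch simply consumes the head of the list; no flow-table lookup is needed. A toy sketch in Python with an invented three-switch topology (all names hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class VectorSwitch:
    """A switch that forwards purely on the vector address, with no flow table."""
    name: str
    ports: dict = field(default_factory=dict)   # port number -> neighbour or host

    def forward(self, va, payload):
        hop, rest = va[0], va[1:]                # consume the head of the VA
        nxt = self.ports[hop]
        if isinstance(nxt, VectorSwitch):
            nxt.forward(rest, payload)
        else:
            print(f"{nxt} received: {payload}")

s1, s2, s3 = VectorSwitch("s1"), VectorSwitch("s2"), VectorSwitch("s3")
s1.ports = {1: s2, 2: s3}
s2.ports = {1: "hostB", 2: s3}
s3.ports = {1: "hostC"}

# Controller-computed VA for hostA -> s1 -> s2 -> hostB: output ports [1, 1].
s1.forward([1, 1], "hello from hostA")
```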

  2. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model. Key words: solar wind - Sun: corona - Sun: magnetic topology

  3. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikić, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-04-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model.

  4. Source modelling at the dawn of gravitational-wave astronomy

    Science.gov (United States)

    Gerosa, Davide

    2016-09-01

    The age of gravitational-wave astronomy has begun. Gravitational waves are propagating spacetime perturbations ("ripples in the fabric of space-time") predicted by Einstein's theory of General Relativity. These signals propagate at the speed of light and are generated by powerful astrophysical events, such as the merger of two black holes and supernova explosions. The first detection of gravitational waves was performed in 2015 with the LIGO interferometers. This constitutes a tremendous breakthrough in fundamental physics and astronomy: it is not only the first direct detection of such elusive signals, but also the first irrefutable observation of a black-hole binary system. The future of gravitational-wave astronomy is bright and loud: the LIGO experiments will soon be joined by a network of ground-based interferometers; the space mission eLISA has now been fully approved by the European Space Agency with a proof-of-concept mission called LISA Pathfinder launched in 2015. Gravitational-wave observations will provide unprecedented tests of gravity as well as a qualitatively new window on the Universe. Careful theoretical modelling of the astrophysical sources of gravitational-waves is crucial to maximize the scientific outcome of the detectors. In this Thesis, we present several advances on gravitational-wave source modelling, studying in particular: (i) the precessional dynamics of spinning black-hole binaries; (ii) the astrophysical consequences of black-hole recoils; and (iii) the formation of compact objects in the framework of scalar-tensor theories of gravity. All these phenomena are deeply characterized by a continuous interplay between General Relativity and astrophysics: despite being a truly relativistic messenger, gravitational waves encode details of the astrophysical formation and evolution processes of their sources. We work out signatures and predictions to extract such information from current and future observations. At the dawn of a revolutionary

  5. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether or not a biased probe is installed. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  6. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether or not a biased probe is installed. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  7. Modeling and simulation of RF photoinjectors for coherent light sources

    Science.gov (United States)

    Chen, Y.; Krasilnikov, M.; Stephan, F.; Gjonaj, E.; Weiland, T.; Dohlus, M.

    2018-05-01

    We propose a three-dimensional fully electromagnetic numerical approach for the simulation of RF photoinjectors for coherent light sources. The basic idea consists in incorporating a self-consistent photoemission model within a particle tracking code. The generation of electron beams in the injector is determined by the quantum efficiency (QE) of the cathode, the intensity profile of the driving laser as well as by the accelerating field and magnetic focusing conditions in the gun. The total charge emitted during an emission cycle can be limited by the space charge field at the cathode. Furthermore, the time and space dependent electromagnetic field at the cathode may induce a transient modulation of the QE due to surface barrier reduction of the emitting layer. In our modeling approach, all these effects are taken into account. The beam particles are generated dynamically according to the local QE of the cathode and the time dependent laser intensity profile. For the beam dynamics, a tracking code based on the Lienard-Wiechert retarded field formalism is employed. This code provides the single particle trajectories as well as the transient space charge field distribution at the cathode. As an application, the PITZ injector is considered. Extensive electron bunch emission simulations are carried out for different operation conditions of the injector, in the source limited as well as in the space charge limited emission regime. In both cases, fairly good agreement between measurements and simulations is obtained.
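
    One ingredient of such an emission model can be sketched compactly: the charge emitted per time slice follows the laser intensity profile through the QE, but is clipped once the accumulated surface charge cancels the gun field (the short-pulse space-charge limit). All numbers below are illustrative rather than PITZ parameters, and the sketch ignores charge drifting away from the cathode during the pulse:

```python
import numpy as np

E_CH, H_PLANCK, EPS0, C0 = 1.602e-19, 6.626e-34, 8.854e-12, 3.0e8

qe = 0.005                          # cathode quantum efficiency (assumed)
wavelength = 257e-9                 # UV drive-laser wavelength (m)
e_gun = 60e6                        # accelerating field at the cathode (V/m)
spot_area = np.pi * (0.5e-3) ** 2   # laser spot area (m^2)
sigma_max = EPS0 * e_gun            # areal charge density that cancels the field

t = np.linspace(0.0, 20e-12, 200)
dt = t[1] - t[0]
power = 1e5 * np.exp(-((t - 10e-12) ** 2) / (2 * (3e-12) ** 2))  # W, Gaussian pulse

photon_energy = H_PLANCK * C0 / wavelength
sigma, q_tot = 0.0, 0.0
for p in power:
    dq = qe * (p * dt / photon_energy) * E_CH / spot_area  # C/m^2 in this slice
    dq = min(dq, max(sigma_max - sigma, 0.0))              # space-charge clipping
    sigma += dq
    q_tot += dq * spot_area

print(f"emitted charge: {q_tot * 1e12:.0f} pC "
      f"(limit {sigma_max * spot_area * 1e12:.0f} pC)")
```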

  8. Towards a Unified Source-Propagation Model of Cosmic Rays

    Science.gov (United States)

    Taylor, M.; Molla, M.

    2010-07-01

    It is well known that the cosmic ray energy spectrum is multifractal, with the analysis of cosmic ray fluxes as a function of energy revealing a first “knee” slightly below 1016 eV, a second knee slightly below 1018 eV and an “ankle” close to 1019 eV. The behaviour of the highest energy cosmic rays around and above the ankle is still a mystery and precludes the development of a unified source-propagation model of cosmic rays from their source origin to Earth. A variety of acceleration and propagation mechanisms have been proposed to explain different parts of the spectrum, the most famous of course being Fermi acceleration in magnetised turbulent plasmas (Fermi 1949). Many others have been proposed for energies at and below the first knee (Peters & Cimento (1961); Lagage & Cesarsky (1983); Drury et al. (1984); Wdowczyk & Wolfendale (1984); Ptuskin et al. (1993); Dova et al. (0000); Horandel et al. (2002); Axford (1991)) as well as at higher energies between the first knee and the ankle (Nagano & Watson (2000); Bhattacharjee & Sigl (2000); Malkov & Drury (2001)). The recent fit of most of the cosmic ray spectrum up to the ankle using non-extensive statistical mechanics (NESM) (Tsallis et al. (2003)) provides what may be the strongest evidence for a source-propagation system deviating significantly from Boltzmann statistics. As Tsallis has shown (Tsallis et al. (2003)), the knees appear as crossovers between two fractal-like thermal regimes. In this work, we have developed a generalisation of the second order NESM model (Tsallis et al. (2003)) to higher orders and we have fit the complete spectrum including the ankle with third order NESM. We find that, towards the GDZ limit, a new mechanism comes into play. Surprisingly, it also presents as a modulation akin to that, in our own local neighbourhood, of cosmic rays emitted by the sun. We propose that this is due to modulation at the source and is possibly due to processes in the shell of the originating supernova. We

  9. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ('port approximation'). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  10. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ('port approximation'). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  11. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ('port approximation'). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  12. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions that are comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when tumor position is erroneously assumed to be ∼2.0 cm away from the actual position as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The

  13. Protracted releases: inferring source terms and predicting dispersal

    International Nuclear Information System (INIS)

    Vamanu, D.V.

    1988-02-01

    Analytical solutions are given to the transport-diffusion equation for archetype, atmospheric protracted releases featuring fronts of initiation, culminations, and tails of extinction. The interplay of the fitting parameters ensures that the model accommodates a wide typology of events, nearing in the extremes the instantaneous puff of the Lagrangian models, and the continuous stack emission of the Gaussian models, respectively. (author)
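
    For orientation, the two limiting regimes named in this record are commonly written in the standard textbook Gaussian forms below (generic notation: instantaneous release Q, continuous rate Q', wind speed u, dispersion parameters sigma_x,y,z, effective height H); these are not the author's specific analytical solutions.

        % Instantaneous puff (ground-level release, puff centre advected at speed u):
        \[
        \chi_{\mathrm{puff}}(x,y,z,t) = \frac{Q}{(2\pi)^{3/2}\sigma_x\sigma_y\sigma_z}
          \exp\!\left[-\frac{(x-ut)^2}{2\sigma_x^2}-\frac{y^2}{2\sigma_y^2}-\frac{z^2}{2\sigma_z^2}\right]
        \]
        % Continuous plume with ground reflection:
        \[
        \chi_{\mathrm{plume}}(x,y,z) = \frac{Q'}{2\pi u\,\sigma_y\sigma_z}
          \exp\!\left[-\frac{y^2}{2\sigma_y^2}\right]
          \left\{\exp\!\left[-\frac{(z-H)^2}{2\sigma_z^2}\right]
               + \exp\!\left[-\frac{(z+H)^2}{2\sigma_z^2}\right]\right\}
        \]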

  14. Measurement of an electronic cigarette aerosol size distribution during a puff

    Directory of Open Access Journals (Sweden)

    Belka Miloslav

    2017-01-01

    Electronic cigarettes (e-cigarettes) have become very popular recently because they are marketed as a healthier alternative to tobacco smoking and as a useful tool for smoking cessation. E-cigarettes use a heating element to create an aerosol from a solution usually consisting of propylene glycol, glycerol, and nicotine. Despite the widespread use of e-cigarettes, information about aerosol size distributions is rather sparse. This can be caused by the relative newness of e-cigarettes and by the difficulty of the measurements, in which one has to deal with a high-concentration aerosol containing volatile compounds. Therefore, we assembled an experimental setup for size measurements of e-cigarette aerosol in conjunction with a piston-based machine in order to simulate a typical puff. A TSI scanning mobility particle sizer 3936 was employed to provide information about particle concentrations and sizes. An e-cigarette commercially available on the Czech Republic market was tested and the results were compared with a conventional tobacco cigarette. The particles emitted from the e-cigarette were smaller than those of the conventional cigarette, with CMDs of 150 and 200 nm, respectively. However, the total concentration of particles from the e-cigarette was higher.

  15. Measurement of an electronic cigarette aerosol size distribution during a puff

    Science.gov (United States)

    Belka, Miloslav; Lizal, Frantisek; Jedelsky, Jan; Jicha, Miroslav; Pospisil, Jiri

    Electronic cigarettes (e-cigarettes) have become very popular recently because they are marketed as a healthier alternative to tobacco smoking and as a useful tool for smoking cessation. E-cigarettes use a heating element to create an aerosol from a solution usually consisting of propylene glycol, glycerol, and nicotine. Despite the widespread use of e-cigarettes, information about aerosol size distributions is rather sparse. This can be caused by the relative newness of e-cigarettes and by the difficulty of the measurements, in which one has to deal with a high-concentration aerosol containing volatile compounds. Therefore, we assembled an experimental setup for size measurements of e-cigarette aerosol in conjunction with a piston-based machine in order to simulate a typical puff. A TSI scanning mobility particle sizer 3936 was employed to provide information about particle concentrations and sizes. An e-cigarette commercially available on the Czech Republic market was tested and the results were compared with a conventional tobacco cigarette. The particles emitted from the e-cigarette were smaller than those of the conventional cigarette, with CMDs of 150 and 200 nm, respectively. However, the total concentration of particles from the e-cigarette was higher.

  16. Soft X-ray images of krypton gas-puff Z-pinches

    International Nuclear Information System (INIS)

    Qiu Mengtong; Kuai Bin; Zeng Zhengzhong; Lu Min; Wang Kuilu; Qiu Aici; Zhang Mei; Luo Jianhui

    2002-01-01

    A series of experiments has been carried out on the Qiang-guang I generator to study the dynamics of krypton gas-puff Z-pinches. The generator was operated at a peak current of 1.5 MA with a rise-time of 80 ns. The specific linear mass of the gas liner was about 20 μg/cm in these experiments. In the diagnostic system, a four-frame x-ray framing camera and a pinhole camera were employed. A novel feature of this camera is that it can give time-resolved x-ray images with four frames and energy-resolved x-ray images with two different filters and an array of 8 pinholes integrated into one compact assembly. As a typical experimental result, an averaged radial imploding velocity of 157 km/s over 14 ns near the late phase of implosion was measured from the time-resolved x-ray images. From the time-integrated x-ray image an averaged radial convergence of 0.072 times the original size was measured. An averaged radial expansion velocity of 130 km/s and a maximum radial convergence of 0.04 times the original size were measured from the time-resolved x-ray images. The dominant axial wavelengths of instabilities in the plasma were between 1 and 2 mm. A change in average photon energy was observed from energy spectrum- and time-resolved x-ray images.

  17. Prophylaxis of postintubation sore throat by the use of single puff inhalation of beclomethasone dipropionate preoperatively

    International Nuclear Information System (INIS)

    Bashir, I.; Masood, N.

    2014-01-01

    Objective: The objective of this study was to assess the occurrence and severity of sore throat following endotracheal anesthesia and its reduction by beclomethasone inhalation. Study Design: A randomized controlled trial. Place and Duration of Study: This study was carried out at the main operation theatre, Combined Military Hospital Rawalpindi, from October 2002 to April 2003. Patients and Methods: Two hundred patients undergoing general anaesthesia for elective surgery were included. Patients were randomly assigned to two groups of 100 patients each. The patients in group A were given one puff inhalation of beclomethasone before intubation, while group B was the control group. The patients were evaluated for occurrence and severity of postoperative sore throat by direct questions 6, 12, 24 and 48 hours after surgery. Results: In the beclomethasone group, 10 patients had sore throat as compared to 55 in the control group (p<0.01). All 10 patients who experienced symptoms in the beclomethasone group had mild sore throat, while among the patients in the control group 22 had mild, 13 had moderate and 20 had severe sore throat. After 48 hours, no patient had the symptoms in the study group, while 9 of the control group still suffered from sore throat. No drug-related side effects were observed. Conclusion: Postoperative sore throat after general anaesthesia is common (occurrence rate of 55%). A beclomethasone inhaler is highly effective in the prevention of postoperative sore throat. It reduces both the occurrence and severity of sore throat. (author)
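
    The reported group difference (10 of 100 versus 55 of 100 with sore throat) can be checked with an ordinary chi-square test; the snippet below is an illustrative re-computation, not part of the study.

        from scipy.stats import chi2_contingency

        # 2x2 table from the reported outcomes: sore throat yes/no
        # in the beclomethasone (10 of 100) vs control (55 of 100) groups.
        table = [[10, 90],
                 [55, 45]]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p is far below 0.01, consistent with the abstract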

  18. Corneal Vibrations during Intraocular Pressure Measurement with an Air-Puff Method

    Directory of Open Access Journals (Sweden)

    Robert Koprowski

    2018-01-01

    Introduction. The paper presents a commentary on the method of analysis of corneal vibrations occurring during eye pressure measurements with air-puff tonometers, for example, Corvis. The presented definition and measurement method allow for the analysis of image sequences of eye responses, i.e. corneal deformation. In particular, the outer corneal contour and sclera fragments are analysed, and 3D reconstruction is performed. Methods. On this basis, well-known parameters such as eyeball reaction or corneal response are determined. The next steps of the analysis allow for automatic and reproducible separation of four different corneal vibrations. These vibrations are associated with (1) the location of the maximum of cornea deformation; (2) the cutoff area measured in relation to the cornea in a steady state; (3) the maximum of peaks occurring between applanations; and (4) the other characteristic points of the corneal contour. Results. The results obtained enable (1) automatic determination of the amplitude of vibrations; (2) determination of the frequency of vibrations; and (3) determination of the correlation between the selected types of vibrations. Conclusions. These are diagnostic features that can be directly applied clinically for new and archived data.

  19. Neuromagnetic detection of the laryngeal area: Sensory-evoked fields to air-puff stimulation.

    Science.gov (United States)

    Miyaji, Hideaki; Hironaga, Naruhito; Umezaki, Toshiro; Hagiwara, Koichi; Shigeto, Hiroshi; Sawatsubashi, Motohiro; Tobimatsu, Shozo; Komune, Shizuo

    2014-03-01

    The sensory projections from the oral cavity, pharynx, and larynx are crucial in assuring safe deglutition, coughing, breathing, and voice production/speaking. Although several studies using neuroimaging techniques have demonstrated cortical activation related to pharyngeal and laryngeal functions, little is known regarding sensory projections from the laryngeal area to the somatosensory cortex. The purpose of this study was to establish the cortical activity evoked by somatic air-puff stimulation at the laryngeal mucosa using magnetoencephalography. Twelve healthy volunteers were trained to inhibit swallowing in response to air stimuli delivered to the larynx. Minimum norm estimation was performed on the laryngeal somatosensory evoked fields (LSEFs) to best differentiate the target activations from non-task-related activations. Evoked magnetic fields were recorded with acceptable reproducibility in the left hemisphere, with a peak latency of approximately 100 ms in 10 subjects. Peak activation was estimated at the caudolateral region of the primary somatosensory area (S1). These results establish the ability to detect LSEFs with acceptable reproducibility within a single subject and among subjects. These results also suggest the existence of laryngeal somatic afferent input to the caudolateral region of S1 in humans. Our findings indicate that further investigation in this area is needed, and should focus on laryngeal lateralization, swallowing, and speech processing.

  20. Soft X-ray Images of Krypton Gas-Puff Z-Pinches

    Institute of Scientific and Technical Information of China (English)

    邱孟通; 蒯斌; 曾正中; 吕敏; 王奎禄; 邱爱慈; 张美; 罗建辉

    2002-01-01

    A series of experiments has been carried out on the Qiang-guang Ⅰ generator to study the dynamics of krypton gas-puff Z-pinches. The generator was operated at a peak current of 1.5 MA with a rise-time of 80 ns. The specific linear mass of the gas liner was about 20 μg/cm in these experiments. In the diagnostic system, a four-frame x-ray framing camera and a pinhole camera were employed. A novel feature of this camera is that it can give time-resolved x-ray images with four frames and energy-resolved x-ray images with two different filters and an array of 8 pinholes integrated into one compact assembly. As a typical experimental result, an averaged radial imploding velocity of 157 km/s over 14 ns near the late phase of implosion was measured from the time-resolved x-ray images. From the time-integrated x-ray image an averaged radial convergence of 0.072 times the original size was measured. An averaged radial expansion velocity of 130 km/s and a maximum radial convergence of 0.04 times the original size were measured from the time-resolved x-ray images. The dominant axial wavelengths of instabilities in the plasma were between 1 and 2 mm. A change in average photon energy was observed from energy spectrum- and time-resolved x-ray images.

  1. Independent sailing with high tetraplegia using sip and puff controls: integration into a community sailing center.

    Science.gov (United States)

    Rojhani, Solomon; Stiens, Steven A; Recio, Albert C

    2017-07-01

    We are continually rediscovering how adapted recreational activity complements the rehabilitation process, enriches patients' lives and positively impacts outcome measures. Although sports for people with spinal cord injuries (SCI) have achieved spectacular visibility, participation by people with high cervical injuries is often restricted due to poor accessibility, safety concerns, lack of adaptability, and the high costs of technology. We endeavor to demonstrate the mechanisms, adaptability, accessibility, and benefits the sport of sailing creates in the rehabilitative process. Our sailor is a 27-year-old man with a history of traumatic SCI resulting in C4 complete tetraplegia. The participant completed an adapted introductory sailing course, and instruction on the sip-and-puff sail and tiller control mechanism. With practice, he navigated an on-water course in moderate winds of 5 to 15 knots. Despite trends toward shorter rehabilitation stays, aggressive transdisciplinary collaboration with recreation therapy can provide community and natural-environment experiences during the inpatient stay and continuing post-discharge. Such peak physical and psychological experiences provide a positive perspective for the future that can be shared on the inpatient unit, with families, and with support systems like sailing clubs in the community. Rehabilitation theory directs a team process to achieve patient self-awareness and initiate self-actualization in spite of disablement. Utilization of local community sailing centers that provide accessible assisted options enables person-centered self-realization of goals, assisted by family and natural supports. Such successful patients become native guides for others seeking the same experience.

  2. Effects of Electronic Cigarette Liquid Nicotine Concentration on Plasma Nicotine and Puff Topography in Tobacco Cigarette Smokers: A Preliminary Report.

    Science.gov (United States)

    Lopez, Alexa A; Hiler, Marzena M; Soule, Eric K; Ramôa, Carolina P; Karaoghlanian, Nareg V; Lipato, Thokozeni; Breland, Alison B; Shihadeh, Alan L; Eissenberg, Thomas

    2016-05-01

    Electronic cigarettes (ECIGs) aerosolize a liquid that usually contains propylene glycol and/or vegetable glycerin, flavorants, and the dependence-producing drug nicotine in various concentrations. This study examined the extent to which ECIG liquid nicotine concentration is related to user plasma nicotine concentration in ECIG-naïve tobacco cigarette smokers. Sixteen ECIG-naïve cigarette smokers completed four laboratory sessions that differed by the nicotine concentration of the liquid (0, 8, 18, or 36 mg/ml) that was placed into a 1.5 Ohm, dual coil "cartomizer" powered by a 3.3 V battery. In each session, participants completed two, 10-puff ECIG use bouts with a 30-second inter-puff interval; bouts were separated by 60 minutes. Venous blood was sampled before and after bouts for later analysis of plasma nicotine concentration; puff duration, volume, and average flow rate were measured during each bout. In bout 1, relative to the 0 mg/ml nicotine condition (mean = 3.8 ng/ml, SD = 3.3), plasma nicotine concentration increased significantly immediately after the bout for the 8 (mean = 8.8 ng/ml, SD = 6.3), 18 (mean = 13.2 ng/ml, SD = 13.2), and 36 mg/ml (mean = 17.0 ng/ml, SD = 17.9) liquid concentrations. A similar pattern was observed after bout 2. Average puff duration in the 36 mg/ml condition was significantly shorter compared to the 0 mg/ml nicotine condition. Puff volume increased during the second bout for the 8 and 18 mg/ml conditions. For a given ECIG device, nicotine delivery may be directly related to liquid concentration. ECIG-naïve cigarette smokers can, from their first use bout, attain cigarette-like nicotine delivery profiles with some currently available ECIG products. Liquid nicotine concentration can influence plasma nicotine concentration in ECIG-naïve cigarette smokers, and, at some concentrations, the nicotine delivery profile of a 3.3 V ECIG with a dual coil, 1.5-Ohm cartomizer approaches that of a combustible tobacco cigarette in this

  3. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  4. Modelling and optimisation of fs laser-produced Kα sources

    International Nuclear Information System (INIS)

    Gibbon, P.; Masek, M.; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; Linde, D. von der

    2009-01-01

    Recent theoretical and numerical studies of laser-driven femtosecond Kα sources are presented, aimed at understanding a recent experimental campaign to optimize emission from thin coating targets. Particular attention is given to control over the laser-plasma interaction conditions defined by the interplay between a controlled prepulse and the angle of incidence. It is found that the x-ray efficiency for poor-contrast laser systems in which a large preplasma is suspected can be enhanced by using a near-normal incidence geometry even at high laser intensities. With high laser contrast, similar efficiencies can be achieved by going to larger incidence angles, but only at the expense of a larger x-ray spot size. New developments in three-dimensional modelling are also reported with the goal of handling interactions with geometrically complex targets and finite resistivity. (orig.)

  5. Modeling in control of the Advanced Light Source

    International Nuclear Information System (INIS)

    Bengtsson, J.; Forest, E.; Nishimura, H.; Schachinger, L.

    1991-05-01

    A software system for control of accelerator physics parameters of the Advanced Light Source (ALS) is being designed and implemented at LBL. Some of the parameters we wish to control are tunes, chromaticities, and closed orbit distortions as well as linear lattice distortions and, possibly, amplitude- and momentum-dependent tune shifts. In all our applications, the goal is to allow the user to adjust physics parameters of the machine, instead of turning knobs that control magnets directly. This control will take place via a highly graphical user interface, with both a model appropriate to the application and any correction algorithm running alongside as separate processes. Many of these applications will run on a Unix workstation, separate from the controls system, but communicating with the hardware database via Remote Procedure Calls (RPCs)

  6. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform, to post NASA challenges from each Center for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  7. Development of an emissions inventory model for mobile sources

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A W; Broderick, B M [Trinity College, Dublin (Ireland). Dept. of Civil, Structural and Environmental Engineering]

    2000-07-01

    Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of a wide range of pollutants. A mutual characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air in the form of an atmospheric emissions inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for a wide range of vehicle types. The majority of inventories are compiled using 'passive' data from either surveys or transportation models and by their very nature tend to be out-of-date by the time they are compiled. Current trends are towards integrating urban traffic control systems and assessments of the environmental effects of motor vehicles. In this paper, a methodology for estimating emissions from mobile sources using real-time data is described. This methodology is used to calculate emissions of sulphur dioxide (SO2), oxides of nitrogen (NOx), carbon monoxide (CO), volatile organic compounds (VOC), particulate matter less than 10 μm aerodynamic diameter (PM10), 1,3-butadiene (C4H6) and benzene (C6H6) at a test junction in Dublin. Traffic data, which are required on a street-by-street basis, are obtained from induction loops and closed circuit television (CCTV) as well as statistical data. The observed traffic data are compared to simulated data from a travel demand model. As a test case, an emissions inventory is compiled for a heavily trafficked signalized junction in an urban environment using the measured data. In order that the model may be validated, the predicted emissions are employed in a dispersion model along with local meteorological conditions and site geometry. The resultant pollutant concentrations are compared to average ambient kerbside conditions.
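
    The core of such an inventory is the product of activity data and emission factors accumulated per street link. A minimal sketch with invented counts and factors (not the Dublin values) follows.

        # Minimal street-link emission sketch: E = sum over vehicle classes of
        # (vehicle count) x (distance travelled) x (emission factor).
        # All counts and factors below are illustrative placeholders.
        link_length_km = 0.4
        counts = {"car": 1200, "bus": 60, "hgv": 90}            # vehicles/hour from induction loops
        ef_nox_g_per_km = {"car": 0.4, "bus": 6.0, "hgv": 5.0}  # emission factors, g NOx per veh-km

        nox_g_per_h = sum(counts[v] * link_length_km * ef_nox_g_per_km[v] for v in counts)
        print(f"NOx emission on link: {nox_g_per_h / 1000:.2f} kg/h")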

  8. Diagnosis of high-intensity pulsed heavy ion beam generated by a novel magnetically insulated diode with gas puff plasma gun.

    Science.gov (United States)

    Ito, H; Miyake, H; Masugata, K

    2008-10-01

    Intense pulsed heavy ion beams are expected to be applied to materials processing, including surface modification and ion implantation. For those applications, it is very important to generate high-purity ion beams with various ion species. For this purpose, we have developed a new type of magnetically insulated ion diode with an active ion source of a gas puff plasma gun. When the ion diode was operated at a diode voltage of about 190 kV, a diode current of about 15 kA, and a pulse duration of about 100 ns, an ion beam with an ion current density of 54 A/cm² was obtained at 50 mm downstream from the anode. By evaluating the ion species and the energy spectrum of the ion beam via a Thomson parabola spectrometer, it was confirmed that the ion beam consists of nitrogen ions (N⁺ and N²⁺) of energy 100-400 keV and proton impurities of energy 90-200 keV. The purity of the beam was evaluated to be 94%. A high-purity pulsed nitrogen ion beam was successfully obtained by the developed ion diode system.

  9. Preliminary results of an examination of electronic cigarette user puff topography: the effect of a mouthpiece-based topography measurement device on plasma nicotine and subjective effects.

    Science.gov (United States)

    Spindle, Tory R; Breland, Alison B; Karaoghlanian, Nareg V; Shihadeh, Alan L; Eissenberg, Thomas

    2015-02-01

    Electronic cigarettes (ECIGs) heat a nicotine-containing solution; the resulting aerosol is inhaled by the user. Nicotine delivery may be affected by users' puffing behavior (puff topography), and little is known about the puff topography of ECIG users. Puff topography can be measured using mouthpiece-based computerized systems. However, the extent to which a mouthpiece influences nicotine delivery and subjective effects in ECIG users is unknown. Plasma nicotine concentration, heart rate, and subjective effects were measured in 13 experienced ECIG users who used their preferred ECIG and liquid (≥ 12 mg/ml nicotine) during 2 sessions (with or without a mouthpiece). In both sessions, participants completed an ECIG use session in which they were instructed to take 10 puffs with 30-second inter-puff intervals. Puff topography was recorded in the mouthpiece condition. Almost all measures of the effects of ECIG use were independent of topography measurement. Collapsed across sessions, mean plasma nicotine concentration increased by 16.8 ng/ml, and mean heart rate increased by 8.5 bpm (ps < .05). When using the topography measurement equipment, ECIG-using participants took larger and longer puffs with lower flow rates. In experienced ECIG users, measuring ECIG topography did not influence ECIG-associated nicotine delivery or most measures of withdrawal suppression. Topography measurement systems will need to account for the low flow rates observed for ECIG users.

  10. Source term identification in atmospheric modelling via sparse optimization

    Science.gov (United States)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in the monitoring the CO2 emission limits where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for a sparsest solution (solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One of such examples is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both problems of identification of the source location and of the time process of the source release. In the first case, it is usually assumed that there are only few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
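
    In generic notation (not the authors'), the sparsity-seeking formulation with the nonnegativity constraint discussed above is the nonnegative basis-pursuit-denoising problem, with A the source-receptor sensitivity matrix, y the observations and epsilon the allowed misfit:

        \[
        \min_{x \geq 0} \; \|x\|_1
        \quad \text{subject to} \quad \|Ax - y\|_2 \leq \varepsilon
        \]
        % The l1 norm is the usual convex surrogate for the number of nonzeros,
        % so the recovered release vector x is sparse in space and/or time.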

  11. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 p. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  12. Model of electron contamination sources for radiotherapy photon beams

    International Nuclear Information System (INIS)

    Gonzalez Infantes, W.; Lallena Rojo, A. M.; Anguiano Millan, M.

    2013-01-01

    A model of virtual electron sources is proposed that allows the contamination sources to be reproduced from the input parameters of the patient representation. Comparing depth-dose values and profiles calculated from the full simulation of the accelerator heads with the values calculated using the source model, it is found that the model is capable of reproducing depth dose distributions and profiles. (Author)

  13. A Series of Jets that Drove Streamer-Puff CMEs from Giant Active Region of 2014

    Science.gov (United States)

    Panesar, Navdeep K.; Sterling, Alphonse C.; Moore, Ronald L.

    2016-01-01

    We investigate characteristics of solar coronal jets that originated from active region NOAA 12192 and produced coronal mass ejections (CMEs). This active region produced many non-jet major flare eruptions (X and M class) that made no CME. A multitude of jets occurred from the southeast edge of the active region, and in contrast to the major-flare eruptions in the core, six of these jets resulted in CMEs. Our jet observations are from SDO/AIA EUV channels and from Hinode/XRT, and CME observations are from the SOHO/LASCO C2 coronagraph. Each jet-driven CME was relatively slow-moving (approx. 200 - 300 km/s) compared to most CMEs; had angular width (20° - 50°) comparable to that of the streamer base; and was of the "streamer-puff" variety, whereby a pre-existing streamer was transiently inflated but not removed (blown out) by the passage of the CME. Much of the chromospheric-temperature plasma of the jets producing the CMEs escaped from the Sun, whereas relatively more of the chromospheric plasma in the non-CME-producing jets fell back to the solar surface. We also found that the CME-producing jets tended to be faster in speed and longer in duration than the non-CME-producing jets. We expect that the jets result from eruptions of mini-filaments. We further propose that the CMEs are driven by magnetic twist injected into streamer-base coronal loops when erupting twisted mini-filament field reconnects with the ambient field at the foot of those loops.

  14. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS linac (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using the RiskSpectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with the SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS linac parts/systems are: 1) the SCL (superconducting linac) and front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is the redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Enough diagnostics has to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.

  15. Modeling the explosion-source region: An overview

    International Nuclear Information System (INIS)

    Glenn, L.A.

    1993-01-01

    The explosion-source region is defined as the region surrounding an underground explosion that cannot be described by elastic or anelastic theory. This region extends typically to ranges of up to 1 km/(kt)^(1/3), but for some purposes, such as yield estimation via hydrodynamic means (CORRTEX and HYDRO PLUS), the maximum range of interest is less by an order of magnitude. For the simulation or analysis of seismic signals, however, what is required is the time-resolved motion and stress state at the inelastic boundary. Various analytic approximations have been made for these boundary conditions, but since they rely on near-field empirical data they cannot be expected to reliably extrapolate to different explosion sites. More important, without some knowledge of the initial energy density and the characteristics of the medium immediately surrounding the explosion, these simplified models are unable to distinguish chemical from nuclear explosions, identify cavity decoupling, or account for such phenomena as anomalous dissipation via pore collapse.

  16. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end to be a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian Schell-model results shows a better agreement with the measurement. In addition, the complex degree of coherence, derived from the model results, is compared with the theoretical predictions of the modified Van Zernike equation, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
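
    Schematically, the elementary-source decomposition represents the cross-spectral density of the fiber end as a sum of mutually uncorrelated, spatially shifted copies of one elementary field e(r), weighted by a nonnegative function p(r); the generic form below is an assumed textbook statement, not a formula quoted from the paper:

        \[
        W(\mathbf{r}_1,\mathbf{r}_2) \;\approx\; \sum_{n} p(\mathbf{r}_n)\,
          e^{*}(\mathbf{r}_1-\mathbf{r}_n)\, e(\mathbf{r}_2-\mathbf{r}_n)
        \]
        % p is derived here from the near-field (fiber-end) intensity, while the
        % shape of e follows from the measured far-field intensity distribution.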

  17. An Analysis of a Puff Dispersion Model for a Coastal Region.

    Science.gov (United States)

    1982-06-01

  18. Influence of corneal biomechanical properties on intraocular pressure differences between an air-puff tonometer and the Goldmann applanation tonometer.

    Science.gov (United States)

    Tranchina, Laura; Lombardo, Marco; Oddone, Francesco; Serrao, Sebastiano; Schiano Lomoriello, Domenico; Ducoli, Pietro

    2013-01-01

    To estimate the influence of corneal properties on intraocular pressure (IOP) differences between an air-puff tonometer (NT530P; Nidek) and the Goldmann applanation tonometer (Haag-Streit). The influence of central corneal thickness (CCT), keratometry, and Ocular Response Analyzer (Reichert) measurements of corneal viscoelasticity [corneal hysteresis (CH) and corneal resistance factor (CRF)] on IOP differences between tonometers was evaluated. The CRF was calculated to be the best predictor of the differences in IOP readings between tonometers (r2=0.23; P<0.001). Corneal resistance to applanation induced by either contact or noncontact tonometers was calculated to be the most determinant factor in influencing IOP differences between applanation tonometers.

  19. Versatile Markovian models for networks with asymmetric TCP sources

    NARCIS (Netherlands)

    van Foreest, N.D.; Haverkort, Boudewijn R.H.M.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2004-01-01

    In this paper we use Stochastic Petri Nets (SPNs) to study the interaction of multiple TCP sources that share one or two buffers, thereby considerably extending earlier work. We first consider two sources sharing a buffer and investigate the consequences of two popular assumptions for the loss

  20. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  1. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  2. Multicriteria decision making model for choosing between open source and non-open source software

    Directory of Open Access Journals (Sweden)

    Edmilson Alves de Moraes

    2008-09-01

    This article proposes the use of a multicriteria method for supporting a decision problem in which the intent is to choose software, given the options of open source and non-open source. The study shows how a method for decision making can be used to provide problem structuration and simplify the decision maker's job. The method, the Analytic Hierarchy Process (AHP), is described step by step and its benefits and flaws are discussed. Following the theoretical discussion, a multiple case study is presented, in which two companies use the decision making method. The analysis was supported by Expert Choice, a software package developed based on the AHP framework.
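
    The eigenvector step at the heart of AHP can be sketched in a few lines: the principal eigenvector of a reciprocal pairwise-comparison matrix gives the priority weights, and the consistency index follows from the principal eigenvalue. The 3x3 matrix below is illustrative, not taken from the case study.

        import numpy as np

        # AHP weight extraction sketch: reciprocal pairwise-comparison matrix
        # (1s on the diagonal, a_ji = 1/a_ij) with illustrative judgments.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                      # principal eigenvalue lambda_max
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                     # normalized priority weights
        ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
        print("weights:", np.round(w, 3), " CI:", round(ci, 3))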

  3. Laboratory Plasma Source as an MHD Model for Astrophysical Jets

    Science.gov (United States)

    Mayo, Robert M.

    1997-01-01

    The significance of the work described herein lies in the demonstration that Magnetized Coaxial Plasma Gun (MCG) devices like CPS-1 produce energetic laboratory magneto-flows with embedded magnetic fields that can be used as a simulation tool to study the flow interaction dynamics of jet flows, to demonstrate the magnetic acceleration and collimation of flows with primarily toroidal fields, and to study cross-field transport in turbulent accreting flows. Since plasmas produced in MCG devices have magnetic topology and MHD flow regime similarity to stellar and extragalactic jets, we expect that careful investigation of these flows in the laboratory will reveal fundamental physical mechanisms influencing astrophysical flows. Discussion in the next section (Sec. 2) focuses on recent results describing collimation, leading flow surface interaction layers, and turbulent accretion. The primary objectives for a new three-year effort would involve the development and deployment of novel electrostatic, magnetic, and visible plasma diagnostic techniques to measure plasma and flow parameters of the CPS-1 device in the flow chamber downstream of the plasma source to study (1) mass ejection, morphology, collimation and stability of energetic outflows, (2) the effects of external magnetization on collimation and stability, (3) the interaction of such flows with background neutral gas, the generation of visible emission in such interaction, and the effect of neutral clouds on jet flow dynamics, and (4) the cross-magnetic-field transport of turbulent accreting flows. The applicability of existing laboratory plasma facilities to the study of stellar and extragalactic plasmas should be exploited to elucidate underlying physical mechanisms that cannot be ascertained through astrophysical observation, and to provide a baseline for a wide variety of proposed models, MHD and otherwise. The work proposed herein represents a continued effort on a novel approach in relating laboratory experiments to

  4. Near Source 2007 Peru Tsunami Runup Observations and Modeling

    Science.gov (United States)

    Borrero, J. C.; Fritz, H. M.; Kalligeris, N.; Broncano, P.; Ortega, E.

    2008-12-01

    On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights and massive inundation distances up to 2 km were measured. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. As with all near field tsunamis, the waves struck within minutes of the massive ground shaking. Spontaneous evacuations coordinated by the Peruvian Coast Guard minimized the fatalities and illustrate the importance of community-based education and awareness programs. The residents of the fishing village Lagunilla were unaware of the tsunami hazard after an earthquake and did not evacuate, which resulted in 3 fatalities. Despite the relatively benign tsunami effects at Pisco from this event, the tsunami hazard for this city (and its liquefied natural gas terminal) cannot be underestimated. Between 1687 and 1868, the city of Pisco was destroyed 4 times by tsunami waves. Since then, two events (1974 and 2007) have resulted in partial inundation and moderate damage. The fact that potentially devastating tsunami runup heights were observed immediately south of the peninsula only serves to underscore this point.

  5. A Systems Thinking Model for Open Source Software Development in Social Media

    OpenAIRE

    Mustaquim, Moyen

    2010-01-01

    In this paper a social media model, based on systems thinking methodology, is proposed to understand the behavior of the open source software development community working in social media. The proposed model is focused on the relational influences of two different systems: social media and the open source community. This model can be useful for taking decisions which are complicated and where solutions are not apparent. Based on the proposed model, an efficient way of working in open source developm...

  6. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity

  7. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2008-04-21

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.
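
    The planar dose step described in these two records reduces to a 2-D convolution of the in-air fluence with a pencil-beam kernel. The sketch below uses an invented open field with uniform leakage and a synthetic two-Gaussian kernel; in the study the kernel is experimentally determined.

        import numpy as np
        from scipy.signal import fftconvolve

        # Planar-dose sketch: in-air fluence map convolved with a pencil-beam kernel.
        # Field size, leakage and kernel widths below are illustrative only.
        n, spacing_mm = 256, 1.0
        y, x = np.mgrid[-n//2:n//2, -n//2:n//2] * spacing_mm
        fluence = np.where((np.abs(x) < 50) & (np.abs(y) < 50), 1.0, 0.016)  # 10x10 cm field, 1.6% leakage

        r2 = x**2 + y**2
        kernel = np.exp(-r2 / (2 * 2.0**2)) + 0.05 * np.exp(-r2 / (2 * 15.0**2))  # narrow core + scatter tail
        kernel /= kernel.sum()                             # normalize so an open field gives dose ~1

        dose = fftconvolve(fluence, kernel, mode="same")   # relative planar dose
        print("central-axis relative dose:", dose[n//2, n//2].round(3))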

  8. Source apportionment of airborne particulates through receptor modeling: Indian scenario

    Science.gov (United States)

    Banerjee, Tirthankar; Murari, Vishnu; Kumar, Manish; Raju, M. P.

    2015-10-01

    Airborne particulate chemistry is mostly governed by the associated sources, and apportionment of specific sources is essential to delineate explicit control strategies. The present submission initially deals with the publications (1980s-2010s) of Indian origin which report regional heterogeneities of particulate concentrations with reference to associated species. Such meta-analyses clearly indicate the presence of reservoirs of both primary and secondary aerosols in different geographical regions. Further, the identification of specific signatory molecules for individual source categories was also evaluated in terms of scientific merit and repeatability. Source signatures mostly resemble international profiles while, in selected cases, lacking appropriateness. In India, source apportionment (SA) of airborne particulates was initiated back in 1985 through factor analysis; however, principal component analysis (PCA) shares the major proportion of applications (34%) followed by enrichment factor (EF, 27%), chemical mass balance (CMB, 15%) and positive matrix factorization (PMF, 9%). Mainstream SA analyses identify earth crust and road dust resuspensions (traced by Al, Ca, Fe, Na and Mg) as a principal source (6-73%) followed by vehicular emissions (traced by Fe, Cu, Pb, Cr, Ni, Mn, Ba and Zn; 5-65%), industrial emissions (traced by Co, Cr, Zn, V, Ni, Mn, Cd; 0-60%), fuel combustion (traced by K, NH4+, SO4-, As, Te, S, Mn; 4-42%), marine aerosols (traced by Na, Mg, K; 0-15%) and biomass/refuse burning (traced by Cd, V, K, Cr, As, TC, Na, K, NH4+, NO3-, OC; 1-42%). In most of the cases, temporal variations of individual source contributions for a specific geographic region exhibit radical heterogeneity, possibly due to unscientific orientation of individual tracers for a specific source, exaggerated by methodological weakness, inappropriate sample size, implications of secondary aerosols and inadequate emission inventories. Conclusively, a number of challenging

  9. Studies and modeling of cold neutron sources; Etude et modelisation des sources froides de neutron

    Energy Technology Data Exchange (ETDEWEB)

    Campioni, G

    2004-11-15

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was organized along the following three axes. First, the gathering of the specific information forming the material of this work. This set of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, a module of uncoupling, integrated in the official version of Tripoli-4, can perform Monte-Carlo parametric studies with savings in CPU time reaching a factor of 50. A module of coupling, simulating neutron guides, has also been developed and implemented in the Monte-Carlo code McStas. Thirdly, achieving a complete study for the validation of the installed calculation chain. These studies focus on 3 cold sources currently functioning: SP1 from the Orphee reactor and 2 other sources (SFH and SFV) from the HFR at the Laue Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  10. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    Science.gov (United States)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Re_t = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar flux models towards predicting both the mean concentration and the plume structure. Since algebraic flux models do not increase the computational effort substantially, the results indicate that the use of tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.
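
    The two closure levels compared in this record are commonly written as follows (standard forms with generic constants; Sc_t is a turbulent Schmidt number, k and epsilon the turbulence kinetic energy and its dissipation):

        % Standard eddy-diffusivity (simple gradient diffusion) closure:
        \[
        \overline{u_i' c'} = -\frac{\nu_t}{Sc_t}\,\frac{\partial C}{\partial x_i}
        \]
        % Algebraic (generalized gradient diffusion) flux model, which lets the
        % anisotropic Reynolds stresses orient the scalar flux:
        \[
        \overline{u_i' c'} = -C_s\,\frac{k}{\varepsilon}\,\overline{u_i' u_j'}\,
          \frac{\partial C}{\partial x_j}
        \]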

  11. Modelling of novel light sources based on asymmetric heterostructures

    International Nuclear Information System (INIS)

    Afonenko, A.A.; Kononenko, V.K.; Manak, I.S.

    1995-01-01

    For asymmetric quantum-well heterojunction laser sources, processes of carrier injection into quantum wells are considered. In contrast to ordinary quantum-well light sources, active layers in the novel nanocrystalline systems have different thickness and/or compositions. In addition, wide-band gap barrier layers separating the quantum wells may have a linear or parabolic energy potential profile. For various kinds of the structures, mathematical simulation of dynamic response has been carried out. (author). 8 refs, 5 figs

  12. Source apportionment of fine particulate matter in China in 2013 using a source-oriented chemical transport model.

    Science.gov (United States)

    Shi, Zhihao; Li, Jingyi; Huang, Lin; Wang, Peng; Wu, Li; Ying, Qi; Zhang, Hongliang; Lu, Li; Liu, Xuejun; Liao, Hong; Hu, Jianlin

    2017-12-01

    China has been suffering high levels of fine particulate matter (PM2.5). Designing effective PM2.5 control strategies requires information about the contributions of different sources. In this study, a source-oriented Community Multiscale Air Quality (CMAQ) model was applied to quantitatively estimate the contributions of different source sectors to PM2.5 in China. Emissions of primary PM2.5 and the gas pollutants SO2, NOx, and NH3, which are precursors of particulate sulfate, nitrate, and ammonium (SNA, major PM2.5 components in China), from eight source categories (power plants, residential sources, industries, transportation, open burning, sea salt, windblown dust and agriculture) were separately tracked to determine their contributions to PM2.5 in 2013. The industrial sector is the largest source of SNA in Beijing, Xi'an and Chongqing, followed by agriculture and power plants. Residential emissions are also important sources of SNA, especially in winter when severe pollution events often occur. Nationally, the contributions of different source sectors to annual total PM2.5 from high to low are industries, residential sources, agriculture, power plants, transportation, windblown dust, open burning and sea salt. Provincially, residential sources and industries are the major anthropogenic sources of primary PM2.5, while industries, agriculture, power plants and transportation are important for SNA in most provinces. For total PM2.5, residential and industrial emissions are the top two sources, with a combined contribution of 40-50% in most provinces. The contributions of power plants and agriculture to total PM2.5 are about 10% each. Secondary organic aerosol accounts for about 10% of annual PM2.5 in most provinces, with higher contributions in southern provinces such as Yunnan (26%), Hainan (25%) and Taiwan (21%). Windblown dust is an important source in western provinces such as Xizang (55% of total PM2.5), Qinghai (74%), Xinjiang (59

  13. Source apportionment of PM2.5 in North India using source-oriented air quality models

    International Nuclear Information System (INIS)

    Guo, Hao; Kota, Sri Harsha; Sahu, Shovan Kumar; Hu, Jianlin; Ying, Qi; Gao, Aifang; Zhang, Hongliang

    2017-01-01

    In recent years, severe pollution events were observed frequently in India, especially in its capital, New Delhi. However, limited studies have been conducted to understand the sources of high pollutant concentrations for designing effective control strategies. In this work, source-oriented versions of the Community Multi-scale Air Quality (CMAQ) model with the Emissions Database for Global Atmospheric Research (EDGAR) were applied to quantify the contributions of eight source types (energy, industry, residential, on-road, off-road, agriculture, open burning and dust) to fine particulate matter (PM2.5) and its components, including primary PM (PPM) and secondary inorganic aerosol (SIA), i.e. sulfate, nitrate and ammonium ions, in Delhi and three surrounding cities, Chandigarh, Lucknow and Jaipur, in 2015. PPM mass is dominated by industry and residential activities (>60%). The energy (∼39%) and industry (∼45%) sectors contribute significantly to PPM south of Delhi, where PPM reaches a maximum of 200 μg/m³ during winter. Unlike PPM, SIA concentrations from different sources are more heterogeneous. High SIA concentrations (∼25 μg/m³) in south Delhi and central Uttar Pradesh were mainly attributed to the energy, industry and residential sectors. Agriculture is more important for SIA than for PPM, and the contributions of on-road and open burning sources to SIA are also higher than to PPM. The residential sector contributes most to total PM2.5 (∼80 μg/m³), followed by industry (∼70 μg/m³) in North India. Energy and agriculture contribute ∼25 μg/m³ and ∼16 μg/m³ to total PM2.5, while SOA contributes <5 μg/m³. In Delhi, industry and residential activities contribute 80% of total PM2.5. - Highlights: • Sources of PM2.5 in North India were quantified by source-oriented CMAQ. • Industrial/residential activities are the dominating sources (60-70%) for PPM. • Energy/agriculture are the most important sources (30-40%) for SIA. • Strong seasonal

  14. Water Quality Assessment of River Soan (Pakistan) and Source Apportionment of Pollution Sources Through Receptor Modeling.

    Science.gov (United States)

    Nazeer, Summya; Ali, Zeshan; Malik, Riffat Naseem

    2016-07-01

    The present study was designed to determine the spatiotemporal patterns in the water quality of River Soan using multivariate statistics. A total of 26 sites were surveyed along River Soan and its associated tributaries during the pre- and post-monsoon seasons in 2008. Hierarchical agglomerative cluster analysis (HACA) classified the sampling sites into three groups according to their degree of pollution, ranging from least to highly degraded water quality. Discriminant function analysis (DFA) revealed that alkalinity, orthophosphates, nitrates, ammonia, salinity, and Cd were the variables that significantly discriminate among the three groups identified by HACA. Temporal trends identified through DFA revealed that COD, DO, pH, Cu, Cd, and Cr could account for the major seasonal variations in water quality. PCA/FA identified six factors as potential sources of pollution of River Soan. Absolute principal component scores with multiple regression (APCS-MLR) further explained the percent contribution from each source. Heavy metals were largely added through industrial activities (28%) and sewage waste (28%), nutrients through agricultural runoff (35%) and sewage waste (28%), organic pollution through sewage waste (27%) and urban runoff (17%), and macroelements through urban runoff (39%) and mineralization and sewage waste (30%). The present study showed that anthropogenic activities are the major source of variations in River Soan. In order to address the water quality issues, implementation of effective waste management measures is needed.

  15. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industries) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  16. Modelling [CAS - CERN Accelerator School, Ion Sources, Senec (Slovakia), 29 May - 8 June 2012

    International Nuclear Information System (INIS)

    Spädtke, P

    2013-01-01

    Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are presented together with suitable models describing their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H⁻ sources) together with some remarks on beam transport. (author)

  17. Monte Carlo model for a thick target T(D,n)⁴He neutron source

    International Nuclear Information System (INIS)

    Webster, W.M.

    1976-01-01

    A brief description is given of a calculational model developed to simulate a T(D,n)⁴He neutron source which is anisotropic in energy and intensity. The model also provides a means for including the time dependency of the neutron source. Although the model has been applied specifically to the Lawrence Livermore Laboratory ICT accelerator, the technique is general and can be applied to any similar neutron source.

  18. Mesorad dose assessment model. Volume 1. Technical basis

    International Nuclear Information System (INIS)

    Scherpelz, R.I.; Bander, T.J.; Athey, G.F.; Ramsdell, J.V.

    1986-03-01

    MESORAD is a dose assessment model for emergency response applications. Using release data for as many as 50 radionuclides, the model calculates: (1) external doses resulting from exposure to radiation emitted by radionuclides contained in elevated or deposited material; (2) internal dose commitment resulting from inhalation; and (3) total whole-body doses. External doses from airborne material are calculated using semi-infinite and finite cloud approximations. At each stage in model execution, the appropriate approximation is selected after considering the cloud dimensions. Atmospheric processes are represented in MESORAD by a combination of Lagrangian puff and Gaussian plume dispersion models, a source depletion (deposition velocity) dry deposition model, and a wet deposition model using washout coefficients based on precipitation rates
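
    As an illustration of the puff treatment sketched above, the following minimal Python example advects a single Gaussian puff with ground reflection and applies a source-depletion correction for dry deposition. The release data, dispersion coefficients and deposition velocity are assumed values for illustration, not MESORAD internals.

        import numpy as np

        q = 1.0e10      # puff activity (Bq), hypothetical release
        u = 3.0         # wind speed (m/s)
        h = 50.0        # effective release height (m)
        v_d = 0.01      # dry deposition velocity (m/s), assumed
        dt = 60.0       # time step (s)

        def sigmas(x):
            """Crude power-law dispersion coefficients (neutral stability, assumed)."""
            return 0.08 * x**0.9, 0.06 * x**0.85

        def puff_conc(q, dx, dy, sy, sz, h):
            """Ground-level air concentration of a reflected Gaussian puff (Bq/m^3)."""
            return (q / ((2.0 * np.pi)**1.5 * sy * sy * sz)
                    * np.exp(-0.5 * (dx**2 + dy**2) / sy**2)
                    * 2.0 * np.exp(-0.5 * h**2 / sz**2))

        xc = 0.0
        for _ in range(30):
            xc += u * dt                          # advect the puff downwind
            sy, sz = sigmas(xc)
            # Source depletion: remove the activity dry-deposited during this step.
            ground = (2.0 / np.sqrt(2.0 * np.pi) / sz) * np.exp(-0.5 * h**2 / sz**2)
            q *= np.exp(-v_d * ground * dt)

        chi = puff_conc(q, 0.0, 0.0, sy, sz, h)   # concentration below the puff centre
        print(f"depleted activity {q:.3e} Bq, centre air concentration {chi:.3e} Bq/m^3")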

  19. A 1D ion species model for an RF driven negative ion source

    Science.gov (United States)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because there are no primary electrons present, and it is simply composed of an antenna region (driver) and a main plasma discharge region. However, the model does still make use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species, and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and, where possible, compared to existing experimental data from SNIF with varying RF power and source pressure.

  20. Characteristics and Source Apportionment of Marine Aerosols over East China Sea Using a Source-oriented Chemical Transport Model

    Science.gov (United States)

    Kang, M.; Zhang, H.; Fu, P.

    2017-12-01

    Marine aerosols exert a strong influence on global climate change and biogeochemical cycling, as oceans cover more than 70% of the Earth's surface. However, investigations of marine aerosols are relatively limited at present due to the difficulty and inconvenience of sampling marine aerosols as well as their diverse sources. The East China Sea (ECS), lying over the broad shelf of the western North Pacific, is adjacent to the Asian mainland, where continental-scale air pollution can impose a heavy load on the marine atmosphere through long-range atmospheric transport. Thus, the contributions of major sources to marine aerosols need to be identified for policy makers to develop cost-effective control strategies. In this work, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model, which can directly track the contributions from multiple emission sources to marine aerosols, is used to investigate the contributions of power, industry, transportation, residential, biogenic and biomass burning sources to marine aerosols over the ECS in May and June 2014. The model simulations indicate significant spatial and temporal variations of concentrations as well as of the source contributions. This study demonstrates that the Asian continent can greatly affect the marine atmosphere through long-range transport.

  1. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...
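
    As a sketch of the kind of screen this describes, the following fits a logistic regression predicting a diminishing-source (DMSMS) flag from part attributes; the feature names and data are hypothetical placeholders, not the thesis data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 500
        years = rng.uniform(1.0, 30.0, n)        # years the part has been in production
        suppliers = rng.integers(1, 12, n)       # number of qualified suppliers
        mil_spec = rng.integers(0, 2, n)         # military-specification part flag

        # Synthetic ground truth: old parts with few suppliers tend to go obsolete.
        logit = 0.25 * years - 0.8 * suppliers - 0.5 * mil_spec
        y = (1.0 / (1.0 + np.exp(-logit)) > rng.random(n)).astype(int)

        x = np.column_stack([years, suppliers, mil_spec])
        model = LogisticRegression().fit(x, y)
        print("coefficients:", model.coef_.round(2))
        print("P(DMSMS) for a 20-year-old single-source part:",
              model.predict_proba([[20.0, 1, 0]])[0, 1].round(2))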

  2. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    International Nuclear Information System (INIS)

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using EPA's SCREEN model under typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options, to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfC) in order to assess the size of the buffer zone around the stack within which concentrations may exceed this safety level
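
    The following Python sketch shows the kind of screening estimate this record describes: a steady Gaussian plume from a point source whose ground-level centreline concentration is compared against a reference concentration. It is not EPA's SCREEN code; the emission rate, stack height, dispersion coefficients and RfC value are hypothetical.

        import numpy as np

        q = 10.0         # emission rate (g/s), hypothetical
        u = 2.0          # stack-height wind speed (m/s)
        h = 10.0         # effective stack height (m)
        rfc = 5.0e-3     # hypothetical reference concentration (g/m^3)

        def ground_conc(x):
            # Power-law dispersion coefficients of roughly D-stability form (assumed).
            sy = 0.08 * x / np.sqrt(1.0 + 1.0e-4 * x)
            sz = 0.06 * x / np.sqrt(1.0 + 1.5e-3 * x)
            return q / (np.pi * u * sy * sz) * np.exp(-0.5 * h**2 / sz**2)

        for x in (25.0, 50.0, 75.0, 100.0):      # receptors within 100 m of the stack
            chi = ground_conc(x)
            flag = "EXCEEDS RfC" if chi > rfc else "ok"
            print(f"x = {x:5.0f} m   conc = {chi:.3e} g/m^3   {flag}")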

  3. Study of the L–I–H transition with a new dual gas puff imaging system in the EAST superconducting tokamak

    DEFF Research Database (Denmark)

    Xu, G.S.; Shao, L.M.; Liu, S.C.

    2014-01-01

    The intermediate oscillatory phase during the L–H transition, termed the I-phase, is studied in the EAST superconducting tokamak using a newly developed dual gas puff imaging (GPI) system near the L–H transition power threshold. The experimental observations suggest that the oscillatory behaviour...

  4. Puff pastry with low saturated fat contents: The role of fat and dough physical interactions in the development of a layered structure

    NARCIS (Netherlands)

    Renzetti, S.; Harder, R. de; Jurgens, A.

    2015-01-01

    In puff pastry, fat and dough rheological behavior during sheeting control pastry dough development by formation of the layered structure which is essential for product quality. The aim of this work was to unravel the influence of fat and dough physical interactions during sheeting, as affected by

  5. A 28-fold increase in secretory protein synthesis is associated with DNA puff activity in the salivary gland of Bradysia hygida (Diptera, Sciaridae

    Directory of Open Access Journals (Sweden)

    de-Almeida J.C.

    1997-01-01

    Full Text Available When the first group of DNA puffs is active in the salivary gland regions S1 and S3 of Bradysia hygida larvae, there is a large increase in the production and secretion of new salivary proteins, demonstrable by [3H]-Leu incorporation. In the present study, protein separation by SDS-PAGE and detection by fluorography demonstrated that these polypeptides range in molecular mass from about 23 to 100 kDa. Furthermore, these proteins were synthesized mainly in the S1 and S3 salivary gland regions, where the DNA puffs C7, C5, C4 and B10 are conspicuous, while in the S2 region protein synthesis was very low. Others have shown that the extent of amplification of the DNA sequences that code for mRNA in the DNA puffs C4 and B10 was about 22 and 10 times, respectively. The present data for this group of DNA puffs are consistent with the proposition that gene amplification is necessary to provide some cells with additional gene copies for the production of massive amounts of proteins within a short period of time (Spradling AC and Mahowald AP (1980) Proceedings of the National Academy of Sciences, USA, 77: 1096-1100).

  6. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

    Full Text Available Open source software development is a notable option for software companies. In recent years, the many advantages of this type of software have driven its adoption in Iran. National security concerns, international restrictions, software and service costs, and other problems have intensified the importance of using such software. Users and their viewpoints are the critical success factor in software plans, but there has been no appropriate model for the open source software case in Iran. This research therefore develops a model for measuring open source software success in Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, open source community service quality, open source quality, copyright and security.

  7. Power-law thermal model for blackbody sources

    International Nuclear Information System (INIS)

    Del Grande, N.K.

    1979-01-01

    The spectral radiant emittance W_E from a blackbody at a temperature kT, for photons at energies E above the spectral peak (2.82144 kT), varies as (kT)^(E/kT). This power-law temperature dependence, an approximation of Planck's radiation law, may have applications for measuring the emissivity of sources emitting in the soft x-ray region
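
    As a quick numerical check of the quoted approximation, the sketch below compares temperature ratios of the Planck form with the power law at a fixed photon energy, working in units where kT is near 1 (e.g. keV, matching the soft x-ray context); the specific numbers are illustrative.

        import numpy as np

        def planck(e, kt):
            """Planck spectral emittance in photon energy (arbitrary units)."""
            return e**3 / np.expm1(e / kt)

        def power_law(e, kt):
            """Power-law approximation W_E ~ (kT)^(E/kT) (arbitrary units)."""
            return kt**(e / kt)

        e = 4.0     # photon energy; e/kT stays above the spectral peak 2.82144*kT
        for kt1, kt2 in [(0.9, 1.1), (1.0, 1.2)]:
            exact = planck(e, kt2) / planck(e, kt1)
            approx = power_law(e, kt2) / power_law(e, kt1)
            print(f"kT {kt1} -> {kt2}: Planck ratio {exact:.2f}, power-law ratio {approx:.2f}")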

  8. Outer heliospheric radio emissions. II - Foreshock source models

    Science.gov (United States)

    Cairns, Iver H.; Kurth, William S.; Gurnett, Donald A.

    1992-01-01

    Observations of LF radio emissions in the range 2-3 kHz by the Voyager spacecraft during the intervals 1983-1987 and 1989 to the present while at heliocentric distances greater than 11 AU are reported. New analyses of the wave data are presented, and the characteristics of the radiation are reviewed and discussed. Two classes of events are distinguished: transient events with varying starting frequencies that drift upward in frequency and a relatively continuous component that remains near 2 kHz. Evidence for multiple transient sources and for extension of the 2-kHz component above the 2.4-kHz interference signal is presented. The transient emissions are interpreted in terms of radiation generated at multiples of the plasma frequency when solar wind density enhancements enter one or more regions of a foreshock sunward of the inner heliospheric shock. Solar wind density enhancements by factors of 4-10 are observed. Propagation effects, the number of radiation sources, and the time variability, frequency drift, and varying starting frequencies of the transient events are discussed in terms of foreshock sources.

  9. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  10. Examining Daily Electronic Cigarette Puff Topography Among Established and Non-established Cigarette Smokers in their Natural Environment.

    Science.gov (United States)

    Lee, Youn Ok; Nonnemaker, James M; Bradfield, Brian; Hensel, Edward C; Robinson, Risa J

    2017-10-04

    Understanding exposures and potential health effects of e-cigarettes is complex. Users' puffing behavior, or topography, affects the function of e-cigarette devices (e.g., coil temperature) and the composition of their emissions. Users with different topographies are likely exposed to different amounts of any harmful or potentially harmful constituents (HPHCs). In this study, we compare e-cigarette topographies of established cigarette smokers and non-established cigarette smokers. Data measuring e-cigarette topography were collected using a wireless hand-held monitoring device in users' everyday lives over 1 week. Young adult (aged 18-25) participants (N=20) used disposable e-cigarettes with the monitor as they normally would and responded to online surveys. Topography characteristics of established versus non-established cigarette smokers were compared. On average, established cigarette smokers in the sample had larger first puff volumes (130.9 ml vs. 56.0 ml, p < .05) and larger session volumes (vs. 651.7 ml for non-established smokers, p < .05). At marginal significance, they had longer sessions (566.3 s vs. 279.7 s, p = .06) and used e-cigarettes for more sessions per day (5.3 vs. 3.5, p = .14). Established cigarette smokers also used e-cigarettes for longer puff durations (3.3 s vs. 1.8 s, p < .05) and larger puff volumes (vs. 54.7 ml for non-established smokers, p < .05). At marginal significance, they had longer puff intervals (38.1 s vs. 21.7 s, p = .05). Our results demonstrate that topography characteristics differ by level of current cigarette smoking. This suggests that exposure to constituents of e-cigarettes depends on user characteristics and that specific topography parameters may be needed for different user populations when assessing e-cigarette health effects. A user's topography affects his or her exposure to HPHCs. As this study demonstrates, user characteristics, such as level of smoking, can influence topography. Thus, it is crucial to understand the topography profiles of different user types to assess the potential for population harm and to identify potentially

  11. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.; Jonsson, Sigurjon; Sudhaus, H.; Baumann, C.

    2012-01-01

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due

  12. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source...... (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...

  13. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    Science.gov (United States)

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    The paper determines the concentrations of 16 metal elements in soil samples collected in the Jiapigou goldmine area on the upper Songhua River. The UNMIX model, recommended by the US EPA, was applied to obtain the source apportionment results, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded Jilin Province soil background values and were clearly enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, with a contribution of 39.1%; source 2 represents the contribution of rock weathering and biological effects, with a contribution of 13.87%; source 3 is a combined source of soil parent material and chemical fertilizer, with a contribution of 23.93%; source 4 represents iron ore mining and transportation sources, with a contribution of 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities, and the Cd, Hg and Pb content distributions.
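
    The decomposition this record relies on can be sketched on a synthetic samples-by-metals matrix as below. EPA's UNMIX algorithm itself is not reimplemented here; non-negative matrix factorization serves as a stand-in for the same "samples = source strengths x source profiles" receptor-model idea, and the data are random placeholders.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        n_samples, n_metals, n_sources = 60, 16, 4
        true_profiles = rng.random((n_sources, n_metals))    # source signatures
        true_strengths = rng.random((n_samples, n_sources))  # per-sample loadings
        x = true_strengths @ true_profiles + 0.01 * rng.random((n_samples, n_metals))

        model = NMF(n_components=n_sources, init="nndsvda", max_iter=1000, random_state=0)
        strengths = model.fit_transform(x)    # estimated source contributions
        profiles = model.components_          # estimated source profiles

        # Percent contribution of each resolved source to the total measured mass.
        mass = (strengths * profiles.sum(axis=1)).sum(axis=0)
        for k, s in enumerate(100.0 * mass / mass.sum(), start=1):
            print(f"source {k}: {s:.1f} % of total metal mass")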

  14. Endangered Butterflies as a Model System for Managing Source Sink Dynamics on Department of Defense Lands

    Science.gov (United States)

    used three species of endangered butterflies as a model system to rigorously investigate the source-sink dynamics of species being managed on military...lands. Butterflies have numerous advantages as models for source-sink dynamics, including rapid generation times and relatively limited dispersal, but...they are subject to the same processes that determine source-sink dynamics of longer-lived, more vagile taxa. 1.2 Technical Approach: For two of our

  15. Challenges for Knowledge Management in the Context of IT Global Sourcing Models Implementation

    OpenAIRE

    Perechuda , Kazimierz; Sobińska , Małgorzata

    2014-01-01

    Part 2: Models and Functioning of Knowledge Management; International audience; The article gives a literature overview of the current challenges connected with the implementation of the newest IT sourcing models. In the dynamic environment, organizations are required to build their competitive advantage not only on their own resources, but also on resources commissioned from external providers, accessed through various forms of sourcing, including the sourcing of IT services. This paper pres...

  16. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  17. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  18. Comparison of HYSPLIT-4 model simulations of the ETEX data, using meteorological input data of differing spatial and temporal resolution

    International Nuclear Information System (INIS)

    Hess, G.D.; Mills, G.A.; Draxler, R.R.

    1997-01-01

    Model simulations of air concentrations during ETEX-1 using the HYSPLIT-4 (HYbrid Single-Particle Lagrangian Integrated Trajectories, version 4) code and analysed meteorological data fields provided by ECMWF and the Australian Bureau of Meteorology are presented here. The HYSPLIT-4 model is a complete system for computations ranging from simple trajectories to complex dispersion and deposition simulations, using either puff or particle approaches. A mixed dispersion algorithm is employed in this study: puffs in the horizontal and particles in the vertical
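
    A minimal sketch of that mixed idea: each computational element spreads as a Gaussian puff in the horizontal while its vertical position follows a random walk with ground reflection. The growth rate, vertical diffusivity and release parameters are assumptions for illustration, not HYSPLIT-4 internals.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000                        # number of puff-particles
        u, dt, nsteps = 5.0, 60.0, 20   # wind speed (m/s), step (s), steps
        kz = 5.0                        # vertical eddy diffusivity (m^2/s), assumed
        q = 1.0                         # mass carried by each element (arbitrary units)

        x = np.zeros(n)                 # downwind positions
        z = np.full(n, 100.0)           # release height 100 m
        sh = np.full(n, 10.0)           # horizontal puff sigma (m), initial size

        for _ in range(nsteps):
            x += u * dt                                      # advection
            sh += 0.5 * dt                                   # horizontal puff growth
            z += rng.normal(0.0, np.sqrt(2.0 * kz * dt), n)  # vertical random walk
            z = np.abs(z)                                    # reflect at the ground

        # Concentration in a 0-20 m surface layer at a receptor on the plume axis.
        x_r, dz = x.mean(), 20.0
        low = z < dz
        c = np.sum(q * np.exp(-0.5 * (x[low] - x_r)**2 / sh[low]**2)
                   / (2.0 * np.pi * sh[low]**2) / dz)
        print(f"receptor concentration: {c:.3e} mass units per m^3")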

  19. Modeling of magnetically enhanced capacitively coupled plasma sources: Ar discharges

    International Nuclear Information System (INIS)

    Kushner, Mark J.

    2003-01-01

    Magnetically enhanced capacitively coupled plasma sources use transverse static magnetic fields to modify the performance of low pressure radio frequency discharges. Magnetically enhanced reactive ion etching (MERIE) sources typically use magnetic fields of tens to hundreds of Gauss parallel to the substrate to increase the plasma density at a given pressure or to lower the operating pressure. In this article results from a two-dimensional hybrid-fluid computational investigation of MERIE reactors with plasmas sustained in argon are discussed for an industrially relevant geometry. The reduction in electron cross field mobility as the magnetic field increases produces a systematic decrease in the dc bias (becoming more positive). This decrease is accompanied by a decrease in the energy and increase in angular spread of the ion flux to the substrate. Similar trends are observed when decreasing pressure for a constant magnetic field. Although for constant power the magnitudes of ion fluxes to the substrate increase with moderate magnetic fields, the fluxes decreased at larger magnetic fields. These trends are due, in part, to a reduction in the contributions of more efficient multistep ionization

  20. The x-ray emission spectra of multicharged xenon ions in a gas puff laser-produced plasma

    Energy Technology Data Exchange (ETDEWEB)

    Skobelev, I.Yu.; Dyakin, V.M.; Faenov, A.Ya. [Multicharged Ion Spectra Data Center, VNIIFTRI, Mendeleevo (Russian Federation); Bartnik, A.; Fiedorowicz, H.; Jarocki, R.; Kostecki, J.; Szczurek, M. [Institute of Optoelectronics, Military University of Technology, Warsaw (Poland); Biemont, E. [Institut de Physique Nucleaire Experimentale, Universite de Liege, Liege (Belgium); Astrophysique et Spectroscopie, Universite de Mons-Hainaut, Mons (Belgium); Quinet, P. [Astrophysique et Spectroscopie, Universite de Mons-Hainaut, Mons (Belgium); Nilsen, J. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Behar, E.; Doron, R.; Mandelbaum, P.; Schwob, J.L. [Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem (Israel)

    1999-01-14

    Emission spectra of multicharged xenon ions produced by laser irradiation of a gas puff are observed with high spectral resolution in the 8.5-9.5 and 17-19 Å wavelength ranges. Three different theoretical methods are employed to obtain 3l-n'l' (n' = 4 to 10) wavelengths and Einstein coefficients for Ni-like Xe²⁶⁺. For the 3d-4p transitions, very good agreement is found between the experimental wavelengths and the various theoretical wavelengths. These accurate energy level measurements can be useful for studying the Ni-like xenon x-ray laser scheme. On the other hand, several intense spectral lines could not be identified as 3l-n'l' lines of Ni-like xenon, despite the very good agreement between the wavelengths and Einstein coefficients calculated for these transitions using the three different methods. (author)

  1. Mathematical models of thermohydraulic disturbance sources in the NPP circuits

    International Nuclear Information System (INIS)

    Proskuryakov, K.N.

    1999-01-01

    Methods and means of diagnostics of equipment and processes at NPPs, which make it possible to substantially increase the safety and economic efficiency of nuclear power plant operation, are considered. The development of mathematical models describing the occurrence and propagation of disturbances is presented

  2. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... This thesis draws on available data from the electronics integrated circuit industry to attempt to assess whether statistical modeling offers a viable method for predicting the presence of DMSMS...

  3. Computer modelling of radioactive source terms at a tokamak reactor

    International Nuclear Information System (INIS)

    Meide, A.

    1984-12-01

    The Monte Carlo code MCNP has been used to create a simple three-dimensional mathematical model representing 1/12 of a tokamak fusion reactor for studies of the exposure rate level from neutrons as well as gamma rays from the activated materials, and for later estimates of the consequences to the environment, public, and operating personnel. The model is based on the recommendations from the NET/INTOR workshops. (author)

  4. Considering a point-source in a regional air pollution model; Prise en compte d'une source ponctuelle dans un modele regional de pollution atmospherique

    Energy Technology Data Exchange (ETDEWEB)

    Lipphardt, M.

    1997-06-19

    This thesis deals with the development and validation of a point-source plume model, with the aim of refining the representation of intensive point-source emissions in regional-scale air quality models. The plume is modelled at four levels of increasing complexity, from a modified Gaussian plume model to the Freiberg and Lusis ring model. Plume elevation is determined by Netterville's plume rise model, using turbulence and atmospheric stability parameters. A model for the effect of fine-scale turbulence on the mean concentrations in the plume is developed and integrated in the ring model. A comparison between results with and without micro-mixing shows the importance of this effect in a chemically reactive plume. The plume model is integrated into the Eulerian transport/chemistry model AIRQUAL, using an interface between AIRQUAL and the sub-model, and interactions between the two scales are described. A simulation of an air pollution episode over Paris is carried out, showing that the use of such a sub-scale model improves the accuracy of the air quality model

  5. Quantitative assessment of corneal vibrations during intraocular pressure measurement with the air-puff method in patients with keratoconus.

    Science.gov (United States)

    Koprowski, Robert; Ambrósio, Renato

    2015-11-01

    One of the current methods for measuring intraocular pressure is the air-puff method. A tonometer which uses this method is the Corvis device. With its ultra-high-speed (UHS) Scheimpflug camera, it is also possible to observe corneal deformation during measurement. The use of modern image analysis and processing methods allows for the analysis of higher harmonics of corneal deflection above 100 Hz. 493 eyes of healthy subjects and 279 eyes of patients with keratoconus were used in the measurements. For each eye, 140 corneal deformation images were recorded during intraocular pressure measurement. Each image was recorded every 230 µs and had a resolution of 200 × 576 pixels. A new, original algorithm for image analysis and processing has been proposed. It makes it possible to separate the eyeball reaction as well as low-frequency and high-frequency corneal deformations from the eye's response to an air puff. Furthermore, a method for the classification of healthy subjects and patients with keratoconus based on decision trees has been proposed. The obtained results confirm the possibility of distinguishing between patients with keratoconus and healthy subjects. The features used in this classification are directly related to corneal vibrations. They are only available in the proposed software and provide a specificity of 98%, sensitivity of 85%, and accuracy of 92%. This confirms the usefulness of the proposed method in this type of classification that uses corneal vibrations during intraocular pressure measurement with the Corvis tonometer. With the newly proposed algorithm for image analysis and processing allowing for the separation of individual features from a corneal deformation image, it is possible to: automatically measure corneal vibrations at a few characteristic points of the cornea, obtain fully repeatable measurements of vibrations for the same registered sequence of images, and measure vibration parameters despite large inter-individual variability in patients. Copyright © 2015 Elsevier

  6. Modeling the NPE with finite sources and empirical Green`s functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  7. Information contraction and extraction by multivariate autoregressive (MAR) modelling. Pt. 2. Dominant noise sources in BWRS

    International Nuclear Information System (INIS)

    Morishima, N.

    1996-01-01

    The multivariate autoregressive (MAR) modeling of a vector noise process is discussed in terms of the estimation of dominant noise sources in BWRs. The discussion is based on a physical approach: a transfer function model of BWR core dynamics is utilized in developing a noise model, and a set of input-output relations between three system variables and twelve different noise sources is obtained. By least-squares fitting of the theoretical PSD of the neutron noise to an experimental one, four kinds of dominant noise sources are selected. It is shown that some of the dominant noise sources consist of two or more different noise sources and have the spectral properties of being coloured and correlated with each other. By diagonalizing the PSD matrix for the dominant noise sources, an MAR expression for the vector noise process may be obtained as a response to the diagonal elements (i.e. residual noises) being white and mutually independent. (Author)
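
    As a sketch of the MAR machinery itself, the following fits a multivariate autoregressive model to a synthetic three-signal record and inspects the residual (driving-noise) covariance; the dynamics matrix is hypothetical and statsmodels provides the fit, standing in for the physical BWR noise model described above.

        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(2)
        n = 4000
        e = rng.normal(size=(n, 3))            # white, mutually independent driving noise
        a = np.array([[0.5, 0.1, 0.0],         # hypothetical stable MAR(1) dynamics
                      [0.0, 0.4, 0.2],
                      [0.1, 0.0, 0.3]])
        y = np.zeros((n, 3))
        for t in range(1, n):
            y[t] = a @ y[t - 1] + e[t]

        fit = VAR(y).fit(maxlags=5, ic="aic")  # model order chosen by AIC
        print("selected order:", fit.k_ar)
        print("residual covariance (noise-source strengths):")
        print(fit.sigma_u.round(3))            # should be close to the identity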

  8. Source term model evaluations for the low-level waste facility performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Yim, M.S.; Su, S.I. [North Carolina State Univ., Raleigh, NC (United States)

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  9. Asteroid models from photometry and complementary data sources

    Energy Technology Data Exchange (ETDEWEB)

    Kaasalainen, Mikko [Department of Mathematics, Tampere University of Technology (Finland)

    2016-05-10

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  10. Asteroid models from photometry and complementary data sources

    International Nuclear Information System (INIS)

    Kaasalainen, Mikko

    2016-01-01

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  11. Modelling RF-plasma interaction in ECR ion sources

    Directory of Open Access Journals (Sweden)

    Mascali David

    2017-01-01

    Full Text Available This paper describes three-dimensional self-consistent numerical simulations of wave propagation in magnetoplasmas of electron cyclotron resonance ion sources (ECRIS). Numerical results can give useful information on the distribution of the absorbed RF power and/or the efficiency of RF heating, especially in the case of alternative schemes such as mode-conversion based heating scenarios. The ray-tracing approximation is valid only for wavelengths small compared to the system scale lengths; as a consequence, full-wave solutions of the Maxwell-Vlasov equations must be taken into account in compact and strongly inhomogeneous ECRIS plasmas. This contribution presents a multi-scale temporal domain approach for simultaneously including RF dynamics and plasma kinetics in a "cold-plasma" description, and some perspectives for a "hot-plasma" implementation. The presented results relate to the attempt to establish a mode-conversion scenario of OXB type in double-frequency heating inside an ECRIS testbench.

  12. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed

  13. Introduction of Two Novel Stiffness Parameters and Interpretation of Air Puff-Induced Biomechanical Deformation Parameters With a Dynamic Scheimpflug Analyzer.

    Science.gov (United States)

    Roberts, Cynthia J; Mahmoud, Ashraf M; Bons, Jeffrey P; Hossain, Arif; Elsheikh, Ahmed; Vinciguerra, Riccardo; Vinciguerra, Paolo; Ambrósio, Renato

    2017-04-01

    To investigate two new stiffness parameters and their relationships with the dynamic corneal response (DCR) parameters and compare normal and keratoconic eyes. Stiffness parameters are defined as Resultant Pressure at inward applanation (A1) divided by corneal displacement. Stiffness parameter A1 uses displacement between the undeformed cornea and A1 and stiffness parameter highest concavity (HC) uses displacement from A1 to maximum deflection during HC. The spatial and temporal profiles of the Corvis ST (Oculus Optikgeräte, Wetzlar, Germany) air puff were characterized using hot wire anemometry. An adjusted air pressure impinging on the cornea at A1 (adjAP1) and an algorithm to biomechanically correct intraocular pressure based on finite element modelling (bIOP) were used for Resultant Pressure calculation (adjAP1 - bIOP). Linear regression analyses between DCR parameters and stiffness parameters were performed on a retrospective dataset of 180 keratoconic eyes and 482 normal eyes. DCR parameters from a subset of 158 eyes of 158 patients in each group were matched for bIOP and compared using t tests. A P value of less than .05 was considered statistically significant. All DCR parameters evaluated showed significant differences between normal and keratoconic eyes, except peak distance. Keratoconic eyes had lower stiffness parameter values, thinner pachymetry, shorter applanation lengths, greater absolute values of applanation velocities, earlier A1 times and later second applanation times, greater HC deformation amplitudes and HC deflection amplitudes, and lower HC radius of concave curvature (greater concave curvature). Most DCR parameters showed a significant relationship with both stiffness parameters in both groups. Keratoconic eyes demonstrated less resistance to deformation than normal eyes with similar IOP. The stiffness parameters may be useful in future biomechanical studies as potential biomarkers. [J Refract Surg. 2017;33(4):266-273.]. Copyright 2017
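
    The stiffness-parameter definition reduces to simple arithmetic, as in this sketch with illustrative values rather than patient data:

        adj_ap1 = 38.0    # adjusted air pressure on the cornea at A1 (mmHg), assumed
        b_iop = 15.0      # biomechanically corrected IOP (mmHg), assumed
        defl_a1 = 0.22    # displacement from undeformed cornea to A1 (mm), assumed
        defl_hc = 0.85    # displacement from A1 to maximum deflection (mm), assumed

        resultant = adj_ap1 - b_iop        # Resultant Pressure at A1
        sp_a1 = resultant / defl_a1        # stiffness parameter A1 (mmHg/mm)
        sp_hc = resultant / defl_hc        # stiffness parameter HC (mmHg/mm)
        print(f"SP-A1 = {sp_a1:.1f} mmHg/mm, SP-HC = {sp_hc:.1f} mmHg/mm")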

  14. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
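
    A minimal perturbed-observation ensemble Kalman filter update for the four uncertain inputs named above might look as follows; the forward operator mapping parameters to monitor readings is a hypothetical stand-in, not the authors' Lagrangian puff model or their modified filter.

        import numpy as np

        rng = np.random.default_rng(3)
        n_ens, n_obs = 200, 8

        def forward(theta):
            """Hypothetical operator: (rate, rise, speed, direction) -> monitors."""
            q, dh, ws, wd = theta
            dist = np.linspace(1.0, 2.0, n_obs)      # stand-in monitor geometry
            return q * np.exp(-dist * dh / 100.0) * (1.0 + 0.1 * np.cos(np.radians(wd))) / ws

        truth = np.array([5.0, 80.0, 3.0, 30.0])
        obs_err = 0.05
        obs = forward(truth) + rng.normal(0.0, obs_err, n_obs)   # noisy monitor data
        r = obs_err**2 * np.eye(n_obs)

        # Prior ensemble: poorly known source term and meteorology.
        ens = np.column_stack([rng.uniform(0.5, 10.0, n_ens),    # release rate
                               rng.uniform(20.0, 150.0, n_ens),  # plume rise height
                               rng.uniform(1.0, 6.0, n_ens),     # wind speed
                               rng.uniform(-60.0, 90.0, n_ens)]) # wind direction

        hx = np.array([forward(m) for m in ens])
        xa = ens - ens.mean(axis=0)
        ha = hx - hx.mean(axis=0)
        gain = (xa.T @ ha / (n_ens - 1)) @ np.linalg.inv(ha.T @ ha / (n_ens - 1) + r)

        perturbed = obs + rng.normal(0.0, obs_err, (n_ens, n_obs))
        ens += (perturbed - hx) @ gain.T             # update every ensemble member
        print("posterior mean:", ens.mean(axis=0).round(1), " truth:", truth)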

  15. Current-voltage model of LED light sources

    DEFF Research Database (Denmark)

    Beczkowski, Szymon; Munk-Nielsen, Stig

    2012-01-01

    Amplitude modulation is rarely used for dimming light-emitting diodes in polychromatic luminaires due to the large color shifts caused by the varying magnitude of the LED driving current and the nonlinear relationship between the intensity of a diode and its driving current. Current-voltage empirical model of light

  16. On the sources of technological change: What do the models assume?

    International Nuclear Information System (INIS)

    Clarke, Leon; Weyant, John; Edmonds, Jae

    2008-01-01

    It is widely acknowledged that technological change can substantially reduce the costs of stabilizing atmospheric concentrations of greenhouse gases. This paper discusses the sources of technological change and the representations of these sources in formal models of energy and the environment. The paper distinguishes between three major sources of technological change (R&D, learning-by-doing, and spillovers) and introduces a conceptual framework for linking modeling approaches to assumptions about these real-world sources. A selective review of modeling approaches, including those employing exogenous technological change, suggests that most formal models have meaningful real-world interpretations that focus on a subset of possible sources of technological change while downplaying the roles of others

  17. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of the Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, load current, impedance network...... responses are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of MPC are presented, which show the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...

  18. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are more regularly combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged for simple planar finite

  19. Spatial and frequency domain ring source models for the single muscle fiber action potential

    DEFF Research Database (Denmark)

    Henneberg, Kaj-åge; R., Plonsey

    1994-01-01

    In the paper, single-fibre models for the extracellular action potential are developed that allow the potential to be evaluated at an arbitrary field point in the extracellular space. Fourier-domain models are restricted in that they evaluate potentials at equidistant points along a line...... parallel to the fibre axis. Consequently, they cannot easily evaluate the potential at the boundary nodes of a boundary-element electrode model. The Fourier-domain models employ axial-symmetric ring source models, and thereby provide higher accuracy than the line source model, where the source is lumped...... including anisotropy show that the spatial models require extreme care in the integration procedure owing to the singularity in the weighting functions. With adequate sampling, the spatial models can evaluate extracellular potentials with high accuracy.
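
    The contrast between a lumped line source and a ring source can be sketched numerically for a single transverse source plane in a homogeneous isotropic medium; the conductivity, fibre radius and source strength below are illustrative, and the quadrature near the fibre surface hints at the singularity issue noted above.

        import numpy as np
        from scipy.integrate import quad

        sigma_e = 0.33   # extracellular conductivity (S/m), assumed
        a = 25e-6        # fibre radius (m)
        i_s = 1e-9       # total source current in the ring (A)

        def ring_potential(rho, z):
            """Potential of a uniform ring source of radius a centred on the axis."""
            def integrand(theta):
                d = np.sqrt((rho - a * np.cos(theta))**2
                            + (a * np.sin(theta))**2 + z**2)
                return 1.0 / d
            val, _ = quad(integrand, 0.0, 2.0 * np.pi)
            return i_s * val / (8.0 * np.pi**2 * sigma_e)

        def line_potential(rho, z):
            """Same current lumped onto the fibre axis (line-source approximation)."""
            return i_s / (4.0 * np.pi * sigma_e * np.hypot(rho, z))

        for rho in (30e-6, 100e-6, 1e-3):   # field points moving radially outwards
            print(f"rho = {rho:7.1e} m   ring: {ring_potential(rho, 0.0):.4e} V"
                  f"   line: {line_potential(rho, 0.0):.4e} V")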

  20. Diamond carbon sources: a comparison of carbon isotope models

    International Nuclear Information System (INIS)

    Kirkley, M.B.; Otter, M.L.; Gurney, J.J.; Hill, S.J.

    1990-01-01

    The carbon isotope compositions of approximately 500 inclusion-bearing diamonds have been determined in the past decade. 98 percent of these diamonds readily fall into two broad categories on the basis of their inclusion mineralogies and compositions. These categories are peridotitic diamonds and eclogitic diamonds. Most peridotitic diamonds have δ13C values between -10 and -1 permil, whereas eclogitic diamonds have δ13C values between -28 and +2 permil. Peridotitic diamonds may represent primordial carbon; however, it is proposed that initially inhomogeneous δ13C values were subsequently homogenized, e.g. during the melting and convection that is postulated to have occurred during the first billion years of the earth's existence. If this is the case, then the wider range of δ13C values exhibited by eclogitic diamonds requires a different explanation. Both the fractionation model and the subduction model can account for the range of observed δ13C values in eclogitic diamonds. 16 refs., 2 figs

  1. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Van Luik, A.E.; Williford, R.E.; Doctor, P.G.; Pacific Northwest Lab., Richland, WA; Roy F. Weston, Inc./Rogers and Assoc. Engineering Corp., Rockville, MD)

    1984-01-01

    Part of a strategy for evaluating the compliance of geologic repositories with Federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  2. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Doctor, P.G.; Williford, R.E.; Van Luik, A.E.

    1984-11-01

    Part of a strategy for evaluating the compliance of geologic repositories with federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  3. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources in conjunction with the defined recurrence relationships can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.

  4. Calculation of media temperatures for nuclear sources in geologic depositories by a finite-length line source superposition model (FLLSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Kays, W M; Hossaini-Hashemi, F [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering]; Busch, J S [Kaiser Engineers, Oakland, CA (USA)]

    1982-02-01

    A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern can be established in the medium at selected points of interest by superposition of the temperature rises calculated for each canister. A mathematical solution of the calculation for each separate source is given in this article, permitting a slow hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
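
    To make the superposition idea concrete, the sketch below (a rough Python illustration, not the FLLSSM code) approximates each canister by a short chain of continuous point sources using the classical infinite-medium conduction solution dT = q/(4*pi*k*r)*erfc(r/(2*sqrt(alpha*t))) and sums the contributions of all canisters at a point of interest. The 5 x 5 layout, canister power, and rock properties are invented, and the decay of heat output with time is ignored.

        import math

        def point_source_dT(q, r, t, k=2.5, alpha=1.1e-6):
            # Continuous point source in an infinite medium (Carslaw & Jaeger):
            # temperature rise (K) at radius r (m) after time t (s) for constant
            # power q (W); k in W/(m K) and alpha in m^2/s are assumed values.
            return q / (4.0 * math.pi * k * r) * math.erfc(r / (2.0 * math.sqrt(alpha * t)))

        def line_source_dT(q, z0, z1, dx, dy, z, t, n=40):
            # Approximate a vertical finite-length line source spanning z0..z1
            # by n point sources and superpose them at offset (dx, dy, z).
            total = 0.0
            for i in range(n):
                zi = z0 + (z1 - z0) * (i + 0.5) / n
                r = math.sqrt(dx * dx + dy * dy + (z - zi) ** 2)
                total += point_source_dT(q / n, r, t)
            return total

        # Hypothetical 5 x 5 canister pattern on a 10 m pitch, 500 W per canister,
        # 4 m canister length; observation point between canisters at mid-height.
        canisters = [(ix * 10.0, iy * 10.0) for ix in range(5) for iy in range(5)]
        t = 30.0 * 3.156e7  # 30 years in seconds
        dT = sum(line_source_dT(500.0, -2.0, 2.0, 25.0 - cx, 25.0 - cy, 0.0, t)
                 for cx, cy in canisters)
        print(f"medium temperature rise after 30 years: {dT:.1f} K")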

  5. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    in the different areas of the brain when noise is present. Results Due to mismatch between the true and experimental forward model, the reconstruction of the sources is determined by the angles between the i'th forward field associated with the true source and the j'th forward field in the experimental forward...... representation of the signal. Conclusions This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty...

  6. Identifying the Source of Misfit in Item Response Theory Models.

    Science.gov (United States)

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

    When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to the bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in two real-data examples and discuss possible extensions of the current research in various directions.

  7. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  8. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    International Nuclear Information System (INIS)

    Song Yu; Dai Wei; Shao Min; Liu Ying; Lu Sihua; Kuster, William; Goldan, Paul

    2008-01-01

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor-analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.
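
    The CMB step described above reduces to solving c = F g for non-negative source contributions g, where the columns of F are source profiles and c is the ambient concentration vector. A minimal sketch follows; the species, profile values, and mixing ratios are invented for illustration and are not taken from the study.

        import numpy as np
        from scipy.optimize import nnls

        # Rows: VOC species; columns: source profiles (mass fractions) for
        # gasoline-related, petrochemical and LPG sources (illustrative numbers).
        F = np.array([
            [0.30, 0.05, 0.10],   # propane
            [0.25, 0.10, 0.55],   # n-butane
            [0.20, 0.40, 0.05],   # ethylene
            [0.15, 0.35, 0.05],   # toluene
            [0.10, 0.10, 0.25],   # i-butane
        ])
        c = np.array([18.0, 35.0, 16.0, 14.0, 13.0])  # ambient levels (ppbC)

        g, resid = nnls(F, c)  # non-negative least-squares source contributions
        for name, gi in zip(["gasoline-related", "petrochemical", "LPG"], g):
            print(f"{name:17s} {gi:6.1f} ppbC ({100 * gi / g.sum():4.1f} %)")
        print("residual norm:", round(resid, 2))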

  9. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Song Yu; Dai Wei [Department of Environmental Sciences, Peking University, Beijing 100871 (China); Shao Min [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China)], E-mail: mshao@pku.edu.cn; Liu Ying; Lu Sihua [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China); Kuster, William; Goldan, Paul [Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, CO 80305 (United States)

    2008-11-15

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor-analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.

  10. Hanford tank residual waste - Contaminant source terms and release models

    International Nuclear Information System (INIS)

    Deutsch, William J.; Cantrell, Kirk J.; Krupka, Kenneth M.; Lindberg, Michael L.; Jeffery Serne, R.

    2011-01-01

    Highlights: → Residual waste from five Hanford spent fuel process storage tanks was evaluated. → Gibbsite is a common mineral in tanks with high Al concentrations. → Non-crystalline U-Na-C-O-P ± H phases are common in the U-rich residual. → Iron oxides/hydroxides have been identified in all residual waste samples. → Uranium release is highly dependent on waste and leachant compositions. - Abstract: Residual waste is expected to be left in 177 underground storage tanks after closure at the US Department of Energy's Hanford Site in Washington State, USA. In the long term, the residual wastes may represent a potential source of contamination to the subsurface environment. Residual materials that cannot be completely removed during the tank closure process are being studied to identify and characterize the solid phases and estimate the release of contaminants from these solids to water that might enter the closed tanks in the future. As of the end of 2009, residual waste from five tanks has been evaluated. Residual wastes from adjacent tanks C-202 and C-203 have high U concentrations of 24 and 59 wt.%, respectively, while residual wastes from nearby tanks C-103 and C-106 have low U concentrations of 0.4 and 0.03 wt.%, respectively. Aluminum concentrations are high (8.2-29.1 wt.%) in some tanks (C-103, C-106, and S-112) and relatively low in others; leach tests used a Ca(OH)2-saturated solution or a CaCO3-saturated water. Uranium release concentrations are highly dependent on waste and leachant compositions, with dissolved U concentrations one or two orders of magnitude higher in the tests with high-U residual wastes, and also higher when leached with the CaCO3-saturated solution than with the Ca(OH)2-saturated solution. Technetium leachability is not as strongly dependent on the concentration of Tc in the waste, and it appears to be slightly more leachable by the Ca(OH)2-saturated solution than by the CaCO3-saturated solution. In general, Tc is much less leachable (<10 wt.% of the

  11. Analytic sensing for multi-layer spherical models with application to EEG source imaging

    OpenAIRE

    Kandaswamy, Djano; Blu, Thierry; Van De Ville, Dimitri

    2013-01-01

    Source imaging maps back boundary measurements to underlying generators within the domain; e. g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp such as in electroencephalography (EEG). Fitting such a parametric source model is non-linear in the positions of the sources and renewed interest in mathematical imaging has led to several promising approaches. One important step in these methods is the application of a sensing principle that ...

  12. Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling

    International Nuclear Information System (INIS)

    Qiang, Ji; Pogorelov, Ilya v.; Ryne, Robert D.

    2007-01-01

    Large-scale modeling on parallel computers is playing an increasingly important role in the design of future light sources. Such modeling provides a means to accurately and efficiently explore issues such as limits to beam brightness, emittance preservation, the growth of instabilities, etc. Recently the IMPACT code suite was enhanced to be applicable to future light source design. Simulations with IMPACT-Z were performed using up to one billion simulation particles for the main linac of a future light source to study the microbunching instability. Combined with the time domain code IMPACT-T, it is now possible to perform large-scale start-to-end linac simulations for future light sources, including the injector, main linac, chicanes, and transfer lines. In this paper we provide an overview of the IMPACT code suite, its key capabilities, and recent enhancements pertinent to accelerator modeling for future linac-based light sources.

  13. Sources of motivation, interpersonal conflict management styles, and leadership effectiveness: a structural model.

    Science.gov (United States)

    Barbuto, John E; Xu, Ye

    2006-02-01

    126 leaders and 624 employees were sampled to test the relationship between sources of motivation and conflict management styles of leaders and how these variables influence the effectiveness of leadership. Five sources of motivation measured by the Motivation Sources Inventory were tested: intrinsic process, instrumental, self-concept external, self-concept internal, and goal internalization. These sources of work motivation were associated with Rahim's modes of interpersonal conflict management (dominating, avoiding, obliging, compromising, and integrating) and with perceived leadership effectiveness. A structural equation model tested leaders' conflict management styles and leadership effectiveness based upon different sources of work motivation. The model explained variance for obliging (65%), dominating (79%), avoiding (76%), and compromising (68%), but explained little variance for integrating (7%). The model explained only 28% of the variance in leader effectiveness.

  14. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows in the heterogeneous case, that using informative priors for computing the posterior......, can lead to favorable results. We focus on modeling the priors using minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) dataset show that our proposed method beats four baselines: For i-vector extraction using an already...... trained matrix, for the short2-short3 task in SRE’08, five out of eight female and four out of eight male common conditions, were improved. For the core-extended task in SRE’10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...

  15. Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.

    Science.gov (United States)

    Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita

    2008-01-01

    This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India) using the principal components analysis/absolute principal components scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of major PAH sources along with their quantitative contributions with respect to individual PAHs. The PCA identified three major sources of PAHs, viz. combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling approach revealed that the combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81% and other miscellaneous sources 0-20% of different PAHs. The contributions of major pyrolytic and petrogenic sources to the total PAHs were 56 and 42%, respectively. Further, the combustion-related sources contribute the major fraction of the carcinogenic PAHs in the study area. A high correlation coefficient (R² > 0.75 for most PAHs) between the measured and predicted concentrations of PAHs supports the applicability of the PCA/APCS receptor modeling approach for estimating source contributions to PAHs in particulates.
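
    The PCA/APCS recipe is compact enough to sketch: standardize the data, retain a few components, subtract the scores of a hypothetical true-zero sample to obtain absolute scores, then regress measured concentrations on those scores so the slopes convert scores into concentration units. The sketch below runs on synthetic data; the factor count and all values are placeholders, not the study's.

        import numpy as np

        def pca_apcs(C, n_factors):
            # C: samples x species concentration matrix.
            mu, sd = C.mean(axis=0), C.std(axis=0, ddof=1)
            Z = (C - mu) / sd                        # standardized data
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)
            load = Vt[:n_factors].T                  # retained PCA loadings
            scores = Z @ load                        # component scores
            zero = (-mu / sd) @ load                 # scores of a true-zero sample
            apcs = scores - zero                     # absolute scores (APCS)
            # Regression of concentrations on APCS; the intercept absorbs
            # mass not explained by the retained factors.
            X = np.column_stack([np.ones(len(C)), apcs])
            B, *_ = np.linalg.lstsq(X, C, rcond=None)
            return apcs, B

        rng = np.random.default_rng(0)
        C = rng.lognormal(1.0, 0.4, size=(150, 8))   # synthetic stand-in data
        apcs, B = pca_apcs(C, n_factors=3)
        # Mean contribution of factor f to species s: mean(apcs[:, f]) * B[1+f, s]
        print(np.round(apcs.mean(axis=0)[:, None] * B[1:], 2))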

  16. Source modelling of train noise - Literature review and some initial measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xuetao; Jonasson, Hans; Holmberg, Kjell

    2000-07-01

    A literature review of source modelling of railway noise is reported. Measurements on a special test rig at Surahammar and on the new railway line between Arlanda and Stockholm City are reported and analyzed. In the analysis the train is modelled as a number of point sources, with or without directivity, and each source is combined with analytical sound propagation theory to predict the sound propagation pattern best fitting the measured data. Wheel/rail rolling noise is considered to be the most important noise source. The rolling noise can be modelled as an array of moving point sources, which have a dipole-like horizontal directivity and some kind of vertical directivity. In general it is necessary to distribute the point sources at several heights. Based on our model analysis, the source heights for the rolling noise should be below the wheel axles, and the most important height is about a quarter of a wheel diameter above the railheads. When train speeds are greater than 250 km/h, aerodynamic noise will become important and even dominant. It may be important only for low-frequency components if the train speed is less than 220 km/h. Little data are available for these cases. It is believed that aerodynamic noise has dipole-like directivity. Its spectrum depends on many factors: speed, railway system, type of train, bogies, wheels, pantograph, presence of barriers and even weather conditions. Other sources such as fans, engine, transmission and carriage bodies are at most second-order noise sources, but for trains with a diesel locomotive engine the engine noise will be dominant if train speeds are less than about 100 km/h. The Nord 2000 comprehensive model for sound propagation outdoors, together with the source model based on the understandings above, can suitably handle railway noise propagation in one-third octave bands, although there are still problems left to be solved.

  17. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions, whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for identification of a single source's amount and position, with relative errors of no more than 5%. For cases with multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist; but with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
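
    A minimal sketch of this scheme follows. The objective function compares observations with the standard analytic solution of the one-dimensional unsteady advection-dispersion equation, C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)^2 / (4*D*t)) * exp(-k*t), and a basic real-coded genetic algorithm with population size 10 (the size highlighted in the abstract) searches for the release mass M and position x0. The river parameters, sensor layout, and GA operators are illustrative assumptions, not the authors' exact BGA.

        import numpy as np

        rng = np.random.default_rng(1)

        def conc(M, x0, x, t, u=0.3, D=5.0, A=50.0, k=0.0):
            # Analytic 1-D solution for an instantaneous release of mass M at x0:
            # velocity u (m/s), dispersion D (m^2/s), cross-section A (m^2), decay k.
            return (M / (A * np.sqrt(4 * np.pi * D * t))
                    * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))
                    * np.exp(-k * t))

        # Synthetic "observations": true source M = 200 kg at x0 = 1200 m,
        # sampled at two stations and two times.
        xs, ts = np.array([2000.0, 3000.0]), np.array([3600.0, 7200.0])
        obs = conc(200.0, 1200.0, xs[:, None], ts[None, :])

        def sse(p):
            return np.sum((conc(p[0], p[1], xs[:, None], ts[None, :]) - obs) ** 2)

        lo, hi = np.array([0.0, 0.0]), np.array([1000.0, 5000.0])
        pop = rng.uniform(lo, hi, size=(10, 2))
        for gen in range(200):
            order = np.argsort([sse(p) for p in pop])
            elite = pop[order[:4]]                     # truncation selection
            kids = []
            while len(kids) < len(pop) - len(elite):
                a, b = elite[rng.integers(4)], elite[rng.integers(4)]
                w = rng.uniform(-0.25, 1.25)           # blend crossover
                child = w * a + (1 - w) * b + rng.normal(0, 0.01) * (hi - lo)
                kids.append(np.clip(child, lo, hi))    # Gaussian mutation, clipped
            pop = np.vstack([elite, np.array(kids)])
        print("estimated (M, x0):", pop[np.argmin([sse(p) for p in pop])])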

  18. Modeling and analysis of a transcritical rankine power cycle with a low grade heat source

    DEFF Research Database (Denmark)

    Nguyen, Chan; Veje, Christian

    efficiency, exergetic efficiency and specific net power output. A generic cycle configuration has been used for analysis of a geothermal energy heat source. This model has been validated against similar calculations using industrial waste heat as the energy source. Calculations are done with fixed...

  19. Free Open Source Software: Social Phenomenon, New Management, New Business Models

    Directory of Open Access Journals (Sweden)

    Žilvinas Jančoras

    2011-08-01

    Full Text Available In the paper, assumptions behind the existence, development, financing and competition models of free open source software are presented. Free software is examined as a social phenomenon and open source software as an environment for technological and managerial innovation. The social and business interaction processes are analyzed. Article in Lithuanian.

  20. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.

  1. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in this sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential for PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx), driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid
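
    The difference between the two methods can be shown with a toy surrogate standing in for a full chemistry-transport model: source apportionment zeroes out an emission group and differences the result, while source sensitivity scales a local derivative. Because the response is nonlinear, the two numbers need not agree, which is exactly why the abstract treats them as complementary. Every number in this sketch is invented.

        import numpy as np

        def pm25(e):
            # Toy nonlinear PM2.5 response to two emission groups (ug/m3);
            # a stand-in for CAMx/PSAT or GEOS-Chem, for illustration only.
            so2, nh3 = e
            return 10.0 + 0.8 * so2 * nh3 / (1.0 + 0.1 * so2 + 0.2 * nh3)

        base = np.array([5.0, 3.0])
        for i, name in enumerate(["SO2", "NH3"]):
            off = base.copy(); off[i] = 0.0
            print(f"SA {name}: {pm25(base) - pm25(off):6.2f}")   # zero-out contribution
        eps = 1e-4
        for i, name in enumerate(["SO2", "NH3"]):
            up = base.copy(); up[i] += eps
            grad = (pm25(up) - pm25(base)) / eps
            print(f"SS {name}: {grad * base[i]:6.2f}")           # first-order sensitivity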

  2. Fecal indicator organism modeling and microbial source tracking in environmental waters: Chapter 3.4.6

    Science.gov (United States)

    Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.

    2016-01-01

    Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have been focused on fecal indicator organisms (FIO), which act as a surrogate for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to source and to predict future events. The use of these two types of models require different levels of expertise and input; process-based models rely on theoretical physical constructs to explain present conditions and biological distribution while data-based, statistical models use extant paired data to do the same. The selection of the appropriate model and interpretation of results is critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking efforts; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models and molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination in order to protect human health better.

  3. Modelling Nd-isotopes with a coarse resolution ocean circulation model: Sensitivities to model parameters and source/sink distributions

    International Nuclear Information System (INIS)

    Rempfer, Johannes; Stocker, Thomas F.; Joos, Fortunat; Dutay, Jean-Claude; Siddall, Mark

    2011-01-01

    The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleo-oceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes 143Nd and 144Nd using the Bern3D model, a low resolution ocean model. The high computational efficiency of the Bern3D model in conjunction with our comprehensive approach allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations and εNd in good agreement with observations. εNd co-varies with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain threshold, its magnitude affects εNd only to a small extent. On the other hand, the parametrisation of the reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and its isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux.
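
    For readers unfamiliar with the notation: εNd expresses a 143Nd/144Nd ratio relative to CHUR (0.512638) in parts per 10^4, and conservative mixing of water masses weights end-member ratios by their Nd concentrations. A small sketch with illustrative end-member values (not Bern3D output):

        CHUR = 0.512638  # present-day 143Nd/144Nd of CHUR

        def eps_nd(ratio):
            # Convert a 143Nd/144Nd ratio to epsilon-Nd units.
            return (ratio / CHUR - 1.0) * 1.0e4

        def mix(c1, r1, f1, c2, r2):
            # Conservative two-end-member mixing: concentrations c1, c2 (pmol/kg),
            # ratios r1, r2, and mass fraction f1 of end member 1.
            f2 = 1.0 - f1
            c = f1 * c1 + f2 * c2
            r = (f1 * c1 * r1 + f2 * c2 * r2) / c  # concentration-weighted ratio
            return c, eps_nd(r)

        # North-Atlantic-like (eps ~ -13.5) vs Pacific-like (eps ~ -4) end members
        nadw = CHUR * (1.0 - 13.5e-4)
        pac = CHUR * (1.0 - 4.0e-4)
        print(mix(18.0, nadw, 0.7, 25.0, pac))  # mixture concentration and eps-Nd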

  4. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.

  5. An incentive-based source separation model for sustainable municipal solid waste management in China.

    Science.gov (United States)

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne⁻¹ (2.4 Euros tonne⁻¹), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.

  6. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  7. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
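
    One standard way to combine a noisy direct survey estimate with a Big Data proxy is an area-level composite estimator of the Fay-Herriot type, which shrinks each direct estimate toward a regression on the covariate. A minimal sketch with synthetic data and the model variance treated as known follows; the article surveys this family of methods rather than prescribing this exact estimator.

        import numpy as np

        rng = np.random.default_rng(3)
        m = 8
        x = rng.uniform(0, 1, m)                        # Big Data covariate per area
        theta = 2.0 + 3.0 * x + rng.normal(0, 0.3, m)   # true area means
        D = rng.uniform(0.2, 1.0, m) ** 2               # known sampling variances
        y = theta + rng.normal(0, np.sqrt(D))           # direct survey estimates

        X = np.column_stack([np.ones(m), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # synthetic (regression) part
        sigma_v2 = 0.3 ** 2                             # model variance (assumed known)
        gamma = sigma_v2 / (sigma_v2 + D)               # shrinkage weights
        eblup = gamma * y + (1 - gamma) * (X @ beta)    # composite small-area estimate
        print(np.round(eblup, 2))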

  8. Major models and data sources for residential and commercial sector energy conservation analysis. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-09-01

    Major models and data sources are reviewed that can be used for energy-conservation analysis in the residential and commercial sectors to provide an introduction to the information that can or is available to DOE in order to further its efforts in analyzing and quantifying their policy and program requirements. Models and data sources examined in the residential sector are: ORNL Residential Energy Model; BECOM; NEPOOL; MATH/CHRDS; NIECS; Energy Consumption Data Base: Household Sector; Patterns of Energy Use by Electrical Appliances Data Base; Annual Housing Survey; 1970 Census of Housing; AIA Research Corporation Data Base; RECS; Solar Market Development Model; and ORNL Buildings Energy Use Data Book. Models and data sources examined in the commercial sector are: ORNL Commercial Sector Model of Energy Demand; BECOM; NEPOOL; Energy Consumption Data Base: Commercial Sector; F.W. Dodge Data Base; NFIB Energy Report for Small Businesses; ADL Commercial Sector Energy Use Data Base; AIA Research Corporation Data Base; Nonresidential Buildings Surveys of Energy Consumption; General Electric Co: Commercial Sector Data Base; The BOMA Commercial Sector Data Base; The Tishman-Syska and Hennessy Data Base; The NEMA Commercial Sector Data Base; ORNL Buildings Energy Use Data Book; and Solar Market Development Model. Purpose; basis for model structure; policy variables and parameters; level of regional, sectoral, and fuels detail; outputs; input requirements; sources of data; computer accessibility and requirements; and a bibliography are provided for each model and data source.

  9. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  10. Development of Realistic Head Models for Electromagnetic Source Imaging of the Human Brain

    National Research Council Canada - National Science Library

    Akalin, Z

    2001-01-01

    In this work, a methodology is developed to solve the forward problem of electromagnetic source imaging using realistic head models. For this purpose, first, segmentation of the 3-dimensional MR head

  11. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving

  12. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    Science.gov (United States)

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.
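
    The elastic-deformation source most commonly inverted in this setting is the Mogi point pressure source, whose surface displacements in an elastic half-space have a closed form: uz = (1 - nu) dV d / (pi R^3) and ur = (1 - nu) dV r / (pi R^3), with R^2 = d^2 + r^2. A short forward-model sketch, with depth and volume change chosen purely for illustration (deflation corresponds to negative dV):

        import numpy as np

        def mogi(r, depth, dV, nu=0.25):
            # Surface displacements (m) of a Mogi point source at the given depth
            # (m) with volume change dV (m^3), at radial distances r (m).
            R3 = (depth ** 2 + r ** 2) ** 1.5
            uz = (1 - nu) / np.pi * dV * depth / R3   # vertical (uplift > 0)
            ur = (1 - nu) / np.pi * dV * r / R3       # radial
            return ur, uz

        r = np.linspace(0.0, 8000.0, 5)
        ur, uz = mogi(r, depth=2300.0, dV=-1.0e6)     # deflating source
        print(np.round(uz * 1000.0, 1), "mm vertical")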

  13. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  14. Added-value joint source modelling of seismic and geodetic data

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at larger depths. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited amount of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inferences of the source

  15. A Method of Auxiliary Sources Approach for Modelling the Impact of Ground Planes on Antenna

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2006-01-01

    The Method of Auxiliary Sources (MAS) is employed to model the impact of finite ground planes on the radiation from antennas. Two different antenna test cases are shown and the calculated results agree well with reference measurements.

  16. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are presented, namely, the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of environmental constraints are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades.

  17. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
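
    The spectrum-derivation step can be caricatured as an unfolding problem: model the central-axis PDD as a non-negative mixture of mono-energetic depth-dose curves and fit the weights. The sketch below substitutes plain non-negative least squares and crude exponential attenuation for the paper's Levenberg-Marquardt fit and measured basis data; all numbers are placeholders.

        import numpy as np
        from scipy.optimize import nnls

        depths = np.linspace(0.5, 15.0, 30)             # depth in water (cm)
        energies = [40.0, 60.0, 80.0, 100.0]            # keV bins (illustrative)
        mu = np.array([0.35, 0.25, 0.19, 0.16])         # effective mu (1/cm), assumed
        basis = np.exp(-np.outer(depths, mu))           # mono-energetic PDD basis

        true_w = np.array([0.15, 0.45, 0.30, 0.10])     # hidden spectral weights
        pdd = basis @ true_w                            # stands in for a measured PDD

        w, _ = nnls(basis, pdd)                         # recover spectral weights
        print("spectral weights:", np.round(w / w.sum(), 3))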

  18. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  19. Experimental research of neutron yield and spectrum from deuterium gas-puff z-pinch on the GIT-12 generator at current above 2 MA

    Science.gov (United States)

    Cherdizov, R. K.; Fursov, F. I.; Kokshenev, V. A.; Kurmaev, N. E.; Labetsky, A. Yu; Ratakhin, N. A.; Shishlov, A. V.; Cikhardt, J.; Cikhardtova, B.; Klir, D.; Kravarik, J.; Kubes, P.; Rezac, K.; Dudkin, G. N.; Garapatsky, A. A.; Padalko, V. N.; Varlachev, V. A.

    2017-05-01

    The Z-pinch experiments with a deuterium gas-puff surrounded by an outer plasma shell were carried out on the GIT-12 generator (Tomsk, Russia) at currents of 2 MA. The plasma shell consisting of hydrogen and carbon ions was formed by 48 plasma guns. The deuterium gas-puff was created by a fast electromagnetic valve. This configuration provides an efficient mode of neutron production in the DD reaction, and the neutron yield reaches values above 10¹² neutrons per shot. Neutron diagnostics included scintillation TOF detectors for determination of the neutron energy spectrum, bubble detectors BD-PND, a silver activation detector, and several activation samples for determination of the neutron yield, analysed with sodium iodide (NaI) and high-purity germanium (HPGe) detectors. Using this neutron diagnostic complex, we measured the total neutron yield and the amount of high-energy neutrons.

  20. Source Release Modeling for the Idaho National Engineering and Environmental Laboratory's Subsurface Disposal Area

    International Nuclear Information System (INIS)

    Becker, B.H.

    2002-01-01

    A source release model was developed to determine the release of contaminants into the shallow subsurface as part of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) evaluation at the Idaho National Engineering and Environmental Laboratory's (INEEL) Subsurface Disposal Area (SDA). The output of the source release model is used as input to the subsurface transport and biotic uptake models. The model separates the waste into areas that match the actual disposal units, which permits quantitative evaluation of each unit's relative contribution to the total risk and evaluation of selective remediation of the disposal units within the SDA.

  1. Receptor modeling studies for the characterization of PM10 pollution sources in Belgrade

    Directory of Open Access Journals (Sweden)

    Mijić Zoran

    2012-01-01

    Full Text Available The objective of this study is to determine the major sources and potential source regions of PM10 over Belgrade, Serbia. The PM10 samples were collected from July 2003 to December 2006 in a very urban area of Belgrade, and concentrations of Al, V, Cr, Mn, Fe, Ni, Cu, Zn, Cd and Pb were analyzed by atomic absorption spectrometry. The analysis of seasonal variations of PM10 mass and some element concentrations showed relatively higher concentrations in winter, which underlines the importance of local emission sources. The Unmix model was used for source apportionment purposes, and four main source profiles (fossil fuel combustion; traffic exhaust/regional transport from industrial centers; traffic-related particles/site-specific sources; and mineral/crustal matter) were identified. Among the resolved factors, fossil fuel combustion was the highest contributor (34%), followed by traffic/regional industry (26%). Conditional probability function (CPF) results identified possible directions of local sources. The potential source contribution function (PSCF) and concentration weighted trajectory (CWT) receptor models were used to identify the spatial source distribution and the contribution of regionally transported aerosols. [Project of the Ministry of Science of the Republic of Serbia, Nos. III43007 and III41011]
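
    The PSCF statistic itself is simple: for each grid cell, the fraction of back-trajectory endpoints that belong to samples exceeding a concentration criterion, with sparsely visited cells masked or down-weighted. A minimal sketch with synthetic trajectories follows; the 75th-percentile criterion and the minimum-endpoint cutoff are common but not universal choices.

        import numpy as np

        def pscf(endpoints, conc, lat_edges, lon_edges, n_min=10):
            # endpoints[i]: (lat array, lon array) of sample i's back trajectory.
            thr = np.percentile(conc, 75)       # "polluted" criterion
            n = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
            m = np.zeros_like(n)
            for (lat, lon), c in zip(endpoints, conc):
                h, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
                n += h
                if c > thr:
                    m += h
            with np.errstate(invalid="ignore", divide="ignore"):
                return np.where(n >= n_min, m / n, 0.0)   # mask sparse cells

        rng = np.random.default_rng(2)
        endpoints = [(rng.uniform(35, 55, 120), rng.uniform(10, 30, 120))
                     for _ in range(50)]
        conc = rng.lognormal(4.0, 0.5, 50)               # synthetic PM10 series
        print(pscf(endpoints, conc, np.arange(35, 56), np.arange(10, 31)).max())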

  2. Modeling generalized interline power-flow controller (GIPFC using 48-pulse voltage source converters

    Directory of Open Access Journals (Sweden)

    Amir Ghorbani

    2018-05-01

    Full Text Available The generalized interline power-flow controller (GIPFC) is one of the voltage-source converter (VSC)-based flexible AC transmission system (FACTS) controllers that can independently regulate the power flow over each transmission line of a multiline system. This paper presents the modeling and performance analysis of a GIPFC based on 48-pulse voltage-source converters. The paper deals with a cascaded multilevel converter model, namely a 48-pulse (three-level) voltage source converter: a harmonic-neutralized, 48-pulse GTO converter. The GIPFC controller is based on d-q orthogonal coordinates. The algorithm is verified using simulations in the MATLAB/Simulink environment. Comparisons between the unified power flow controller (UPFC) and the GIPFC are also included. Keywords: Generalized interline power-flow controller (GIPFC), Voltage source converter (VSC), 48-pulse GTO converter

  3. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    Science.gov (United States)

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources, along with collimated laser illumination, demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors exhibit some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  4. Bacterial contaminants from frozen puff pastry production process and their growth inhibition by antimicrobial substances from lactic acid bacteria.

    Science.gov (United States)

    Rumjuankiat, Kittaporn; Keawsompong, Suttipun; Nitisinprasert, Sunee

    2017-05-01

    Seventy-five bacterial contaminants that persisted despite the cleaning system in three puff pastry production lines (dough forming, layer and filling forming, and shock freezing) were identified using 16S rDNA as belonging to seven genera, Bacillus, Corynebacterium, Dermacoccus, Enterobacter, Klebsiella, Pseudomonas, and Staphylococcus, with detection frequencies of 24.00%, 2.66%, 1.33%, 37.33%, 1.33%, 2.66%, and 30.66%, respectively. Seventeen species were discovered, while only 11 species (Bacillus cereus, B. subtilis, B. pumilus, Corynebacterium striatum, Dermacoccus barathri, Enterobacter asburiae, Staphylococcus kloosii, S. haemolyticus, S. hominis, S. warneri, and S. aureus) were detected at the end of production. Based on its abundance, the most abundant species, E. asburiae, could be used as a biomarker for product quality. A low abundance of the mesophilic pathogen C. striatum, which causes respiratory and nervous-system infections and appeared only at the shock freezing step, is reported here for the first time in a bakery product. Six antimicrobial substances (AMSs) from lactic acid bacteria (FF1-4, FF1-7, PFUR-242, PFUR-255, PP-174, and nisin A) were tested for their inhibition activity against the contaminants. The three most effective were FF1-7, PP-174, and nisin A, exhibiting wide inhibition spectra of 88.00%, 85.33%, and 86.66%, respectively. The potential of a disinfectant solution containing 800 AU/ml of PP-174 and nisin A against the most resistant strains of Enterobacter, Staphylococcus, Bacillus and Klebsiella was determined on artificially contaminated conveyor belt coupons at 0, 4, 8, 12, and 16 hr. The survival levels of the test strains were below 1 log CFU/coupon at 0 hr. The results suggest that a combined solution of PP-174 and nisin A may be beneficial as a sanitizer to inhibit bacterial contaminants in the frozen puff pastry industry.

  5. Experiments with a Gas-Puff-On-Wire-Array Load on the GIT-12 Generator for Al K-shell Radiation Production at Microsecond Implosion Times

    International Nuclear Information System (INIS)

    Shishlov, Alexander V.; Baksht, Rina B.; Chaikovsky, Stanislav A.; Fedunin, Anatoly V.; Fursov, Fedor I.; Kovalchuk, Boris M.; Kokshenev, Vladimir A.; Kurmaev, Nikolai E.; Labetsky, Aleksey Yu.; Oreshkin, Vladimir I.; Rousskikh, Alexander G.; Lassalle, Francis; Bayol, Frederic

    2006-01-01

    Results of the experiments carried out on the GIT-12 generator at a current level of 3.5 MA and Z-pinch implosion times from 700 ns to 1.1 μs are presented. A multi-shell (triple-shell) load configuration with outer gas puffs (neon) and an inner wire array (aluminum) was used in the experiments. In the course of the research, the implosion dynamics of the triple-shell Z-pinch were studied, and the radiation yield in the spectral range of the neon and aluminum K-lines was measured. Optimization of the inner wire array parameters, aimed at obtaining the maximum aluminum K-shell radiation yield, was carried out. As a result of this optimization of the gas-puff-on-wire-array Z-pinch load, an aluminum K-shell radiation yield (hν > 1.55 keV) of up to 4 kJ/cm in a radiation pulse with FWHM less than 30 ns was obtained. Comparison of the experimental results with the results of preliminary 1D RMHD simulations supports the conclusion that at least 2/3 of the generator current is switched from the gas puff to the aluminum wire array. The radiation yield in the spectral range of the neon K-lines (0.92-1.55 keV) increases considerably in the shots with the inner wire array in comparison with the shots carried out with the outer gas puffs only. The radiation yield in the spectral range above 1 keV registered in the experiments reached 10 kJ/cm. The presence of a high portion of neon plasma inside the inner wire array can limit the radiation yield in the spectral range above 1.55 keV.

  6. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies are proposed in the optimization. First, sparse constraints on the parameters of the model are included, limiting the number of simultaneously active sources. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with SRP-PHAT strategies.
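
As context for the SRP power maps that the generative model is fitted to, the sketch below computes a basic SRP-PHAT map: GCC-PHAT cross-correlations for every microphone pair are summed at the lags implied by each candidate source position. This is a generic textbook formulation, not the authors' code; the microphone positions, candidate grid, and sampling rate are caller-supplied assumptions.

```python
import numpy as np

def gcc_phat(sig, ref, n_fft):
    """Generalized cross-correlation with phase transform (PHAT weighting),
    returned with the zero-lag sample at the centre of the array."""
    S = np.fft.rfft(sig, n=n_fft)
    R = np.fft.rfft(ref, n=n_fft)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12       # PHAT: keep phase information only
    cc = np.fft.irfft(cross, n=n_fft)
    return np.concatenate((cc[-(n_fft // 2):], cc[:n_fft // 2 + 1]))

def srp_phat_map(signals, mic_pos, grid, fs, c=343.0):
    """Steered response power over a grid of candidate source positions:
    sum each pair's GCC-PHAT at the lag implied by the candidate point."""
    n_fft = 2 * len(signals[0])
    centre = n_fft // 2
    power = np.zeros(len(grid))
    for i in range(len(signals)):
        for j in range(i + 1, len(signals)):
            cc = gcc_phat(signals[i], signals[j], n_fft)
            # Expected time-difference of arrival for every candidate point
            tdoa = (np.linalg.norm(grid - mic_pos[i], axis=1)
                    - np.linalg.norm(grid - mic_pos[j], axis=1)) / c
            idx = np.clip(centre + np.round(tdoa * fs).astype(int),
                          0, len(cc) - 1)
            power += cc[idx]
    return power   # the argmax over the grid estimates the source position
```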

  7. Locating the Source of Atmospheric Contamination Based on Data From the Kori Field Tracer Experiment

    Directory of Open Access Journals (Sweden)

    Piotr Kopka

    2015-01-01

    Full Text Available Accidental releases of hazardous material into the atmosphere pose high risks to human health and the environment. It would therefore be valuable to develop an emergency response system that can recognize the probable location of the source based only on concentrations of the released substance reported by a network of sensors. We apply a methodology combining Bayesian inference with Sequential Monte Carlo (SMC) methods to the problem of locating the source of an atmospheric contaminant. The input data for this algorithm are the concentrations of a given substance gathered continuously in time. We employ this algorithm to locate a contamination source using data from a field tracer experiment covering the Kori nuclear site, conducted in May 2001. We use the Second-order Closure Integrated PUFF Model (SCIPUFF) of atmospheric dispersion as the forward model to predict concentrations at the sensors' locations. We demonstrate that the source of continuous contamination may be successfully located even in the very complicated, hilly terrain surrounding the Kori nuclear site. (original abstract)
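
To make the approach concrete, the sketch below performs a single importance-sampling step of the kind an SMC sampler iterates: candidate source locations (particles) are weighted by a Gaussian likelihood comparing predicted and observed sensor concentrations. A toy steady-state Gaussian plume stands in for the SCIPUFF forward model, and all numbers (domain size, wind speed, release rate, noise level) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def plume(src_xy, q, sensors, u=2.0, a=0.1):
    """Toy stand-in for the SCIPUFF forward model: ground-level concentration
    of a continuous point source in a uniform wind blowing along +x."""
    dx = np.maximum(sensors[:, 0] - src_xy[0], 1e-6)   # downwind distance
    dy = sensors[:, 1] - src_xy[1]                     # crosswind offset
    sy = a * dx                                        # linear plume growth
    c = q / (2 * np.pi * u * sy ** 2) * np.exp(-dy ** 2 / (2 * sy ** 2))
    return np.where(sensors[:, 0] > src_xy[0], c, 0.0)

# Synthetic observations from a "true" source, plus sensor noise
true_src, q, sigma_obs = np.array([100.0, 250.0]), 5.0, 1e-5
sensors = rng.uniform(0.0, 1000.0, size=(20, 2))
obs = plume(true_src, q, sensors) + rng.normal(0.0, sigma_obs, len(sensors))

# One importance-sampling step: weight prior particles by the likelihood
particles = rng.uniform(0.0, 1000.0, size=(5000, 2))
logw = np.array([-0.5 * np.sum((obs - plume(p, q, sensors)) ** 2) / sigma_obs ** 2
                 for p in particles])
w = np.exp(logw - logw.max())
w /= w.sum()
print("posterior mean source location:", particles.T @ w)
```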

  8. The Analytical Repository Source-Term (AREST) model: Description and documentation

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs

  9. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
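
The estimation step named in this record, monopole source values fitted with an iteratively reweighted least squares algorithm, can be sketched generically. In the sketch below, a simple 1/r potential forms the design matrix and Huber-type weights provide the reweighting; the grid geometry, weighting constants, and data are illustrative assumptions rather than details of the CHAMP-based model.

```python
import numpy as np

def irls_monopoles(obs_pos, obs_val, src_pos, n_iter=10, eps=1e-6):
    """Fit monopole source values by iteratively reweighted least squares
    with Huber-type robust weights (the estimation step only)."""
    # Design matrix: potential of a unit monopole, G[i, j] = 1 / |r_i - s_j|
    G = 1.0 / np.linalg.norm(obs_pos[:, None, :] - src_pos[None, :, :], axis=2)
    w = np.ones(len(obs_val))
    for _ in range(n_iter):
        sw = np.sqrt(w)[:, None]
        q, *_ = np.linalg.lstsq(G * sw, obs_val * sw[:, 0], rcond=None)
        resid = obs_val - G @ q
        scale = 1.4826 * np.median(np.abs(resid)) + eps   # robust scale (MAD)
        w = np.minimum(1.0, 1.345 * scale / np.maximum(np.abs(resid), eps))
    return q

# Toy usage: 200 observations 0.3 units above a flat grid of 50 sources
rng = np.random.default_rng(2)
obs_pos = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.full(200, 0.3)])
src_pos = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
true_q = rng.normal(0.0, 1.0, 50)
G_true = 1.0 / np.linalg.norm(obs_pos[:, None, :] - src_pos[None, :, :], axis=2)
print(np.round(irls_monopoles(obs_pos, G_true @ true_q, src_pos)[:5], 2))
```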

  10. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language and is largely based on a simplified formulation of the very popular and well-recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be managed completely in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that processing times are satisfactory and that defining sources and receptors and retrieving output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD.
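
The core computation such a model performs, the ground-level concentration at a receptor from a polygonal area source, can be approximated by tiling the polygon with point sources and summing Gaussian-plume contributions, as in the hedged sketch below. The plume formula is the standard ground-level point-source expression with full ground reflection; the linear dispersion coefficients, the wind direction (+x), and the L-shaped test polygon are illustrative assumptions, not CAREA's actual formulation.

```python
import numpy as np
from matplotlib.path import Path

def plume_point(q, dx, dy, u=3.0, ay=0.08, az=0.06):
    """Ground-level concentration of a ground-level continuous point source
    (Gaussian plume with full ground reflection, wind along +x)."""
    dx = np.maximum(dx, 1e-6)
    sy, sz = ay * dx, az * dx
    return q / (np.pi * u * sy * sz) * np.exp(-dy ** 2 / (2 * sy ** 2))

def area_source_conc(vertices, q_total, receptor, n=40):
    """Approximate a polygonal area source (concavities allowed via the
    point-in-polygon test) with a grid of equal point sources."""
    xs = np.linspace(vertices[:, 0].min(), vertices[:, 0].max(), n)
    ys = np.linspace(vertices[:, 1].min(), vertices[:, 1].max(), n)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    pts = pts[Path(vertices).contains_points(pts)]
    dx = receptor[0] - pts[:, 0]
    dy = receptor[1] - pts[:, 1]
    c = np.where(dx > 0, plume_point(q_total / len(pts), dx, dy), 0.0)
    return c.sum()

# L-shaped (concave) source, receptor a few hundred metres downwind
poly = np.array([[0, 0], [200, 0], [200, 100], [100, 100], [100, 200], [0, 200]],
                dtype=float)
print(area_source_conc(poly, q_total=1.0, receptor=np.array([700.0, 100.0])))
```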

  11. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of source water total organic carbon (TOC) concentrations can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data, or unimpaired-flow scenarios, makes it difficult to model TOC, and TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate variables (e.g., temperature) and land surface variables (e.g., soil moisture) as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations whose streams have the most anthropogenic influences. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
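
A minimal version of the local polynomial idea, fitting a weighted low-order polynomial to the nearest neighbours of a query point and evaluating it there, is sketched below in Python. The tricube kernel, neighbourhood fraction, and the synthetic TOC relationship are illustrative choices, not the paper's configuration.

```python
import numpy as np

def local_poly_predict(X, y, x0, frac=0.4, degree=1):
    """Local polynomial regression: fit a weighted low-order polynomial to
    the nearest neighbours of x0 (tricube weights) and evaluate it at x0."""
    d = np.linalg.norm(X - x0, axis=1)
    k = max(int(frac * len(X)), degree + 2)
    idx = np.argsort(d)[:k]
    h = d[idx].max() + 1e-12
    w = (1.0 - (d[idx] / h) ** 3) ** 3                 # tricube kernel
    # Design matrix in predictors centred on the query point
    A = np.hstack([np.ones((k, 1))]
                  + [(X[idx] - x0) ** p for p in range(1, degree + 1)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
    return beta[0]                 # the intercept is the fitted value at x0

# Synthetic example: TOC as a nonlinear function of temperature and soil moisture
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(300, 2))   # [temperature, soil moisture]
toc = 3.0 + 2.0 * np.sin(3.0 * X[:, 0]) * X[:, 1] + rng.normal(0, 0.2, 300)
print(local_poly_predict(X, toc, x0=np.array([0.5, 0.5])))
```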

  12. Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area

    Science.gov (United States)

    Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.

    2008-05-01

    Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and the major and trace components of ambient particulate matter (PM). A wide variety of receptor models is currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n = 328 samples, 2002-2005) obtained from an industrial area in NE Spain dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced good overall performance of the three models (r² > 0.83 and slope > 0.91 between modelled and measured PM10 mass), with good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would overcome the limitations of each model by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources and obtain a first quantification of their contributions to the PM mass, with subsequent application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.

  13. Electrical description of a magnetic pole enhanced inductively coupled plasma source: Refinement of the transformer model by reverse electromagnetic modeling

    International Nuclear Information System (INIS)

    Meziani, T.; Colpo, P.; Rossi, F.

    2006-01-01

    The magnetic pole enhanced inductively coupled source (MaPE-ICP) is an innovative low-pressure plasma source that allows for high plasma density and high plasma uniformity, as well as large-area plasma generation. This article presents an electrical characterization of this source, and the experimental measurements are compared to the results obtained after modeling the source by the equivalent circuit of the transformer. In particular, the method applied consists in performing a reverse electromagnetic modeling of the source by providing the measured plasma parameters such as plasma density and electron temperature as an input, and computing the total impedance seen at the primary of the transformer. The impedance results given by the model are compared to the experimental results. This approach allows for a more comprehensive refinement of the electrical model in order to obtain a better fitting of the results. The electrical characteristics of the system, and in particular the total impedance, were measured at the inductive coil antenna (primary of the transformer). The source was modeled electrically by a finite element method, treating the plasma as a conductive load and taking into account the complex plasma conductivity, the value of which was calculated from the electron density and electron temperature measurements carried out previously. The electrical characterization of the inductive excitation source itself versus frequency showed that the source cannot be treated as purely inductive and that the effect of parasitic capacitances must be taken into account in the model. Finally, considerations on the effect of the magnetic core addition on the capacitive component of the coupling are made

  14. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  15. Solving the forward problem in EEG source analysis by spherical and FDM head modeling: a comparative analysis - Biomed 2009

    NARCIS (Netherlands)

    Vatta, F.; Meneghini, F.; Esposito, F.; Mininel, S.; Di Salle, F.

    2009-01-01

    Neural source localization techniques based on electroencephalography (EEG) use scalp potential data to infer the location of underlying neural activity. This procedure entails modeling the sources of EEG activity and modeling the head volume conduction process to link the modeled sources to the

  16. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  17. Model description for calculating the source term of the Angra 1 environmental control system

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Amaral Neto, J.D.; Salles, M.R.

    1988-01-01

    This work presents the model used to evaluate the source term released from the Angra 1 Nuclear Power Plant in case of an accident. An application of the model to a fuel assembly drop accident inside the fuel handling building during reactor refueling is then presented. (author)

  18. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disasters, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model as additional surrogate models. The surrogate model is key because it replaces the simulation model, reducing the huge computational burden of the iterations in the simulation-optimization technique used to solve GCSI problems, especially GCSI problems in aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported, together with an analysis of the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model with respect to the simulation model did not always improve with increasing numbers of training samples; using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work can reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
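
The surrogate idea is straightforward to demonstrate with off-the-shelf regressors. The sketch below trains an SVR (with grid-search parameter optimization, echoing the paper's finding that tuning matters) and a Kriging-style Gaussian process on samples of a toy function standing in for the groundwater simulation model. KELM is not available in scikit-learn, so the Gaussian process serves as the second surrogate here; all data and parameter ranges are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy function standing in for the groundwater simulation model: it maps
# source parameters (e.g. release location/strength) to a system response.
def simulation_model(theta):
    return np.sin(theta[:, 0]) * theta[:, 1] + 0.1 * theta[:, 1] ** 2

rng = np.random.default_rng(4)
theta_train = rng.uniform(0.0, 3.0, size=(80, 2))
y_train = simulation_model(theta_train)

# SVR surrogate with grid-search parameter optimization
svr = GridSearchCV(SVR(kernel="rbf"),
                   {"C": [1, 10, 100], "gamma": [0.1, 1, 10]}, cv=5)
svr.fit(theta_train, y_train)

# Kriging-style Gaussian process surrogate for comparison
krig = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
krig.fit(theta_train, y_train)

theta_test = rng.uniform(0.0, 3.0, size=(200, 2))
y_test = simulation_model(theta_test)
for name, model in [("SVR", svr), ("Kriging", krig)]:
    rmse = np.sqrt(np.mean((model.predict(theta_test) - y_test) ** 2))
    print(f"{name} surrogate RMSE: {rmse:.4f}")
```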

  19. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions and shows that, for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate its uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it depends on the relative abundance of faint sources, such that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  1. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.

  2. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
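
The model-selection step can be illustrated with the usual least-squares form of the AIC. In the sketch below, hypothetical residuals from single- and double-source inversions are compared; the parameter counts and residuals are placeholders, not values from the W-phase implementation.

```python
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares fit with Gaussian errors:
    AIC = 2k + n * ln(RSS / n), additive constants dropped."""
    n = len(residuals)
    return 2 * n_params + n * np.log(np.sum(residuals ** 2) / n)

# Placeholder misfits from single- and double-point-source inversions
rng = np.random.default_rng(5)
resid_single = rng.normal(0.0, 1.0, 500)
resid_double = rng.normal(0.0, 0.8, 500)   # better fit, but more parameters

k_single, k_double = 6, 12                 # e.g. moment-tensor terms per source
aic_1 = aic_least_squares(resid_single, k_single)
aic_2 = aic_least_squares(resid_double, k_double)
print("double source preferred" if aic_2 < aic_1 else "single source preferred")
```

The extra parameters of the double-source model are only accepted when the reduction in misfit is large enough to overcome the 2k penalty, which is exactly the safeguard against overfitting described in the abstract.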

  3. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends, and the statistical relevance of the resulting fraction estimates was rigorously assessed. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
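
A two-end-member version of such a Bayesian Monte Carlo mixing calculation is sketched below: end-member compositions are drawn from their uncertainty distributions, mixtures are predicted for random source fractions, and draws are weighted by a Gaussian likelihood against the measured sample. The δ18O/δD end-member values and uncertainties are invented for illustration and are not the Athabasca data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical end-member means and 1-sigma spreads for (d18O, dD), permil
em = {"snow":    (np.array([-22.0, -165.0]), np.array([1.0, 8.0])),
      "glacier": (np.array([-18.0, -135.0]), np.array([1.0, 8.0]))}
mixture = np.array([-20.4, -153.0])      # measured bulk meltwater sample
sigma_mix = np.array([0.3, 2.0])         # analytical uncertainty

n = 200_000
f = rng.uniform(0.0, 1.0, n)             # fraction contributed by "snow"
# Draw end-member compositions to propagate their variability, which is the
# feature the BMC approach adds over fixed-end-member mixing
snow = rng.normal(em["snow"][0], em["snow"][1], size=(n, 2))
glac = rng.normal(em["glacier"][0], em["glacier"][1], size=(n, 2))
pred = f[:, None] * snow + (1.0 - f[:, None]) * glac

log_like = -0.5 * np.sum(((mixture - pred) / sigma_mix) ** 2, axis=1)
w = np.exp(log_like - log_like.max())
w /= w.sum()
f_mean = np.sum(w * f)
f_sd = np.sqrt(np.sum(w * (f - f_mean) ** 2))
print(f"snow fraction: {f_mean:.2f} +/- {f_sd:.2f}")
```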

  4. The continental source of glyoxal estimated by the synergistic use of spaceborne measurements and inverse modelling

    Directory of Open Access Journals (Sweden)

    A. Richter

    2009-11-01

    Full Text Available Tropospheric glyoxal and formaldehyde columns retrieved from the SCIAMACHY satellite instrument in 2005 are used with the IMAGESv2 global chemistry-transport model and its adjoint in a two-compound inversion scheme designed to estimate the continental source of glyoxal. The formaldehyde observations provide an important constraint on the production of glyoxal from isoprene in the model, since the degradation of isoprene constitutes an important source of both glyoxal and formaldehyde. Current modelling studies largely underestimate the observed glyoxal satellite columns, pointing to the existence of an additional land glyoxal source of biogenic origin. We include an extra glyoxal source in the model and explore its possible distribution and magnitude through two inversion experiments. In the first case, the additional source is represented as a direct glyoxal emission, and in the second, as secondary formation through the oxidation of an unspecified glyoxal precursor. Besides this extra source, the inversion scheme optimizes the primary glyoxal and formaldehyde emissions, as well as their secondary production from other identified non-methane volatile organic precursors of anthropogenic, pyrogenic and biogenic origin.

    In the first inversion experiment, the additional direct source, estimated at 36 Tg/yr, represents 38% of the global continental source, whereas the contribution of isoprene is equally important (30%), the remainder being accounted for by anthropogenic (20%) and pyrogenic fluxes. The inversion succeeds in reducing the underestimation of the glyoxal columns by the model, but it leads to a severe overestimation of glyoxal surface concentrations in comparison with in situ measurements. In the second scenario, the inferred total global continental glyoxal source is estimated at 108 Tg/yr, almost two times higher than the global a priori source. The extra secondary source is the largest contribution to the global glyoxal

  5. Modelling surface energy fluxes over a Dehesa ecosystem using a two-source energy balance model.

    Science.gov (United States)

    Andreu, Ana; Kustas, William. P.; Anderson, Martha C.; Carrara, Arnaud; Patrocinio Gonzalez-Dugo, Maria

    2013-04-01

    The Dehesa is the most widespread agroforestry land-use system in Europe, covering more than 3 million hectares in the Iberian Peninsula and Greece (Grove and Rackham, 2001; Papanastasis, 2004). It is an agro-silvo-pastoral ecosystem consisting of widely spaced oak trees (mostly Quercus ilex L.) combined with crops, pasture and Mediterranean shrubs, and it is recognized as an example of sustainable land use and for its importance in the rural economy (Diaz et al., 1997; Plieninger and Wilbrand, 2001). The ecosystem is influenced by a Mediterranean climate, with recurrent and severe droughts. Over the last decades the Dehesa has faced multiple environmental threats, derived from intensive agricultural use and socio-economic changes, which have caused environmental degradation of the area, namely a reduction in tree density and stocking rates, changes in soil properties and hydrological processes, and an increase in soil erosion (Coelho et al. 2004; Schnabel and Ferreira, 2004; Montoya 1998; Pulido and Díaz, 2005). Understanding the hydrological, atmospheric and physiological processes that affect the functioning of the ecosystem will improve the management and conservation of the Dehesa. One of the key metrics in assessing ecosystem health, particularly in this water-limited environment, is the capability of monitoring evapotranspiration (ET). Making large-area assessments requires the use of remote sensing. Thermal-based energy balance techniques that distinguish soil/substrate and vegetation contributions to the radiative temperature and radiation/turbulent fluxes have proven to be reliable in such semi-arid, sparse-canopy-cover landscapes. In particular, the two-source energy balance (TSEB) model of Norman et al. (1995) and Kustas and Norman (1999) has been shown to be robust for a wide range of partially vegetated landscapes. The TSEB formulation is evaluated at a flux tower site located in central Spain (Majadas del Tietar, Caceres). Its application in this environment is

  6. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN

    2014-01-01

    Based on the pressure-state-response (PSR) framework model and the specific agro-environmental issues present in Chongqing, we build an agricultural non-point source pollution assessment index system covering three major categories: agricultural system pressure, agro-environmental status, and human response. The resulting evaluation index comprises 3 criteria-level indicators and 19 individual indicators. As can be seen from the analysis, pressures and responses tend to change roughly linearly over time, while the state and the composite index fluctuate widely and in a similar manner, mainly due to the elimination of pressures and impacts, which increases the influence attributed to agricultural non-point source pollution.

  7. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    Science.gov (United States)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, whether earthquake or tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history permits estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. The comparison of the signals predicted using both static and kinematic ruptures against the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  8. Neutron activation analysis: Modelling studies to improve the neutron flux of Americium-Beryllium source

    Energy Technology Data Exchange (ETDEWEB)

    Didi, Abdessamad; Dadouch, Ahmed; Tajmouati, Jaouad; Bekkouri, Hassane [Advanced Technology and Integration System, Dept. of Physics, Faculty of Science Dhar Mehraz, University Sidi Mohamed Ben Abdellah, Fez (Morocco); Jai, Otman [Laboratory of Radiation and Nuclear Systems, Dept. of Physics, Faculty of Sciences, Tetouan (Morocco)

    2017-06-15

    Americium-beryllium (Am-Be) is an (α,n) neutron-emitting source used in various research fields such as chemistry, physics, geology, archaeology, medicine, and environmental monitoring, as well as in the forensic sciences. It is a mobile source of neutron activity (20 Ci), yielding a small, water-moderated thermal neutron flux. The aim of this study is to develop a model to increase the thermal neutron flux of a source such as Am-Be. This study achieved multiple advantageous results: primarily, it will help us perform neutron activation analysis; it will also give us the opportunity to produce radio-elements with short half-lives. Am-Be single-source and multisource (5 sources) experiments were performed within an irradiation facility with a paraffin moderator. The resulting models substantially increase the thermal neutron flux compared to the traditional water-moderated configuration.

  9. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    . Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution, investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available......We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field...... for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model...

  10. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    considering reverberation time. However, for the three other parameters evaluated (sound pressure level, clarity index and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity......Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels...

  11. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10⁶ m³/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  12. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE

    OpenAIRE

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-01-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell Gamma Knife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the out...

  13. Modeled Sources, Transport, and Accumulation of Dissolved Solids in Water Resources of the Southwestern United States.

    Science.gov (United States)

    Anning, David W

    2011-10-01

    Information on important source areas for dissolved solids in streams of the southwestern United States, the relative share of deliveries of dissolved solids to streams from natural and human sources, and the potential for salt accumulation in soil or groundwater was developed using a SPAtially Referenced Regressions On Watershed attributes model. Predicted area-normalized reach-catchment delivery rates of dissolved solids to streams ranged from Salton Sea accounting unit.

  14. Quantification of source-term profiles from near-field geochemical models

    International Nuclear Information System (INIS)

    McKinley, I.G.

    1985-01-01

    A geochemical model of the near-field is described which quantitatively treats the processes of engineered barrier degradation, buffering of aqueous chemistry by solid phases, nuclide solubilization and transport through the near-field and release to the far-field. The radionuclide source-terms derived from this model are compared with those from a simpler model used for repository safety analysis. 10 refs., 2 figs., 2 tabs

  15. Certification of model spectrometric alpha sources (MSAS) and problems of the MSAS system improvement

    International Nuclear Information System (INIS)

    Belyatskij, A.F.; Gejdel'man, A.M.; Egorov, Yu.S.; Nedovesov, V.G.; Chechev, V.P.

    1984-01-01

    Results of the certification of industrially produced standard spectrometric alpha sources (SSAS) are presented, and the methods for certification by the main radiation-physical parameters (intrinsic half-width of the α-lines, activity of radionuclides in the source, energies of the emitted α-particles, and relative intensities of the different-energy α-particle groups) are analysed. For improvement of the SSAS system, the following are considered: a set of model measures for α-radiation, and a collection of interconnected data units on the physical, engineering and design characteristics of SSAS, the methods for obtaining and determining them, and the instruments used.

  16. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias

    2016-01-01

    In echoic conditions, sound sources are not perceived as point sources but appear to be expanded. The expansion in the horizontal dimension is referred to as apparent source width (ASW). To elicit this perception, the auditory system has access to fluctuations of binaural cues, the interaural time...... a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model’s performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeakerbased experiments. A robust model prediction of ASW was achieved using a cross...

  17. Rate equation modelling of the optically pumped spin-exchange source

    International Nuclear Information System (INIS)

    Stenger, J.; Rith, K.

    1995-01-01

    Sources for spin polarized hydrogen or deuterium, polarized via spin-exchange of a laser optically pumped alkali metal, can be modelled by rate equations. The rate equations for this type of source, operated either with hydrogen or deuterium, are given explicitly with the intention of providing a useful tool for further source optimization and understanding. Laser optical pumping of alkali metal, spin-exchange collisions of hydrogen or deuterium atoms with each other and with alkali metal atoms are included, as well as depolarization due to flow and wall collisions. (orig.)
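
Rate-equation models of this kind reduce to a small ODE system. The sketch below integrates a deliberately simplified two-variable version, with alkali polarization driven by optical pumping and spin destruction, and hydrogen polarization driven by spin exchange and wall/flow losses. All rate constants are invented for illustration, and the real model tracks more states.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants in s^-1; placeholders, not the paper's values
R_OP = 1.0e5    # optical pumping rate of the alkali polarization
G_SD = 1.0e3    # alkali spin-destruction rate
K_SE = 50.0     # spin-exchange rate experienced by the hydrogen atoms
G_H = 5.0       # hydrogen depolarization by wall collisions and flow

def rates(t, P):
    P_alkali, P_h = P
    dPa = R_OP * (1.0 - P_alkali) - G_SD * P_alkali   # pumping vs. destruction
    dPh = K_SE * (P_alkali - P_h) - G_H * P_h         # spin exchange vs. losses
    return [dPa, dPh]

# LSODA handles the stiffness from the widely separated rate constants
sol = solve_ivp(rates, [0.0, 1.0], [0.0, 0.0], method="LSODA")
print("steady-state (alkali, hydrogen) polarization:", np.round(sol.y[:, -1], 3))
```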

  18. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MIXSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  19. SPARROW models used to understand nutrient sources in the Mississippi/Atchafalaya River Basin

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2013-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. To describe where and from what sources those loads originate, SPAtially Referenced Regression On Watershed attributes (SPARROW) models were constructed for the MARB using geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and calibration sites throughout the MARB. Previous studies found that highest N and P yields were from the north-central part of the MARB (Corn Belt). Based on the MARB SPARROW models, highest N yields were still from the Corn Belt but centered over Iowa and Indiana, and highest P yields were widely distributed throughout the center of the MARB. Similar to that found in other studies, agricultural inputs were found to be the largest N and P sources throughout most of the MARB: farm fertilizers were the largest N source, whereas farm fertilizers, manure, and urban inputs were dominant P sources. The MARB models enable individual N and P sources to be defined at scales ranging from SPARROW catchments (∼50 km2) to the entire area of the MARB. Inputs of P from WWTPs and urban areas were more important than found in most other studies. Information from this study will help to reduce nutrient loading from the MARB by providing managers with a description of where each of the sources of N and P are most important, thus providing a basis for prioritizing management actions and ultimately reducing the extent of Gulf hypoxia.

  20. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing capability of computers, it has become possible to treat a patient more precisely. However, long simulation times are needed to reduce the statistical uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.

  2. Assessment of source-receptor relationships of aerosols: An integrated forward and backward modeling approach

    Science.gov (United States)

    Kulkarni, Sarika

    This dissertation presents a scientific framework that facilitates enhanced understanding of aerosol source-receptor (S/R) relationships and their impact on local, regional and global air quality by employing a complementary suite of modeling methods. The receptor-oriented Positive Matrix Factorization (PMF) technique is combined with the Potential Source Contribution Function (PSCF), a trajectory ensemble model, to characterize sources influencing the aerosols measured at Gosan, Korea during spring 2001. It is found that episodic dust events originating from desert regions in East Asia (EA), which mix with pollution along the transit path, have a significant and pervasive impact on the air quality of Gosan. The intercontinental and hemispheric transport of aerosols is analyzed by a series of emission perturbation simulations with the Sulfur Transport and dEposition Model (STEM), a regional-scale chemical transport model (CTM), evaluated with observations from the 2008 NASA ARCTAS field campaign. This modeling study shows that pollution transport from regions outside North America (NA) contributed ~30% and ~20% of NA surface sulfate and BC concentrations, respectively. This study also identifies aerosols transported from the Europe, NA and EA regions as significant contributors to springtime Arctic sulfate and BC. Trajectory ensemble models are combined with source-region-tagged tracer model output to identify the source regions and possible instances of quasi-Lagrangian sampled air masses during the 2006 NASA INTEX-B field campaign. The impact of specific emission sectors from Asia during the INTEX-B period is studied with the STEM model, identifying the residential sector as a potential target for emission reduction to combat global warming. The output from the STEM model, constrained with satellite-derived aerosol optical depth and ground-based measurements of single scattering albedo via an optimal interpolation assimilation scheme, is combined with the PMF technique to

  3. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal is to aid in the development of safety and control features. The model has also been found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted, focusing on examination of the instrumentation required during and after accidents to ensure adequate monitoring during transient conditions.

  4. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    Science.gov (United States)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied to image reconstruction in a Laminar Optical Tomography system.
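
    The essence of the VS approach is to replace the collimated beam with a few isotropic point sources on the incidence axis, each contributing the standard diffusion-approximation point-source fluence. A sketch of that superposition follows; the source depths and weights are illustrative assumptions, not the paper's fitted 2VS-DA values.

```python
import numpy as np

mua, musp = 0.01, 1.0                 # absorption / reduced scattering, 1/mm
D = 1.0 / (3.0 * (mua + musp))        # diffusion coefficient
mueff = np.sqrt(mua / D)              # effective attenuation coefficient

def point_source_fluence(r, strength=1.0):
    """Isotropic point source in an infinite medium (standard DA Green's fn)."""
    return strength * np.exp(-mueff * r) / (4.0 * np.pi * D * r)

# Two virtual sources on the incidence axis (depths/weights are assumptions;
# in the 2VS-DA model they come from fitting near-field reflectance).
vs_depth = np.array([0.5 / musp, 2.0 / musp])   # mm
vs_weight = np.array([0.7, 0.3])

def fluence(x, z):
    # Distance from each virtual source to the observation points.
    r = np.sqrt(x**2 + (z - vs_depth[:, None])**2)
    return (vs_weight[:, None] * point_source_fluence(r)).sum(axis=0)

x = np.linspace(0.1, 5.0, 50)         # lateral positions, mm
print(fluence(x, z=0.0)[:3])
```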

  5. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing technique suitable for producing high-precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question arises, namely how to represent the laser source appropriately. Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, first a semi-analytical thermal modelling approach is described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history predicted by the thermal model. The present work provides guidelines on appropriate representation of the laser source in the thermal modelling of the SLM process.
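
    For the point-source case, the classical Rosenthal steady-state solution for a point heat source moving over a semi-infinite body provides a quick reference temperature field. A sketch with illustrative SLM-scale parameters (assumed, not taken from the paper):

```python
import numpy as np

Q = 150.0        # absorbed laser power, W (assumption)
v = 0.8          # scan speed, m/s (assumption)
k = 20.0         # thermal conductivity, W/(m K)
alpha = 5e-6     # thermal diffusivity, m^2/s
T0 = 293.0       # ambient temperature, K

def rosenthal(xi, y, z):
    """Steady-state temperature around a moving point source; xi = x - v*t
    is the along-track coordinate in the frame moving with the laser."""
    R = np.sqrt(xi**2 + y**2 + z**2)
    return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

# Temperature on the surface, 1 mm behind the source on the scan line.
print(rosenthal(xi=-1e-3, y=0.0, z=0.0))
```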

  6. A photovoltaic source I/U model suitable for hardware in the loop application

    Directory of Open Access Journals (Sweden)

    Stala Robert

    2017-12-01

    This paper presents a novel, low-complexity method of simulating PV source characteristics suitable for real-time modeling and hardware implementation. Applying a suitable model of the PV source, as well as models of all the PV system components, in real-time hardware gives a safe, fast and low-cost method of testing PV systems. The paper demonstrates the concept of the PV array model and the hardware implementation in FPGAs of a system which combines two PV arrays. The obtained results confirm that the proposed model is of low complexity and is suitable for hardware-in-the-loop (HIL) tests of complex PV system control, with various arrays operating under different conditions.
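
    The record does not give the paper's specific low-complexity I/U method; as a reference for the kind of characteristic such a model reproduces, the widely used single-diode PV equation is sketched below (ideal form, with series and shunt resistances omitted and all parameter values assumed):

```python
import numpy as np

# Single-diode PV model, ideal form: I = Iph - I0*(exp(V/(n*Vt)) - 1).
# All parameter values below are illustrative, not from the paper.
Iph = 8.0        # photocurrent, A (scales with irradiance)
I0 = 1e-9        # diode saturation current, A
n = 1.3          # diode ideality factor
Vt = 0.02585     # thermal voltage at ~300 K, V
cells = 60       # series-connected cells in the module

def pv_current(v_module):
    v_cell = v_module / cells
    return Iph - I0 * np.expm1(v_cell / (n * Vt))

v = np.linspace(0.0, 50.0, 200)
i = np.clip(pv_current(v), 0.0, None)   # clamp below zero past open circuit
p = v * i
print(f"open-circuit ~{v[i > 0][-1]:.1f} V, max module power ~{p.max():.0f} W")
```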

  7. Modeling of Acoustic Field for a Parametric Focusing Source Using the Spheroidal Beam Equation

    Directory of Open Access Journals (Sweden)

    Yu Lili

    2015-09-01

    A theoretical model of the acoustic field for a parametric focusing source on a concave spherical surface is proposed. In this model, the source boundary conditions of the Spheroidal Beam Equation (SBE) for difference-frequency wave excitation were studied. Propagation curves and beam patterns for the difference-frequency component of the acoustic field are compared with those obtained for the Khokhlov-Zabolotskaya-Kuznetsov (KZK) model. The results demonstrate that the focused parametric SBE model remains valid for large aperture angles in strongly focused acoustic fields. It is also found that the model exhibits high directivity and good focusing ability as the downshift ratio decreases and the half-aperture angle increases.

  8. Source rock contributions to the Lower Cretaceous heavy oil accumulations in Alberta: a basin modeling study

    Science.gov (United States)

    Berbesi, Luiyin Alejandro; di Primio, Rolando; Anka, Zahie; Horsfield, Brian; Higley, Debra K.

    2012-01-01

    The origin of the immense oil sand deposits in Lower Cretaceous reservoirs of the Western Canada sedimentary basin is still a matter of debate, specifically with respect to the original in-place volumes and contributing source rocks. In this study, the contributions from the main source rocks were addressed using a three-dimensional petroleum system model calibrated to well data. A sensitivity analysis of source rock definition was performed in the case of the two main contributors, which are the Lower Jurassic Gordondale Member of the Fernie Group and the Upper Devonian–Lower Mississippian Exshaw Formation. This sensitivity analysis included variations of assigned total organic carbon and hydrogen index for both source intervals, and in the case of the Exshaw Formation, variations of thickness in areas beneath the Rocky Mountains were also considered. All of the modeled source rocks reached the early or main oil generation stages by 60 Ma, before the onset of the Laramide orogeny. Reconstructed oil accumulations were initially modest because of limited trapping efficiency. This was improved by defining lateral stratigraphic seals within the carrier system. An additional sealing effect by biodegraded oil may have hindered the migration of petroleum in the northern areas, but not to the east of Athabasca. In the latter case, the main trapping controls are dominantly stratigraphic and structural. Our model, based on available data, identifies the Gordondale source rock as the contributor of more than 54% of the oil in the Athabasca and Peace River accumulations, followed by minor amounts from Exshaw (15%) and other Devonian to Lower Jurassic source rocks. The proposed strong contribution of petroleum from the Exshaw Formation source rock to the Athabasca oil sands is only reproduced by assuming 25 m (82 ft) of mature Exshaw in the kitchen areas, with original total organic carbon of 9% or more.

  9. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WGCEP). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions.

  10. A numerical model of the mirror electron cyclotron resonance MECR source

    International Nuclear Information System (INIS)

    Hellblom, G.

    1986-03-01

    Results from numerical modeling of a new type of ion source are presented. The plasma in this source is produced by electron cyclotron resonance in a strong conversion magnetic field. Experiments have shown that a well-defined plasma column, extended along the magnetic field (z-axis), can be produced. The electron temperature and the densities of the various plasma particles have been found to have a strong z-position dependence. With the numerical model, a simulation of the evolution of the composition of the plasma as a function of z is made. A qualitative agreement with experimental data can be obtained for certain parameter regimes. (author)

  11. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modeling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  12. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  13. X-ray spectral models of Galactic bulge sources - the emission-line factor

    International Nuclear Information System (INIS)

    Vrtilek, S.D.; Swank, J.H.; Kallman, T.R.

    1988-01-01

    Current difficulties in finding unique and physically meaningful models for the X-ray spectra of Galactic bulge sources are exacerbated by the presence of strong, variable emission and absorption features that are not resolved by the instruments observing them. Nine Einstein solid state spectrometer (SSS) observations of five Galactic bulge sources are presented for which relatively high resolution objective grating spectrometer (OGS) data have been published. It is found that in every case the goodness of fit of simple models to SSS data is greatly improved by adding line features identified in the OGS that cannot be resolved by the SSS but nevertheless strongly influence the spectra observed by SSS. 32 references

  14. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatics, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing point-source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having a source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point-source-based DPSM formulation. To assess the performance of the new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.
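
    At its core, the DPSM construction is a superposition of point-source Green's functions with strengths chosen to satisfy boundary conditions on the transducer face. A minimal acoustic sketch follows; uniform source strengths are assumed instead of a solved equivalent source density, so this reduces to a Rayleigh-type summation:

```python
import numpy as np

k = 2 * np.pi * 1e6 / 1500.0          # wavenumber: 1 MHz in water (~1500 m/s)
a = 5e-3                              # transducer radius, m

# Distribute point sources over the circular aperture (a crude grid;
# DPSM places them just behind the interface and solves for strengths).
xs = np.linspace(-a, a, 41)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= a**2
sx, sy = X[mask], Y[mask]

def pressure(x, y, z):
    """Superpose free-space Green's functions exp(ikr)/(4*pi*r)."""
    r = np.sqrt((x - sx)**2 + (y - sy)**2 + z**2)
    return np.sum(np.exp(1j * k * r) / (4 * np.pi * r))

# On-axis field: oscillatory near field, 1/z decay beyond roughly a^2/lambda.
for z in (5e-3, 17e-3, 50e-3):
    print(z, abs(pressure(0.0, 0.0, z)))
```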

  15. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Y. Chen

    2001-12-19

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporation into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach to source-term evaluation not only eliminates the over-conservatism of the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  16. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    International Nuclear Information System (INIS)

    Y. Chen

    2001-01-01

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporation into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach to source-term evaluation not only eliminates the over-conservatism of the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  17. A modified receptor model for source apportionment of heavy metal pollution in soil.

    Science.gov (United States)

    Huang, Ying; Deng, Meihua; Wu, Shaofu; Japenga, Jan; Li, Tingqiang; Yang, Xiaoe; He, Zhenli

    2018-07-15

    Source apportionment is a crucial step toward reducing heavy metal pollution in soil. Existing methods are generally based on receptor models. However, overestimation or underestimation occurs when they are applied to heavy metal source apportionment in soil. Therefore, a modified model (PCA-MLRD) was developed, based on principal component analysis (PCA) and multiple linear regression with distance (MLRD). The model was applied to a case study conducted in a peri-urban area in southeast China where soils were contaminated by arsenic (As), cadmium (Cd), mercury (Hg) and lead (Pb). Compared with existing models, PCA-MLRD is able to identify specific sources and quantify the extent of influence of each emission. The zinc (Zn)-Pb mine was identified as the most important anthropogenic emission, affecting approximately half of the area for Pb and As accumulation and approximately one third for Cd. Overall, the extent of influence of the anthropogenic emissions decreased in the order mine (3 km) > dyeing mill (2 km) ≈ industrial hub (2 km) > fluorescent factory (1.5 km) > road (0.5 km). Although the algorithm still needs to be improved, the PCA-MLRD model has the potential to become a useful tool for heavy metal source apportionment in soil.
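
    The record does not spell out the PCA-MLRD algorithm; one plausible minimal reading is PCA on the metal concentration matrix followed by a multiple linear regression that includes inverse-distance terms to candidate sources. A sketch under that assumption, with entirely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 200 soil samples x 4 metals (As, Cd, Hg, Pb), plus the
# distance of each sample to two candidate sources (mine, road), in km.
X = rng.lognormal(0.0, 0.4, size=(200, 4))
d_mine = rng.uniform(0.1, 5.0, 200)
d_road = rng.uniform(0.1, 5.0, 200)

# PCA via SVD of the standardized concentration matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S                        # sample scores on the components

# Regress the first component on inverse distance to each candidate source;
# a clearly positive coefficient points to that emission's influence.
A = np.column_stack([np.ones(200), 1.0 / d_mine, 1.0 / d_road])
coef, *_ = np.linalg.lstsq(A, scores[:, 0], rcond=None)
print("intercept, mine term, road term:", coef)
```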

  18. Cell sources for in vitro human liver cell culture models

    Science.gov (United States)

    Freyer, Nora; Damm, Georg; Seehofer, Daniel; Knöspel, Fanny

    2016-01-01

    In vitro liver cell culture models are gaining increasing importance in pharmacological and toxicological research. The source of cells used is critical for the relevance and the predictive value of such models. Primary human hepatocytes (PHH) are currently considered to be the gold standard for hepatic in vitro culture models, since they directly reflect the specific metabolism and functionality of the human liver; however, the scarcity and difficult logistics of PHH have driven researchers to explore alternative cell sources, including liver cell lines and pluripotent stem cells. Liver cell lines generated from hepatomas or by genetic manipulation are widely used due to their good availability, but they are generally altered in certain metabolic functions. For the past few years, adult and pluripotent stem cells have been attracting increasing attention, due to their ability to proliferate and to differentiate into hepatocyte-like cells in vitro. However, controlling the differentiation of these cells is still a challenge. This review gives an overview of the major human cell sources under investigation for in vitro liver cell culture models, including primary human liver cells, liver cell lines, and stem cells. The promises and challenges of different cell types are discussed with a focus on the complex 2D and 3D culture approaches under investigation for improving liver cell functionality in vitro. Finally, the specific application options of individual cell sources in pharmacological research or disease modeling are described. PMID:27385595

  19. Revealing transboundary and local air pollutant sources affecting Metro Manila through receptor modeling studies

    International Nuclear Information System (INIS)

    Pabroa, Preciosa Corazon B.; Bautista VII, Angel T.; Santos, Flora L.; Racho, Joseph Michael D.

    2011-01-01

    Ambient fine particulate matter (PM2.5) levels at the Metro Manila air sampling stations of the Philippine Nuclear Research Institute were found to be above the WHO guideline value of 10 μg/m3, indicating, in general, very poor air quality in the area. The elemental components of the fine particulate matter were obtained using energy-dispersive X-ray fluorescence spectrometry. Positive matrix factorization (PMF), a receptor modelling tool, was used to identify and apportion air pollution sources. Locations of probable transboundary air pollutant sources were evaluated using HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory Model), while locations of probable local air pollutant sources were determined using the conditional probability function (CPF). Air pollutant sources can be either natural or anthropogenic. This study has shown natural air pollutant sources, such as volcanic eruptions from Bulusan volcano in 2006 and from Anatahan volcano in 2005, to have impacted the region. Fine soil particles were shown to have originated from China's Mu Us Desert some time in 2004. Smoke in the fine fraction in 2006 showed indications of coming from forest fires in Sumatra and Borneo. Fine particulate Pb in Valenzuela was shown to be coming from the surrounding area. Many more significant air pollution impacts can be evaluated by identifying probable air pollutant sources using elemental fingerprints and locating these sources with HYSPLIT and CPF. (author)
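
    The CPF used here to locate local sources has a simple form: the fraction of samples arriving from each wind-direction sector whose concentrations exceed a threshold, CPF = m_sector / n_sector. A sketch with synthetic wind and concentration data:

```python
import numpy as np

rng = np.random.default_rng(7)
wind_dir = rng.uniform(0, 360, 2000)            # degrees, one per sample
pm25 = rng.lognormal(np.log(15), 0.4, 2000)     # hypothetical ug/m3 values

threshold = np.percentile(pm25, 75)             # common choice: upper quartile
sector = 22.5                                   # sector width, degrees
edges = np.arange(0, 360 + sector, sector)

n_all, _ = np.histogram(wind_dir, bins=edges)                   # all samples
n_high, _ = np.histogram(wind_dir[pm25 > threshold], bins=edges)  # high ones

with np.errstate(invalid="ignore", divide="ignore"):
    cpf = np.where(n_all > 0, n_high / n_all, 0.0)  # CPF per sector = m/n
print(f"highest CPF sector starts at {edges[np.argmax(cpf)]:.1f} deg")
```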

  20. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  1. Source-term development for a contaminant plume for use by multimedia risk assessment models

    International Nuclear Information System (INIS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    1999-01-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool

  2. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P.

    2012-09-01

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant, in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radionuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP be connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)
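
    A BBN of this kind maps plant observations to probabilities over pre-calculated source terms. The toy sketch below hand-rolls the underlying Bayes update for a two-state network; the states, observables, probabilities and source terms are all invented for illustration.

```python
# Toy Bayesian update in the spirit of a source-term BBN: two hidden accident
# states, one observable, and a pre-calculated source term per state.
priors = {"core_intact": 0.9, "core_damage": 0.1}

# P(containment pressure high | state) -- invented conditional probabilities.
likelihood_high_pressure = {"core_intact": 0.05, "core_damage": 0.80}

def posterior(observed_high_pressure: bool) -> dict:
    post = {}
    for state, p in priors.items():
        l = likelihood_high_pressure[state]
        post[state] = p * (l if observed_high_pressure else 1.0 - l)
    z = sum(post.values())                 # normalize over states
    return {s: v / z for s, v in post.items()}

# Each state maps to a pre-calculated source term (release fraction, hours).
source_terms = {"core_intact": (1e-6, 24.0), "core_damage": (1e-2, 6.0)}

post = posterior(observed_high_pressure=True)
for state, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"{state}: P={p:.2f}, source term={source_terms[state]}")
```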

  3. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P. [Scandpower AB, Sundbyberg (Sweden)

    2012-09-15

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant, in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radionuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP be connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)

  4. Assessing the impact of different sources of topographic data on 1-D hydraulic modelling of floods

    Science.gov (United States)

    Ali, A. Md; Solomatine, D. P.; Di Baldassarre, G.

    2015-01-01

    Topographic data, such as digital elevation models (DEMs), are essential input in flood inundation modelling. DEMs can be derived from several sources, either through remote sensing techniques (spaceborne or airborne imagery) or from traditional methods (ground survey). The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), the Shuttle Radar Topography Mission (SRTM), the light detection and ranging (lidar) technique, and topographic contour maps are some of the most commonly used sources of data for DEMs. These DEMs are characterized by different precision and accuracy. On the one hand, the spatial resolution of low-cost DEMs from satellite imagery, such as ASTER and SRTM, is rather coarse (around 30 to 90 m). On the other hand, the lidar technique is able to produce high-resolution DEMs (at around 1 m), but at a much higher cost. Lastly, contour mapping based on ground survey is time consuming, particularly for larger scales, and may not be possible for some remote areas. The use of these different sources of DEM obviously affects the results of flood inundation models. This paper presents and compares a number of 1-D hydraulic models developed using HEC-RAS as model code and the aforementioned sources of DEM as geometric input. To test model selection, the outcomes of the 1-D models were also compared, in terms of flood water levels, to the results of 2-D models (LISFLOOD-FP). The study was carried out on a reach of the Johor River, in Malaysia. The effect of the different sources of DEMs (and different resolutions) was investigated by considering the performance of the hydraulic models in simulating flood water levels as well as inundation maps. The outcomes of our study show that the use of different DEMs has serious implications for the results of hydraulic models. The outcomes also indicate that the loss of model accuracy due to re-sampling the highest resolution DEM (i.e. lidar 1 m) to lower resolution is much less than the loss of model accuracy due

  5. A modeling study of saltwater intrusion in the Andarax delta area using multiple data sources

    DEFF Research Database (Denmark)

    Antonsson, Arni Valur; Engesgaard, Peter Knudegaard; Jorreto, Sara

    context. The validity of a conceptual model is determined by different factors, where both data quantity and quality is of crucial importance. Often, when dealing with saltwater intrusion, data is limited. Therefore, using different sources (and types) of data can be beneficial and increase......In groundwater model development, construction of the conceptual model is one of the (initial and) critical aspects that determines the model reliability and applicability in terms of e.g. system (hydrogeological) understanding, groundwater quality predictions, and general use in water resources...

  6. Unified Impedance Model of Grid-Connected Voltage-Source Converters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Harnefors, Lennart; Blaabjerg, Frede

    2018-01-01

    This paper proposes a unified impedance model of grid-connected voltage-source converters for analyzing dynamic influences of the Phase-Locked Loop (PLL) and current control. The mathematical relations between the impedance models in the different domains are first explicitly revealed by means...... of complex transfer functions and complex space vectors. A stationary (αβ-) frame impedance model is then proposed, which not only predicts the stability impact of the PLL, but reveals also its frequency coupling effect explicitly. Furthermore, the impedance shaping effect of the PLL on the current control...... results and theoretical analysis confirm the effectiveness of the stationary-frame impedance model....

  7. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)

  8. [Comparison of tonometry with the Keeler air puff non-contact tonometer "Pulsair" and the Goldmann applanation tonometer].

    Science.gov (United States)

    Yücel, A A; Stürmer, J; Gloor, B

    1990-10-01

    Intraocular pressure (IOP) readings were performed with the Keeler Air-Puff Non-Contact Tonometer "Pulsair" in 126 patients before (NCT1) and after (NCT2) applanation tonometry with the Goldmann device (GAT). For the whole population of 126 patients, in each of whom only one eye was selected, there was a significant difference in the mean IOP measurements; however, the difference between the two measurement methods was only slightly significant when the NCT was applied before the GAT, and highly significant vice versa. The variation of the NCT measurements was also significantly larger than that of the GAT, while the before- and after-GAT measurements had equal variances. If only the measurements under 18 mmHg mean GAT are taken into account (n = 101), the difference between GAT and NCT1 was not significant (p = 0.437), as opposed to the GAT measurements above 18 mmHg, where a highly significant difference between the means was found (p = 0.0033). In most cases, the IOP readings were underestimated using NCT. The Non-Contact Tonometer "Pulsair" could be used for IOP readings in patients with increased risk of infection, as well as in those with known allergic reactions to topical anesthetic agents, with poor or absent fixation ability, with corneal edema, and postoperatively after anterior-segment surgery. The possibility of IOP measurement in a reclined position is a true advantage of the Non-Contact Tonometer presented here. A measuring strategy for the above-mentioned applications is presented.

  9. A Monte Carlo multiple source model applied to radiosurgery narrow photon beams

    International Nuclear Information System (INIS)

    Chaves, A.; Lopes, M.C.; Alves, C.C.; Oliveira, C.; Peralta, L.; Rodrigues, P.; Trindade, A.

    2004-01-01

    Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source Phase Space Data (PSD) that enable a dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD while also reducing time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for virtual sources representing the accelerator head components, whereas for the virtual sources representing the additional collimators they were recorded into histograms. All these histograms were included in the MC code DPM and, using a sampling procedure that reconstructed the PSDs, dose distributions were calculated in a water phantom divided into 20,000 voxels of 1x1x5 mm3. The model accurately calculates dose distributions in the water phantom for all the additional collimators; for depth dose curves, associated errors at 2σ were lower than 2.5% down to a depth of 202.5 mm for all the additional collimators, and for profiles at various depths, deviations between measured
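
    The reconstruction step described above amounts to picking a virtual source per particle according to its relative weight and then sampling the stored histograms for that source. A schematic sketch with two invented sources and one energy histogram each (the real model uses eight sources plus spatial and directional histograms):

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-virtual-source relative weights and stored energy histograms
# (bin edges in MeV, relative counts) -- all values invented.
sources = {
    "target":     {"weight": 0.85, "edges": np.linspace(0.1, 6.0, 60)},
    "collimator": {"weight": 0.15, "edges": np.linspace(0.1, 2.0, 20)},
}
for s in sources.values():
    s["counts"] = rng.random(len(s["edges"]) - 1)   # stand-in for real spectra

def draw(n):
    """Pick a virtual source per particle, then sample its energy histogram."""
    names = list(sources)
    w = np.array([sources[k]["weight"] for k in names])
    picks = rng.choice(len(names), size=n, p=w / w.sum())
    e = np.empty(n)
    for i, name in enumerate(names):
        sel = picks == i
        src = sources[name]
        p = src["counts"] / src["counts"].sum()
        b = rng.choice(len(p), size=sel.sum(), p=p)
        e[sel] = rng.uniform(src["edges"][b], src["edges"][b + 1])
    return e

print(draw(100_000).mean())
```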

  10. A linear ion optics model for extraction from a plasma ion source

    International Nuclear Information System (INIS)

    Dietrich, J.

    1987-01-01

    A linear ion optics model for ion extraction from a plasma ion source is presented, based on the paraxial equations which account for lens effects, space charge and finite source ion temperature. This model is applied to three- and four-electrode extraction systems with circular apertures. The results are compared with experimental data and numerical calculations in the literature. It is shown that the improved calculations of space charge effects and lens effects allow better agreement to be obtained than in earlier linear optics models. A principal result is that the model presented here describes the dependence of the optimum perveance on the aspect ratio in a manner similar to the nonlinear optics theory. (orig.)

  11. The SSI TOOLBOX Source Term Model SOSIM - Screening for important radionuclides and parameter sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Avila Moreno, R.; Barrdahl, R.; Haegg, C.

    1995-05-01

    The main objective of the present study was to carry out a screening and a sensitivity analysis of the SSI TOOLBOX source term model SOSIM. This model is part of the SSI TOOLBOX for radiological impact assessment of the Swedish disposal concept for high-level waste, KBS-3. The outputs of interest for this purpose were: the total released fraction, the time of total release, the time and value of the maximum release rate, and the dose rates after direct releases to the biosphere. The source term equations were derived, and simple equations and methods were proposed for their calculation. A literature survey was performed in order to determine a characteristic variation range and a nominal value for each model parameter. In order to reduce the model uncertainties, the authors recommend a change in the initial boundary condition for the solution of the diffusion equation for highly soluble nuclides. 13 refs.
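
    Screening of this kind is commonly done one parameter at a time: evaluate the model at each parameter's range endpoints with all others held at nominal values, and rank parameters by the resulting output swing. A generic sketch; the release-fraction function below is a stand-in, not the SOSIM equations.

```python
import numpy as np

# Stand-in output model: fraction released by time t for a diffusing nuclide.
def released_fraction(D, L, t=1000.0):
    return min(1.0, np.sqrt(4.0 * D * t / (np.pi * L**2)))

# Parameter (nominal, low, high) values -- illustrative only.
params = {
    "D": (1e-10, 1e-11, 1e-9),   # effective diffusivity, m^2/yr
    "L": (1.0,   0.5,   2.0),    # diffusion length, m
}

nominal = {k: v[0] for k, v in params.items()}
for name, (nom, lo, hi) in params.items():
    out = []
    for value in (lo, hi):
        # Vary one parameter across its range, keep the others at nominal.
        args = dict(nominal, **{name: value})
        out.append(released_fraction(**args))
    print(f"{name}: output swing {abs(out[1] - out[0]):.3e}")
```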

  12. Consistent modelling of wind turbine noise propagation from source to receiver

    DEFF Research Database (Denmark)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine...... propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine....... and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound...

  13. Beam-based model of broad-band impedance of the Diamond Light Source

    Science.gov (United States)

    Smaluk, Victor; Martin, Ian; Fielder, Richard; Bartolini, Riccardo

    2015-06-01

    In an electron storage ring, the interaction between a single-bunch beam and the vacuum chamber impedance affects the beam parameters, which can be measured rather precisely, so beam-based numerical models of longitudinal and transverse impedances can be developed. At the Diamond Light Source (DLS), a set of measured data has been used to obtain the model parameters, including the current-dependent shift of betatron tunes and synchronous phase, chromatic damping rates, and bunch lengthening. A MATLAB code for multiparticle tracking has been developed. The tracking results and analytical estimations are quite consistent with the measured data. Since Diamond has the shortest natural bunch length among all light sources in standard operation, these studies of collective effects with short bunches are relevant to many facilities, including the next generation of light sources.

  14. Beam-based model of broad-band impedance of the Diamond Light Source

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2015-06-01

    In an electron storage ring, the interaction between a single-bunch beam and a vacuum chamber impedance affects the beam parameters, which can be measured rath