WorldWideScience

Sample records for gaussian plume model

  1. State of the art atmospheric dispersion modelling. Should the Gaussian plume model still be used?

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)

    2016-11-15

    For regulatory purposes with respect to licensing and supervision of airborne releases from nuclear installations, the Gaussian plume model is still in use in Germany. However, for complex situations the Gaussian plume model is to be replaced by a Lagrangian particle model. The new EU basic safety standards for protection against the dangers arising from exposure to ionising radiation (EU BSS) [1] now call for a realistic assessment of doses to members of the public from authorised practices. This call for a realistic assessment raises the question of whether dispersion modelling with the Gaussian plume model remains an adequate approach or whether the use of more complex models is mandatory.

  2. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model from measurements at specific monitoring locations by linear regression, although the model is intrinsically non-linear. With these estimated parameters and the Gaussian model, the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the deposition estimates, deposition data have been generated with the Gaussian model, including a measurement error, by Monte Carlo simulation; this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The technique has also been applied to the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations more than 10 km away from the postulated accident site are used. These figures are based upon 3 measurements per location and dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor of 1.5 worse. The present type of analysis makes a cost-benefit approach to the
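    The log-linear estimation step described above can be sketched in a few lines. Taking logarithms of the Gaussian crosswind profile C(y) = C0·exp(−y²/2σ_y²) makes the model linear in (1, y²), so ordinary least squares recovers C0 and σ_y even though the model itself is non-linear. All numbers below are hypothetical, not taken from the report:

```python
import numpy as np

# Crosswind Gaussian profile at a fixed downwind distance:
#   C(y)    = C0 * exp(-y^2 / (2 sigma_y^2))
#   ln C(y) = ln C0 - y^2 / (2 sigma_y^2)   -- linear in (1, y^2)
rng = np.random.default_rng(0)
C0_true, sigma_y_true = 5.0, 300.0            # hypothetical plume parameters
y = np.linspace(-800.0, 800.0, 33)            # monitoring locations [m]
C_obs = (C0_true * np.exp(-y**2 / (2 * sigma_y_true**2))
         * rng.lognormal(0.0, 0.05, y.size))  # multiplicative measurement error

# Ordinary least squares on the log-linear form
A = np.column_stack([np.ones_like(y), y**2])
coef, *_ = np.linalg.lstsq(A, np.log(C_obs), rcond=None)
C0_est = np.exp(coef[0])
sigma_y_est = np.sqrt(-1.0 / (2.0 * coef[1]))
print(C0_est, sigma_y_est)
```

Repeating this fit over Monte Carlo replications of the measurement error, as the report does, gives the spread of the parameter estimates.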

  3. A comparison of the Gaussian Plume models of Pasquill and Smith

    International Nuclear Information System (INIS)

    Barker, C.D.

    1978-03-01

    The Gaussian Plume models of Pasquill and Smith are compared over the full range of atmospheric stability for both short and continuous releases of material. For low level releases the two models compare well (to within a factor of approximately 2) except for very unstable conditions. The agreement between the two models for high level sources is not so good. It is concluded that the two Gaussian models are cheap and simple to use, but may require experimental verification in specific applications. (author)

  4. User's manual for DWNWND: an interactive Gaussian plume atmospheric transport model with eight dispersion parameter options

    International Nuclear Information System (INIS)

    Fields, D.E.; Miller, C.W.

    1980-05-01

    The most commonly used approach for estimating the atmospheric concentration and deposition of material downwind from its point of release is the Gaussian plume atmospheric dispersion model. Two of the critical parameters in this model are σ_y and σ_z, the horizontal and vertical dispersion parameters, respectively. A number of different sets of values for σ_y and σ_z have been determined empirically for different release heights and meteorological and terrain conditions. The computer code DWNWND, described in this report, is an interactive implementation of the Gaussian plume model. This code allows the user to specify any one of eight different sets of the empirically determined dispersion parameters. Using the selected dispersion parameters, ground-level normalized exposure estimates are made at any specified downwind distance. Computed values may be corrected for plume depletion due to deposition and for plume settling due to gravitational fall. With this interactive code, the user chooses values for ten parameters which define the source, the dispersion and deposition process, and the sampling point. DWNWND is written in FORTRAN for execution on a PDP-10 computer, requiring less than one second of central processor unit time for each simulation.
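    The ground-level normalized exposure that such a code reports can be illustrated with a minimal sketch of the reflected Gaussian plume formula. The power-law dispersion fits below are illustrative stand-ins, not any of DWNWND's eight empirical sets:

```python
import numpy as np

def chi_over_Q(y, u, H, sigma_y, sigma_z):
    """Ground-level (z = 0) normalized exposure [s/m^3] for a continuous
    point release at effective height H [m], with total ground reflection."""
    return (np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-H**2 / (2 * sigma_z**2))
            / (np.pi * u * sigma_y * sigma_z))

# Illustrative dispersion-parameter fits at x = 1000 m downwind
# (hypothetical coefficients, standing in for an empirical set):
x = 1000.0
sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)   # [m]
sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)   # [m]
print(chi_over_Q(0.0, 4.0, 50.0, sigma_y, sigma_z))
```

Multiplying χ/Q by the release rate Q gives the concentration; depletion and settling corrections would multiply this value further.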

  5. Gaussian model for emission rate measurement of heated plumes using hyperspectral data

    Science.gov (United States)

    Grauer, Samuel J.; Conrad, Bradley M.; Miguel, Rodrigo B.; Daun, Kyle J.

    2018-02-01

    This paper presents a novel model for measuring the emission rate of a heated gas plume using hyperspectral data from an FTIR imaging spectrometer. The radiative transfer equation (RTE) is used to relate the spectral intensity of a pixel to presumed Gaussian distributions of volume fraction and temperature within the plume, along a line-of-sight that corresponds to the pixel, whereas previous techniques exclusively presume uniform distributions for these parameters. Estimates of volume fraction and temperature are converted to a column density by integrating the local molecular density along each path. Image correlation velocimetry is then employed on raw spectral intensity images to estimate the volume-weighted normal velocity at each pixel. Finally, integrating the product of velocity and column density along a control surface yields an estimate of the instantaneous emission rate. For validation, emission rate estimates were derived from synthetic hyperspectral images of a heated methane plume, generated using data from a large-eddy simulation. Calculating the RTE with Gaussian distributions of volume fraction and temperature, instead of uniform distributions, improved the accuracy of column density measurement by 14%. Moreover, the mean methane emission rate measured using our approach was within 4% of the ground truth. These results support the use of Gaussian distributions of thermodynamic properties in calculation of the RTE for optical gas diagnostics.
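    The column-density step can be illustrated numerically: integrate a presumed Gaussian volume-fraction profile along one line of sight and check the result against the closed form ∫f ds = f_peak·σ·√(2π). The profile values below are hypothetical:

```python
import numpy as np

# Presumed Gaussian volume-fraction profile along one line of sight
# (peak value, centre and width are hypothetical):
f_peak, s0, sigma = 2e-4, 0.0, 0.5      # [-], [m], [m]
s = np.linspace(-5.0, 5.0, 2001)        # path coordinate [m]
f = f_peak * np.exp(-(s - s0)**2 / (2 * sigma**2))

# Trapezoidal column integral vs. the closed form of the Gaussian integral
col = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))
col_exact = f_peak * sigma * np.sqrt(2.0 * np.pi)
print(col, col_exact)
```

Converting this geometric column to a molecular column density additionally weights the integrand by the local molecular number density, as the paper describes.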

  6. A modified Gaussian model for the thermal plume from a ground-based heat source in a cross-wind

    International Nuclear Information System (INIS)

    Selander, W.N.; Barry, P.J.; Robertson, E.

    1990-06-01

    An array of propane burners operating at ground level in a cross-wind was used as a heat source to establish a blown-over thermal plume. A three-dimensional array of thermocouples was used to continuously measure the plume temperature downwind from the source. The resulting data were used to correlate the parameters of a modified Gaussian model for plume rise and dispersion with source strength, wind speed, and atmospheric dispersion parameters

  7. A comparison of the Gaussian Plume Diffusion Model with experimental data from Tilbury and Northfleet

    International Nuclear Information System (INIS)

    Barker, C.D.

    1979-07-01

    The Gaussian Plume Diffusion Model, using Smith's scheme for σ_z and various models for σ_y, is compared with measured values of the location and strength of maximum ground-level concentration taken during the Tilbury and Northfleet experiments. The position of maximum ground-level concentration (x_m) is found to be relatively insensitive to σ_y, and Smith's model for σ_z is found to predict x_m on average to within 50% for plume heights less than 200-400 m (dependent on atmospheric stability). Several models for σ_y are examined by comparing predicted and observed values for the normalised maximum ground-level concentration (X_m), and a modified form of Moore's model for σ_y is found to give the best overall fit, on average to within 30%. Gifford's release-duration-dependent model for σ_y is found to consistently underestimate X_m by 35-45%. This comparison is only a partial validation of the models described above and suggestions are made as to where further work is required. (author)
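    The location x_m of the maximum ground-level concentration can be found by a simple grid search over the centerline Gaussian plume formula; the power-law dispersion coefficients below are hypothetical, standing in for Smith's scheme:

```python
import numpy as np

# Centerline ground-level concentration for an elevated release, with
# hypothetical power-law dispersion: sigma_y = 0.22 x^0.9, sigma_z = 0.20 x^0.8
Q, u, H = 1.0, 5.0, 100.0                  # source [g/s], wind [m/s], height [m]
x = np.logspace(2, 5, 4000)                # downwind distances, 0.1-100 km
sigma_y = 0.22 * x**0.9
sigma_z = 0.20 * x**0.8
C = Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H**2 / (2 * sigma_z**2))

x_m = x[np.argmax(C)]                      # distance of the ground-level maximum
print(x_m)
```

For power laws σ_y ∝ x^p and σ_z ∝ x^q the maximum falls where σ_z = H·√(q/(p+q)), which reduces to the familiar σ_z = H/√2 rule when σ_y ∝ σ_z.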

  8. Vertical dispersion from surface and elevated releases: An investigation of a Non-Gaussian plume model

    International Nuclear Information System (INIS)

    Brown, M.J.; Arya, S.P.; Snyder, W.H.

    1993-01-01

    The vertical diffusion of a passive tracer released from surface and elevated sources in a neutrally stratified boundary layer has been studied by comparing field and laboratory experiments with a non-Gaussian K-theory model that assumes power-law profiles for the mean velocity and vertical eddy diffusivity. Several important differences between model predictions and experimental data were discovered: (1) the model overestimated ground-level concentrations from surface and elevated releases at distances beyond the peak concentration; (2) the model overpredicted vertical mixing near elevated sources, especially in the upward direction; (3) the model-predicted exponent α in the exponential vertical concentration profile for a surface release [C̄(z) ∝ exp(−z^α)] was smaller than the experimentally measured exponent. Model closure assumptions and experimental shortcomings are discussed in relation to their probable effect on model predictions and experimental measurements. 42 refs., 13 figs., 3 tabs
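    For the power-law profiles assumed by the model, the classical surface-release solution of the diffusion equation makes the stretched-exponential profile explicit. A hedged sketch of the standard result, with a, b, p, n as the (assumed) power-law coefficients:

```latex
u(z) = a z^{p}, \qquad K_z(z) = b z^{n}, \qquad \alpha = 2 + p - n,
\qquad
\bar{C}(x,z) \;\propto\; x^{-(p+1)/\alpha}\,
\exp\!\left(-\frac{a\, z^{\alpha}}{\alpha^{2}\, b\, x}\right).
```

The Gaussian shape is recovered only when α = 2 (i.e. p = n); the discrepancy between model-predicted and measured α is precisely the non-Gaussian behaviour the study investigates.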

  9. Dispersion under low wind speed conditions using Gaussian Plume approach

    International Nuclear Information System (INIS)

    Rakesh, P.T.; Srinivas, C.V.; Baskaran, R.; Venkatesan, R.; Venkatraman, B.

    2018-01-01

    For radioactive dose computation due to atmospheric releases, dispersion models are an essential requirement. For this purpose, the Gaussian plume model (GPM) is used in the short range and advanced particle dispersion models are used at all ranges. In dispersion models, the most influential parameter determining the fate of the pollutant, other than wind speed, is the turbulence diffusivity. In the GPM the diffusivity is represented using an empirical approach. Studies show that under low wind speed conditions, the existing diffusivity relationships are not adequate for estimating the diffusion. An important phenomenon that occurs at low wind speeds is meandering motion. It is found that under meandering motions the extent of plume dispersion is greater than the value estimated using conventional GPM and particle transport models. In this work a set of new turbulence parameters for the horizontal diffusion of the plume is suggested; using them in the GPM, the plume is simulated and compared against observations available from the Hanford tracer release experiment.

  10. Gaussian Plume Model Parameters for Ground-Level and Elevated Sources Derived from the Atmospheric Diffusion Equation in the Neutral and Stable Conditions

    International Nuclear Information System (INIS)

    Essa, K.S.M.

    2009-01-01

    The analytical solution of the atmospheric diffusion equation for a point source gives the ground-level concentration profiles. It depends on the wind speed u and the vertical dispersion coefficient σ_z expressed by Pasquill power laws. Both σ_z and u are functions of downwind distance, stability and source elevation, while for ground-level emission u is constant. In neutral and stable conditions, the Gaussian plume model and a finite-difference numerical method, with the wind speed following a power law and the vertical dispersion coefficient an exponential law, are evaluated. This work shows that the estimated ground-level concentrations of the Gaussian model for a high-level source and of the numerical finite-difference method fit the observed ground-level concentrations of the Gaussian model very well.

  11. Simple approximation for estimating centerline gamma absorbed dose rates due to a continuous Gaussian plume

    International Nuclear Information System (INIS)

    Overcamp, T.J.; Fjeld, R.A.

    1987-01-01

    A simple approximation for estimating the centerline gamma absorbed dose rates due to a continuous Gaussian plume was developed. To simplify the integration of the dose integral, this approach makes use of the Gaussian cloud concentration distribution. The solution is expressed in terms of the I1 and I2 integrals which were developed for estimating long-term dose due to a sector-averaged Gaussian plume. Estimates of tissue absorbed dose rates for the new approach and for the uniform cloud model were compared to numerical integration of the dose integral over a Gaussian plume distribution
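    The quantity being approximated is the standard cloud gamma dose-rate integral; a hedged sketch in conventional notation (χ the Gaussian plume concentration field, B the buildup factor, μ the linear attenuation coefficient of air, μ_en/ρ the mass energy-absorption coefficient of tissue, k a unit-conversion constant):

```latex
\dot{D}(\mathbf{r}_0) \;=\; k \, E_\gamma \, \frac{\mu_{en}}{\rho}
\int_V \chi(\mathbf{r}) \,
\frac{B(\mu R)\, e^{-\mu R}}{4 \pi R^{2}} \, dV ,
\qquad R = \lvert \mathbf{r} - \mathbf{r}_0 \rvert .
```

The I1 and I2 integrals mentioned in the abstract arise when χ is sector-averaged and this volume integral is reduced to closed form.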

  12. Gaussian plume model for SO₂ in a thermoelectric power plant

    Energy Technology Data Exchange (ETDEWEB)

    Reyes L, C; Munoz Ledo, C R [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1993-12-31

    The Gaussian plume model is an analytical tool for simulating the dispersion of SO₂ concentrations at ground level as a function of emission changes at the point sources, as well as the dispersion of the pollutant across the wind rose, when the necessary parameters are supplied. The model was implemented on a personal computer and produces its results in text form.

  14. Comparison of in situ observations of air traffic emission signatures in the North Atlantic flight corridor with simulations using a Gaussian plume model

    Energy Technology Data Exchange (ETDEWEB)

    Konopka, P; Schlager, H; Schulte, P; Schumann, U; Ziereis, H [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Oberpfaffenhofen (Germany). Inst. fuer Physik der Atmosphaere; Hagen, D; Whitefield, P [Missouri Univ., Rolla, MO (United States). Lab. for Cloud and Aerosol Science

    1998-12-31

    Focussed aircraft measurements including NO, NO₂, O₃, and aerosols (CN) have been carried out over the Eastern North Atlantic as part of the POLINAT (Pollution from Aircraft Emissions in the North Atlantic Flight Corridor) project to search for small- and large-scale signals of air traffic emissions in the corridor region. Here, the experimental data measured at cruising altitudes on November 6, 1994, close to peak traffic hours, are considered. Observed peak concentrations in small-scale NOₓ spikes exceed the background level of about 50 pptv by up to two orders of magnitude. The measured NOₓ concentration field is compared with simulations obtained with a plume dispersion model using collected air traffic data and wind measurements. Additionally, the measured and calculated NO/NOₓ ratios are considered. The comparison with the model shows that the observed (multiple) peaks can be understood as a superposition of several aircraft plumes with ages up to 3 hours. (author) 12 refs.

  16. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
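    The linear-combination-of-Gaussians idea can be sketched with a minimal one-dimensional EM fit (two components for brevity, where the paper uses three; the synthetic samples below are hypothetical, not real RR intervals):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic interval-like samples drawn from two Gaussians (hypothetical params)
x = np.concatenate([rng.normal(0.80, 0.03, 3000),
                    rng.normal(0.95, 0.05, 2000)])

# Minimal EM fit of a two-component 1-D Gaussian mixture
w  = np.array([0.5, 0.5])                  # mixture weights
mu = np.array([0.70, 1.00])                # initial means
sd = np.array([0.10, 0.10])                # initial standard deviations
for _ in range(200):
    # E-step: posterior responsibility of each component for each sample
    p = w * np.exp(-(x[:, None] - mu)**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations
    nk = r.sum(axis=0)
    w  = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk)
print(np.sort(mu), w)
```

The fitted weights, means and widths then summarize the stationary statistics, which is the interpretation the paper gives the HRV mixture components.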

  17. ALOFT-PC a smoke plume trajectory model for personal computers

    International Nuclear Information System (INIS)

    Walton, W.D.; McGrattan, K.B.; Mullin, J.V.

    1996-01-01

    A computer model, named ALOFT-PC, was developed to predict smoke plume trajectory during in-situ burning of oil spills. The downwind distribution of smoke particulate is a complex function of fire parameters, meteorological conditions, and topographic features. Experimental burns have shown that the downwind distribution of smoke is not Gaussian and that simple smoke plume models do not capture the observed plume features. ALOFT-PC solves the Navier-Stokes equations with an eddy-viscosity closure over a uniform grid that spans the smoke plume and its surroundings. The model inputs are wind speed and variability, atmospheric temperature profile, and fire parameters; the output is the average structure of the plume. 7 refs., 3 tabs

  18. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  19. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics in choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR).

  20. Kinetic electron model for plasma thruster plumes

    Science.gov (United States)

    Merino, Mario; Mauriño, Javier; Ahedo, Eduardo

    2018-03-01

    A paraxial model of an unmagnetized, collisionless plasma plume expanding into vacuum is presented. Electrons are treated kinetically, relying on the adiabatic invariance of their radial action integral for the integration of Vlasov's equation, whereas ions are treated as a cold species. The quasi-2D plasma density, self-consistent electric potential, and electron pressure, temperature, and heat fluxes are analyzed. In particular, the model yields the collisionless cooling of electrons, which differs from the Boltzmann relation and the simple polytropic laws usually employed in fluid and hybrid PIC/fluid plume codes.

  1. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.

  2. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    Science.gov (United States)

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory model for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σ_z), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack-height uncertainty. The precision of the frequency data appears to matter most when calculations are made for near-field receptors, and its importance increases as stack height increases.
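    For each direction/speed/stability bin of the joint frequency distribution, the long-term calculation reduces to the standard sector-averaged relative concentration; a sketch with all numbers hypothetical:

```python
import numpy as np

def sector_avg_chi_over_Q(f, u, sigma_z, x, H, dtheta=np.radians(22.5)):
    """Sector-averaged relative concentration [s/m^3] for one joint-frequency
    bin: f = frequency weight of this direction/speed/stability class,
    u = wind speed [m/s], sigma_z = vertical dispersion at x [m],
    x = downwind distance [m], H = effective stack height [m],
    dtheta = sector width [rad] (16 sectors of 22.5 deg assumed)."""
    return (f * np.sqrt(2.0 / np.pi)
            / (u * sigma_z * x * dtheta)
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# One illustrative bin (all numbers hypothetical):
print(sector_avg_chi_over_Q(f=0.05, u=4.0, sigma_z=40.0, x=1000.0, H=50.0))
```

The annual-averaged value is the sum of this quantity over all bins of the joint frequency distribution; the sensitivity study then perturbs u, σ_z and H within their uncertainty ranges.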

  3. A hybrid plume model for local-scale dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.

    1997-12-31

    The report describes the contribution of the Finnish Meteorological Institute to the project "Dispersion from Strongly Buoyant Sources", under the "Environment" programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. In this study only the "passive plume" regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient-transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. A new model for the vertical eddy diffusivity (K_z), which is a continuous function of height in the various atmospheric scaling regions, is also presented. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE. The numerical deviations of the model predictions from these analytic solutions were less than two percent for the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.

  4. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
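    As a point of contrast, the naive two-step alternative that direct methods like KLIEP are designed to avoid (fit each density separately, then divide) can be sketched in one dimension; all sample parameters below are hypothetical, and this is not GM-KLIEP itself, which models the ratio directly:

```python
import numpy as np

rng = np.random.default_rng(2)
x_nu = rng.normal(0.5, 1.0, 5000)   # numerator samples   (hypothetical)
x_de = rng.normal(0.0, 1.0, 5000)   # denominator samples (hypothetical)

def gauss_pdf(x, m, s):
    return np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# Two-step baseline: separate moment-fit density estimates, then the ratio.
# GM-KLIEP instead fits a Gaussian mixture to the ratio itself, so the
# denominator density is never estimated explicitly.
m_nu, s_nu = x_nu.mean(), x_nu.std()
m_de, s_de = x_de.mean(), x_de.std()
w = lambda x: gauss_pdf(x, m_nu, s_nu) / gauss_pdf(x, m_de, s_de)
print(w(0.0), w(1.0))   # importance is larger where p_nu exceeds p_de
```

Errors in the two separate density fits multiply in the ratio, which is the usual argument for direct importance estimation.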

  5. Numerical model simulation of atmospheric coolant plumes

    International Nuclear Information System (INIS)

    Gaillard, P.

    1980-01-01

    The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by introducing turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by applying an explicit time-marching algorithm to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Windstill conditions are simulated by a program processing the elliptic equations in an axisymmetric coordinate system. The calculated visible plumes are compared with plumes observed on site in order to validate the models. [fr]

  6. Ship plume modeling in EOSTAR

    NARCIS (Netherlands)

    Iersel, M. van; Mack, A.; Degach, M.A.C.; Eijk, A.M.J. van

    2014-01-01

    The EOSTAR model aims at assessing the performance of electro-optical (EO) sensors deployed in a maritime surface scenario, by providing operational performance measures (such as detection ranges) and synthetic images. The target library of EOSTAR includes larger surface vessels, for which the

  7. Modeling contaminant plumes in fractured limestone aquifers

    DEFF Research Database (Denmark)

    Mosthaf, Klaus; Brauns, Bentje; Fjordbøge, Annika Sidelmann

    Determining the fate and transport of contaminant plumes from contaminated sites in limestone aquifers is important because they are a major drinking water resource. This is challenging because they are often heavily fractured and contain chert layers and nodules, resulting in complex transport behavior. Improved conceptual models are needed for this type of site. Here conceptual models are developed by combining numerical models with field data. Several types of fracture flow and transport models are available for the modeling of contaminant transport in fractured media. These include the established approaches of the equivalent porous medium, discrete fracture and dual continuum models. However, these modeling concepts are not well tested for contaminant plume migration in limestone geologies. Our goal was to develop and evaluate approaches for modeling the transport of dissolved contaminants. The paper concludes with recommendations on how to identify and employ suitable models to advance the conceptual understanding and as decision support tools for risk assessment and the planning of remedial actions.

  8. Cooling tower plume - model and experiment

    Science.gov (United States)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper discusses a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model are to be validated in subsequent work.

  9. Cooling tower plume - model and experiment

    Directory of Open Access Journals (Sweden)

    Cizek Jan

    2017-01-01

    Full Text Available The paper describes a simple model of the so-called steam plume, which often forms during the operation of the evaporative cooling systems of power plants or other large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in a free jet stream. The paper concludes with a simple experiment through which the results of the designed model will be validated in a subsequent period.

  10. Liquid Booster Module (LBM) plume flowfield model

    Science.gov (United States)

    Smith, S. D.

    1981-01-01

    A complete definition of the LBM plume is important for many Shuttle design criteria. The exhaust plume shape has a significant effect on the vehicle base pressure. The LBM definition is also important to Shuttle base heating, aerodynamics, and the influence of the exhaust plume on the launch stand and environment. For these reasons, a knowledge of the LBM plume characteristics is necessary. A definition of the sea level LBM plume, as well as the plume at several points along the Shuttle trajectory to LBM burnout, is presented.

  11. White Gaussian Noise - Models for Engineers

    Science.gov (United States)

    Jondral, Friedrich K.

    2018-04-01

    This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i.e., the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectral densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio, the most important quality measure used in communications, control, and measurement technology, is then carefully introduced. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels together with the optimum maximum likelihood (ML) detector is considered in a demonstrative and intuitive way.
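
    The generation step mentioned above can be sketched with the Box-Muller transform, which maps pairs of uniform random numbers to pairs of independent standard normal samples. This is a minimal illustration of one standard technique, not necessarily the method used in the paper:

```python
import math
import random

def box_muller(n, seed=None):
    """Generate n standard normal samples from uniform random numbers
    via the Box-Muller transform."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        u1, u2 = rng.random(), rng.random()
        # Guard against log(0): random() is in [0, 1), so resample if u1 == 0.
        if u1 == 0.0:
            continue
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        if len(out) < n:
            out.append(r * math.sin(2.0 * math.pi * u2))
    return out

samples = box_muller(100_000, seed=1)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

    For 100,000 samples the empirical mean and variance should be close to 0 and 1, which is a quick sanity check on any WGN generator.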

  12. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves comparable performance with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
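
    For readers unfamiliar with the regression step underlying such controllers, the batch (non-recursive) Gaussian process posterior with a squared-exponential kernel can be sketched as follows. The kernel choice, hyperparameters, and toy data are illustrative assumptions, not the model from the paper:

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, noise=1e-2):
    """Posterior mean and variance of GP regression with a
    squared-exponential kernel (batch form, for illustration only)."""
    def k(a, b):
        return sf**2 * np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * ell**2))
    K = k(X, X) + noise * np.eye(len(X))   # noisy training covariance
    Ks = k(Xs, X)                          # test/train cross-covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = sf**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)                              # toy training data
mean, var = gp_posterior(X, y, np.array([1.5]))
```

    The recursive variant proposed in the paper avoids re-solving this linear system with all accumulated data at every time step, which is what keeps the computational demands bounded.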

  13. Towards LES Models of Jets and Plumes

    Science.gov (United States)

    Webb, A. T.; Mansour, N. N.

    2000-01-01

    As pointed out by Rodi, standard integral solutions for jets and plumes developed for discharge into an infinite, quiescent ambient are difficult to extend to complex situations, particularly in the presence of boundaries such as the sea floor or ocean surface. In such cases the assumption of similarity breaks down and it is impossible to find a suitable entrainment coefficient. The models are also incapable of describing any but the most slowly varying unsteady motions. There is therefore a need for full time-dependent modeling of the flow field, for which there are three main approaches: (1) Reynolds averaged numerical simulation (RANS), (2) large eddy simulation (LES), and (3) direct numerical simulation (DNS). Rodi applied RANS modeling to both jets and plumes with considerable success, the test being a match with experimental data for time-averaged velocity and temperature profiles as well as turbulent kinetic energy and rms axial turbulent velocity fluctuations. This model still relies on empirical constants, some eleven in the case of the buoyant jet, and so would not be applicable to a partly laminar plume, may have limited use in the presence of boundaries, and would also be unsuitable if one is after details of the unsteady component of the flow (the turbulent eddies). At the other end of the scale, DNS modeling includes all motions down to the viscous scales. Boersma et al. have built such a model for the non-buoyant case which also compares well with measured data for mean and turbulent velocity components. The model demonstrates its versatility by application to a laminar flow case. As its name implies, DNS directly models the Navier-Stokes equations without recourse to subgrid modeling, so for flows with a broad spectrum of motions (high Re) the cost can be prohibitive - the number of required grid points scaling with Re^(9/4) and the number of time steps with Re^(3/4). 
The middle road is provided by LES whereby the Navier-Stokes equations are formally

  14. An integral model of plume rise from high explosive detonations

    International Nuclear Information System (INIS)

    Boughton, B.A.; De Laurentis, J.M.

    1987-01-01

    A numerical model has been developed which provides a complete description of the time evolution of both the physical and thermodynamic properties of the cloud formed when a high explosive is detonated. This simulation employs the integral technique. The model equations are derived by integrating the three-dimensional conservation equations of mass, momentum and energy over the plume cross section. Assumptions are made regarding (a) plume symmetry; (b) the shape of profiles of velocity, temperature, etc. across the plume; and (c) the methodology for simulating entrainment and the effects of the crossflow induced pressure drag force on the plume. With these assumptions, the integral equations can be reduced to a set of ordinary differential equations on the plume centerline variables. Only the macroscopic plume characteristics, e.g., plume radius, centerline height, temperature and density, are predicted; details of the plume intrastructure are ignored. The model explicitly takes into account existing meteorology and has been expanded to consider the alterations in plume behavior which occur when aqueous foam is used as a dispersal mitigating material. The simulation was tested by comparison with field measurements of cloud top height and diameter. Predictions were within 25% of field observations over a wide range of explosive yield and atmospheric stability

  15. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
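
    The kernel-smoothing construction can be illustrated in one dimension: convolving white noise with a Gaussian kernel yields a Gaussian random field whose covariance is again of Gaussian (squared-exponential) type. The grid spacing, kernel width, and normalization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise "basis" on a fine grid; a discretized moving average model.
n, dx = 2000, 0.05
noise = rng.standard_normal(n) * np.sqrt(dx)

# Gaussian smoothing kernel k(x) ~ exp(-x^2 / (2 ell^2)); convolving white
# noise with it produces a random field with Gaussian-type covariance.
ell = 0.5
xk = np.arange(-4.0 * ell, 4.0 * ell + dx, dx)
kernel = np.exp(-xk**2 / (2.0 * ell**2))

field = np.convolve(noise, kernel, mode="same")
```

    Swapping in a different kernel (e.g. the power kernel proposed in the paper) changes the covariance, and hence the roughness, of the resulting field.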

  16. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    Directory of Open Access Journals (Sweden)

    Georgios C Manikis

    Full Text Available The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC Weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC Weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
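
    As a toy illustration of this kind of comparison (not the study's actual fitting pipeline), one can generate a synthetic bi-exponential signal on the same b-values and fit the mono-exponential Gaussian (MG) model by log-linear least squares, then score the fit with RMSE and R². All signal parameters below are assumptions:

```python
import numpy as np

b = np.array([0.0, 25.0, 50.0, 100.0, 500.0, 1000.0, 2000.0])  # s/mm^2

# Synthetic bi-exponential (IVIM-like) signal: a fast pseudo-diffusion
# compartment plus a slow tissue compartment (parameters are assumptions).
S = 0.2 * np.exp(-b * 10e-3) + 0.8 * np.exp(-b * 1e-3)

# Mono-exponential Gaussian (MG) fit via log-linear least squares:
# ln S = ln S0 - b * ADC
slope, intercept = np.polyfit(b, np.log(S), 1)
adc = -slope
fit = np.exp(intercept + slope * b)

rmse = np.sqrt(np.mean((S - fit) ** 2))
r2 = 1.0 - np.sum((S - fit) ** 2) / np.sum((S - S.mean()) ** 2)
```

    Even on clearly bi-exponential data, the mono-exponential fit can score a high R², which is why the study uses complexity-penalizing criteria such as AIC and the F-ratio rather than goodness-of-fit alone.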

  17. Application Of Shared Gamma And Inverse-Gaussian Frailty Models ...

    African Journals Online (AJOL)

    Shared Gamma and Inverse-Gaussian Frailty models are used to analyze the survival times of patients who are clustered according to cancer/tumor types under Parametric Proportional Hazard framework. The result of the ... However, no evidence is strong enough for preference of either Gamma or Inverse Gaussian Frailty.

  18. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
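
    The cdf / inverse-cdf construction can be sketched as follows: a Gaussian AR(1) series supplies the internal dynamics, and mapping it through the standard normal cdf followed by the inverse cdf of a target marginal yields a stationary non-Gaussian series. The exponential(1) marginal below is an arbitrary illustrative choice:

```python
import math
import numpy as np

def norm_cdf(z):
    """Standard normal cdf, elementwise."""
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

rng = np.random.default_rng(42)
n, rho = 5000, 0.8

# Gaussian AR(1) core supplies the internal dynamics of the series.
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + math.sqrt(1.0 - rho**2) * rng.standard_normal()

# cdf / inverse-cdf transform imposes an exponential(1) marginal.
u = norm_cdf(z)
x = -np.log(1.0 - u)
```

    The resulting series x inherits the serial dependence of z while its marginal distribution is exponential, which is the essence of the Gaussian copula transformed autoregressive model.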

  19. Computer Simulation for Dispersion of Air Pollution Released from a Line Source According to Gaussian Model

    International Nuclear Information System (INIS)

    Emad, A.A.; El Shazly, S.M.; Kassem, Kh.O.

    2010-01-01

    A line source model, developed in the laboratory of environmental physics, Faculty of Science at Qena, Egypt, is proposed to describe the downwind dispersion of pollutants near roadways at different cities in Egypt. The model is based on the Gaussian plume methodology and is used to predict air pollutant concentrations near roadways. In this direction, simple software, developed by the authors, is presented in this paper; it adopts a Graphical User Interface (GUI) technique for operating on various Windows-based microcomputers. The software interface and code have been designed with Microsoft Visual Basic 6.0 based on the Gaussian diffusion equation. This software is developed to predict concentrations of gaseous pollutants (e.g. CO, SO2, NO2 and particulates) at a user-specified receptor grid
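
    The Gaussian diffusion equation underlying such software can be sketched for a continuous point source with ground reflection via an image source. The emission rate, wind speed, and dispersion coefficients below are illustrative placeholders, not values from the Qena model:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration for a continuous point
    source of strength Q (g/s), wind speed u (m/s), effective release
    height H (m), with ground reflection via an image source."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration with illustrative dispersion
# coefficients (in practice sigma_y and sigma_z depend on downwind
# distance and atmospheric stability class).
c = gaussian_plume(Q=10.0, u=4.0, y=0.0, z=0.0, H=50.0,
                   sigma_y=80.0, sigma_z=40.0)
```

    A line source is then typically treated as a superposition (integral) of such point sources along the roadway.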

  20. Numerical Speadsheet Modeling of Natural Attenuation for Groundwater Contaminant Plumes

    National Research Council Canada - National Science Library

    Twesme, Troy

    1999-01-01

    .... The model was used to evaluate natural attenuation for removal of a trichloroethylene (TCE) plume from a surficial aquifer containing three regions with distinctly different processes for degradation of TCE...

  1. EM Modelling of RF Propagation Through Plasma Plumes

    Science.gov (United States)

    Pandolfo, L.; Bandinelli, M.; Araque Quijano, J. L.; Vecchi, G.; Pawlak, H.; Marliani, F.

    2012-05-01

    Electric propulsion is a commercially attractive solution for attitude and position control of geostationary satellites. Hall-effect ion thrusters generate a localized plasma flow in the surrounding of the satellite, whose impact on the communication system needs to be qualitatively and quantitatively assessed. An electromagnetic modelling tool has been developed and integrated into the Antenna Design Framework - ElectroMagnetic Satellite (ADF-EMS). The system is able to guide the user from the plume definition phases through plume installation and simulation. A validation activity has been carried out and the system has been applied to the plume modulation analysis of the SGEO/Hispasat mission.

  2. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    Science.gov (United States)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous media is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. 
Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than

  3. Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke

    Science.gov (United States)

    G L Achtemeier; S L Goodrick; Y Liu; F Garcia-Menendez; Y Hu; M. Odman

    2011-01-01

    We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure including multiple-core updrafts which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric...

  4. Revisiting non-Gaussianity from non-attractor inflation models

    Science.gov (United States)

    Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei

    2018-05-01

    Non-attractor inflation is known as the only single field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instant and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition—such as in the case of smooth transition or some sharp transition scenarios—the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition where the super-horizon modes freeze immediately right after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.

  5. Integrating wildfire plume rises within atmospheric transport models

    Science.gov (United States)

    Mallia, D. V.; Kochanski, A.; Wu, D.; Urbanski, S. P.; Krueger, S. K.; Lin, J. C.

    2016-12-01

    Wildfires can generate significant pyro-convection that is responsible for releasing pollutants, greenhouse gases, and trace species into the free troposphere, which are then transported a significant distance downwind from the fire. Oftentimes, atmospheric transport and chemistry models have a difficult time resolving the transport of smoke from these wildfires, primarily due to deficiencies in estimating the plume injection height, which has been highlighted in previous work as the most important aspect of simulating wildfire plume transport. As a result of the uncertainties associated with modeled wildfire plume rise, researchers face difficulties modeling the impacts of wildfire smoke on air quality and constraining fire emissions using inverse modeling techniques. Currently, several plume rise parameterizations exist that are able to determine the injection height of fire emissions; however, the success of these parameterizations has been mixed. With the advent of WRF-SFIRE, the wildfire plume rise and injection height can now be explicitly calculated using a fire spread model (SFIRE) that is dynamically linked with the atmosphere simulated by WRF. However, this model has only been tested on a limited basis due to computational costs. Here, we will test the performance of WRF-SFIRE in addition to several commonly adopted plume parameterizations (Freitas, Sofiev, and Briggs) for the 2013 Patch Springs (Utah) and 2012 Baker Canyon (Washington) fires, for both of which observations of plume rise heights are available. These plume rise techniques will then be incorporated within a Lagrangian atmospheric transport model (STILT) in order to simulate CO and CO2 concentrations during NASA's CARVE Earth Science Airborne Program over Alaska during the summer of 2012. Initial model results showed that STILT model simulations were unable to reproduce enhanced CO concentrations produced by Alaskan fires observed during 2012. Near-surface concentrations were drastically

  6. Merits of a Scenario Approach in Dredge Plume Modelling

    DEFF Research Database (Denmark)

    Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob

    2011-01-01

    Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage when both dredging methodology and schedule are likely to be a guess at best as the dredging...... contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate...... uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...

  7. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  8. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.; Myrvoll-Nilsen, Eirik; Rue, Haavard

    2017-01-01

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood

  9. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  10. An unconventional adaptation of a classical Gaussian plume dispersion scheme for the fast assessment of external irradiation from a radioactive cloud

    Science.gov (United States)

    Pecha, Petr; Pechova, Emilie

    2014-06-01

    This article focuses on derivation of an effective algorithm for the fast estimation of cloudshine doses/dose rates induced by a large mixture of radionuclides discharged into the atmosphere. A certain special modification of the classical Gaussian plume approach is proposed for approximation of the near-field dispersion problem. Specifically, the accidental radioactivity release is subdivided into consecutive one-hour Gaussian segments, each driven by a short-term meteorological forecast for the respective hours. Determination of the physical quantity of photon fluence rate from an ambient cloud irradiation is coupled to a special decomposition of the Gaussian plume shape into equivalent virtual elliptic disks. It facilitates solution of the formerly used time-consuming 3-D integration and provides advantages with regard to acceleration of the computational process on a local scale. An optimal choice of integration limit is adopted on the basis of the mean free path of γ-photons in the air. An efficient approach is introduced for treatment of a wide range of the energetic spectrum of the emitted photons, where the usual multi-nuclide approach is replaced by a new multi-group scheme. The algorithm is capable of generating the radiological responses in a large net of spatial nodes. This predetermines the proposed procedure as a proper tool for online data assimilation analysis in the near-field areas. A specific technique for numerical integration is verified on the basis of comparison with a partial analytical solution. Convergence of the finite cloud approximation to the tabulated semi-infinite cloud values for dose conversion factors was validated.

  11. Modelling thermal plume impacts - Kalpakkam approach

    International Nuclear Information System (INIS)

    Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.

    2002-01-01

    A good understanding of temperature patterns in the receiving waters is essential to know the heat dissipation from thermal plumes originating from coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site are determined and analysed. It is observed that the seasonal current reversal in the near shore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)

  12. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability

  13. Modelling and control of dynamic systems using gaussian process models

    CERN Document Server

    Kocijan, Juš

    2016-01-01

    This monograph opens up new horizons for engineers and researchers in academia and in industry dealing with or interested in new developments in the field of system identification and control. It emphasizes guidelines for working solutions and practical advice for their implementation rather than the theoretical background of Gaussian process (GP) models. The book demonstrates the potential of this recent development in probabilistic machine-learning methods and gives the reader an intuitive understanding of the topic. The current state of the art is treated along with possible future directions for research. Systems control design relies on mathematical models and these may be developed from measurement data. This process of system identification, when based on GP models, can play an integral part of control design in data-based control and its description as such is an essential aspect of the text. The background of GP regression is introduced first with system identification and incorporation of prior know...

  14. Modeling of the near field plume of a Hall thruster

    International Nuclear Information System (INIS)

    Boyd, Iain D.; Yim, John T.

    2004-01-01

    In this study, a detailed numerical model is developed to simulate the xenon plasma near-field plume from a Hall thruster. The model uses a detailed fluid model to describe the electrons, and a particle-based kinetic approach is used to model the heavy xenon ions and atoms. The detailed model is applied to compute the near field plume of a small, 200 W Hall thruster. Results from the detailed model are compared with the standard modeling approach that employs the Boltzmann model. The usefulness of the detailed model is assessed through direct comparisons with a number of different measured data sets. The comparisons illustrate that the detailed model accurately predicts a number of features of the measured data not captured by the simpler Boltzmann approach

  15. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth movers distance and the normalized L2 distance for this application. Although...
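
    Because the Kullback-Leibler distance between two Gaussian mixture models has no closed form, it is commonly estimated by Monte Carlo sampling. The following one-dimensional sketch uses arbitrary illustrative mixture parameters, not MFCC models from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_pdf(x, weights, means, sds):
    """Density of a one-dimensional Gaussian mixture at points x."""
    x = np.asarray(x)[:, None]
    comp = np.exp(-(x - means)**2 / (2.0 * sds**2)) / (sds * np.sqrt(2.0 * np.pi))
    return comp @ weights

def gmm_sample(n, weights, means, sds):
    """Draw n samples from the mixture by picking components, then sampling."""
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[idx], sds[idx])

def kl_mc(p, q, n=200_000):
    """Monte Carlo estimate of KL(p || q) between two Gaussian mixtures."""
    xs = gmm_sample(n, *p)
    return float(np.mean(np.log(gmm_pdf(xs, *p) / gmm_pdf(xs, *q))))

p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
q = (np.array([0.3, 0.7]), np.array([-1.0, 1.5]), np.array([0.6, 0.6]))
kl_pq = kl_mc(p, q)
```

    The earth mover's distance and the normalized L2 distance compared in the paper trade some of this sampling cost for either a transport computation or a closed-form expression.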

  16. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...
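    The inverse Gaussian (Wald) density at the heart of such a model is directly available in NumPy; a minimal sanity-check sketch with hypothetical parameter values (the Gibbs sampler itself depends on the full model specification and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Inverse Gaussian IG(mu, lam): mean mu, variance mu**3 / lam
mu, lam = 2.0, 8.0
x = rng.wald(mu, lam, size=200_000)

sample_mean = x.mean()   # should be close to mu = 2.0
sample_var = x.var()     # should be close to mu**3 / lam = 1.0
```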

  17. PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments

    Science.gov (United States)

    de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.

    2015-08-01

    In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the
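    The moments transported by a method-of-moments formulation can be illustrated on a small discrete grain-size distribution; in this sketch the diameters and mass fractions are hypothetical:

```python
import numpy as np

# Hypothetical grain-size sample: diameters in mm and their mass fractions
d_mm = np.array([0.004, 0.016, 0.063, 0.25, 1.0, 4.0])
mass = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])

phi = -np.log2(d_mm)          # Krumbein log scale: phi = -log2(d / 1 mm)

m1 = np.sum(mass * phi)       # first moment: mean grain size (phi units)
m2 = np.sum(mass * phi**2)    # second raw moment
sigma = np.sqrt(m2 - m1**2)   # sorting (standard deviation in phi units)
```

    PLUME-MoM evolves quantities like `m1` and `m2` along the plume instead of tracking many discrete particle classes.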

  18. Subsurface oil release field experiment - observations and modelling of subsurface plume behaviour

    International Nuclear Information System (INIS)

    Rye, H.; Brandvik, P.J.; Reed, M.

    1996-01-01

    An experiment was conducted at sea, in which oil was released from 107 metres depth, in order to study plume behaviour. The objective of the underwater release was to simulate a pipeline leakage without gas and high pressure and to study the behaviour of the rising plume. A numerical model for the underwater plume behaviour was used for comparison with field data. The expected path of the plume, the time expected for the plume to reach the sea surface and the width of the plume were modelled. Field data and the numerical model were in good agreement. 10 refs., 2 tabs., 9 figs

  19. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    Science.gov (United States)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions for handling issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods that reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step from the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating
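    The representation underlying Gaussian sum filters, a weighted sum of Gaussian densities standing in for a non-Gaussian pdf, can be sketched as follows (the component parameters are hypothetical):

```python
import numpy as np
from scipy.stats import norm

# Weighted sum of Gaussians approximating a skewed, non-Gaussian pdf
weights = np.array([0.5, 0.3, 0.2])
means = np.array([0.0, 1.5, 3.0])
stds = np.array([0.8, 1.0, 1.5])

def gaussian_sum_pdf(x):
    return sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, stds))

# The mixture is itself a valid pdf: the weights sum to 1,
# so the total probability mass integrates to 1
x = np.linspace(-10.0, 15.0, 20_001)
mass = gaussian_sum_pdf(x).sum() * (x[1] - x[0])
```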

  20. Dynamical reduction models with general gaussian noises

    International Nuclear Information System (INIS)

    Bassi, Angelo; Ghirardi, GianCarlo

    2002-02-01

    We consider the effect of replacing, in stochastic differential equations leading to the dynamical collapse of the state vector, white-noise stochastic processes with non-white ones. We prove that such a modification can be consistently performed without altering the most interesting features of the previous models. One of the reasons to discuss this matter derives from the desire of being allowed to deal with physical stochastic fields, such as the gravitational one, which cannot give rise to white noises. From our point of view, the most relevant motivation for the approach we propose here derives from the fact that in relativistic models the occurrence of white noises is chiefly responsible for the appearance of intractable divergences. Therefore, one can hope that by resorting to non-white noises one can overcome such a difficulty. We investigate stochastic equations with non-white noises, discuss their reduction properties and their physical implications. Our analysis has a precise interest not only for the above-mentioned subject but also for the general study of dissipative systems and decoherence. (author)

  1. Dynamical reduction models with general Gaussian noises

    International Nuclear Information System (INIS)

    Bassi, Angelo; Ghirardi, GianCarlo

    2002-01-01

    We consider the effect of replacing in stochastic differential equations leading to the dynamical collapse of the state vector, white-noise stochastic processes with nonwhite ones. We prove that such a modification can be consistently performed without altering the most interesting features of the previous models. One of the reasons to discuss this matter derives from the desire of being allowed to deal with physical stochastic fields, such as the gravitational one, which cannot give rise to white noises. From our point of view, the most relevant motivation for the approach we propose here derives from the fact that in relativistic models intractable divergences appear as a consequence of the white nature of the noises. Therefore, one can hope that resorting to nonwhite noises, one can overcome such a difficulty. We investigate stochastic equations with nonwhite noises, we discuss their reduction properties and their physical implications. Our analysis has a precise interest not only for the above-mentioned subject but also for the general study of dissipative systems and decoherence

  2. A numerical model for buoyant oil jets and smoke plumes

    International Nuclear Information System (INIS)

    Zheng, L.; Yapa, P. D.

    1997-01-01

    Development of a 3-D numerical model to simulate the behaviour of buoyant oil jets from underwater accidents and smoke plumes from oil burning was described. These jets/plumes can be oil-in-water, oil/gas mixture in water, gas in water, or gas in air. The ambient can have a 3-D flow structure and spatially/temporally varying flow conditions. The model is based on the Lagrangian integral technique. The model formulation of the oil jet includes the diffusion and dissolution of oil from the jet into the ambient environment. It is suitable for simulating well-blowout accidents that can occur in deep waters, such as those of the North Sea. The model has been thoroughly tested against a variety of data, including data from both laboratory and field experiments. In all cases the simulation results compared very well with the experimental data. 26 refs., 10 figs

  3. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long-memory properties, applied in various fields like econometrics, hydrology and climatology. The computational cost of fitting an fGn model of length $n$ using a likelihood-based approach is ${\mathcal O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to ${\mathcal O}(n^{3})$. This paper presents an approximate fGn model of ${\mathcal O}(n)$ computational cost, with either direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
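    The paper's central construction, matching the fGn autocorrelation function with a weighted sum of AR(1) autocorrelations, can be sketched directly. The four AR coefficients below are arbitrary placeholders (the paper fits them); the weights here are obtained by least squares:

```python
import numpy as np

def fgn_acf(k, H):
    # Autocorrelation of fractional Gaussian noise at integer lags k
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

H = 0.9
lags = np.arange(200)
target = fgn_acf(lags, H)

# Basis of AR(1) autocorrelations phi**k; weights fitted by least squares
phis = np.array([0.30, 0.75, 0.95, 0.995])
basis = phis[None, :] ** lags[:, None]
w, *_ = np.linalg.lstsq(basis, target, rcond=None)

max_err = np.max(np.abs(basis @ w - target))
```

    Even with these ad hoc coefficients the four-component fit tracks the slowly decaying acf closely over the first 200 lags.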

  4. Modelling the fate of the Tijuana River discharge plume

    Science.gov (United States)

    van Ormondt, M.; Terrill, E.; Hibler, L. F.; van Dongeren, A. R.

    2010-12-01

    After rainfall events, the Tijuana River discharges excess runoff into the ocean in a highly turbid plume. The runoff waters contain large suspended solids concentrations, as well as high levels of toxic contaminants, bacteria, and hepatitis and enteroviruses. Public health hazards posed by the effluent often result in beach closures for several kilometers northward along the U.S. shoreline. A Delft3D model has been set up to predict the fate of the Tijuana River plume. The model takes into account the effects of tides, wind, waves, salinity, and temperature stratification. Heat exchange with the atmosphere is also included. The model consists of a relatively coarse outer domain and a high-resolution surf zone domain that are coupled with Domain Decomposition. The offshore boundary conditions are obtained from the larger NCOM SoCal model (operated by the US Navy) that spans the entire Southern California Bight. A number of discharge events are investigated, in which model results are validated against a wide range of field measurements in the San Diego Bight. These include HF Radar surface currents, REMUS tracks, drifter deployments, satellite imagery, as well as current and temperature profile measurements at a number of locations. The model is able to reproduce the observed current and temperature patterns reasonably well. Under calm conditions, the model results suggest that the hydrodynamics in the San Diego Bight are largely governed by internal waves. During rainfall events, which are typically accompanied by strong winds and high waves, wind- and wave-driven currents become dominant. An analysis will be made of what conditions determine the trapping and mixing of the plume inside the surf zone and/or the propagation of the plume through the breakers and onto the coastal shelf. The model is now also running in operational mode. Three-day forecasts are made every 24 hours. This study was funded by the Office of Naval Research.

  5. On a Generalized Squared Gaussian Diffusion Model for Option Valuation

    Directory of Open Access Journals (Sweden)

    Edeki S.O.

    2017-01-01

    In financial mathematics, option pricing models are vital tools whose usefulness cannot be overemphasized. Modern approaches to the modelling of financial derivatives are therefore required in option pricing and valuation settings. In this paper, we derive, via the application of Itô's lemma, a pricing model referred to as the Generalized Squared Gaussian Diffusion Model (GSGDM) for option pricing and valuation. The same approach can be considered via Stratonovich stochastic dynamics. We also show that the classical Black-Scholes and the square-root constant elasticity of variance models are special cases of the GSGDM. In addition, a general solution of the GSGDM is obtained using the modified variational iteration method (MVIM).
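    The Black-Scholes special case mentioned in the abstract has the standard closed-form call price, which any generalization such as the GSGDM must recover in the appropriate limit; a routine sketch:

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, r, sigma, T):
    # Classical Black-Scholes European call price:
    # C = S N(d1) - K exp(-rT) N(d2)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

price = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

    The call price is increasing in the volatility parameter, a quick consistency check on any such implementation.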

  6. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  7. On the thermodynamic properties of the generalized Gaussian core model

    Directory of Open Access Journals (Sweden)

    B.M.Mladek

    2005-01-01

    We present results of a systematic investigation of the properties of the generalized Gaussian core model of index n. The potential of this system interpolates, via the index n, between the potential of the Gaussian core model and the penetrable sphere system, thereby varying the steepness of the repulsion. We have used both conventional and self-consistent liquid state theories to calculate the structural and thermodynamic properties of the system; reference data are provided by computer simulations. The results indicate that the concept of self-consistency becomes indispensable to guarantee excellent agreement with simulation data; in particular, structural consistency (in our approach taken into account via the zero separation theorem) is obviously a very important requirement. Simulation results for the dimensionless equation of state, β P / ρ, indicate that for an index value of 4 a clustering transition, possibly into a structurally ordered phase, might set in as the system is compressed.

  8. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
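    For readers unfamiliar with the base case, density estimation with a plain (noise-free) Gaussian mixture in scikit-learn, which XD extends to noisy and incomplete data, looks like this on hypothetical synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two-component synthetic data (a stand-in for survey measurements)
data = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(500, 2)),
    rng.normal([4.0, 4.0], 1.0, size=(500, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
labels = gmm.predict(data)   # hard component assignments
```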

  9. Modeling of Interactions of Ablated Plumes

    National Research Council Canada - National Science Library

    Povitsky, Alex

    2008-01-01

    Heat transfer modulation between the gas flow and the Thermal Protection Shield (TPS) that occurs because of ejection of under-expanded pyrolysis gases through the cracks in the TPS is studied by numerical modeling...

  10. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
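    A simple-kriging predictor, the mean-square-optimal linear predictor under a known-mean Gaussian model, can be sketched in a few lines; the covariance model and data values here are hypothetical:

```python
import numpy as np

def gauss_cov(d, sill=1.0, corr_range=3.0):
    # Gaussian covariance model, a common choice in kriging
    return sill * np.exp(-(d / corr_range)**2)

# Simple kriging with known zero mean (hypothetical depth residuals)
xs = np.array([0.0, 1.0, 2.5, 4.0])
zs = np.array([0.2, 0.5, -0.1, 0.3])

K = gauss_cov(np.abs(xs[:, None] - xs[None, :]))   # data covariance matrix
x0 = 1.8
k0 = gauss_cov(np.abs(xs - x0))                    # covariances to target

w = np.linalg.solve(K, k0)       # kriging weights
z_hat = w @ zs                   # best linear unbiased prediction at x0
var_hat = gauss_cov(0.0) - w @ k0  # kriging (prediction) variance
```

    The predictor interpolates the data exactly, and the kriging variance stays between zero and the sill for a valid covariance model.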

  11. A model of non-Gaussian diffusion in heterogeneous media

    Science.gov (United States)

    Lanoiselée, Yann; Grebenkov, Denis S.

    2018-04-01

    Recent progress in single-particle tracking has shown evidence of the non-Gaussian distribution of displacements in living cells, both near the cellular membrane and inside the cytoskeleton. Similar behavior has also been observed in granular materials, turbulent flows, gels and colloidal suspensions, suggesting that this is a general feature of diffusion in complex media. A possible interpretation of this phenomenon is that a tracer explores a medium with spatio-temporal fluctuations which result in local changes of diffusivity. We propose and investigate an ergodic, easily interpretable model, which implements the concept of diffusing diffusivity. Depending on the parameters, the distribution of displacements can be either flat or peaked at small displacements with an exponential tail at large displacements. We show that the distribution converges slowly to a Gaussian one. We calculate statistical properties, derive the asymptotic behavior and discuss some implications and extensions.
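    The short-time (superstatistical) limit of diffusing diffusivity, in which each tracer samples its own local diffusivity, already yields the exponential displacement tails described above; a minimal simulation with hypothetical parameters (an exponential diffusivity distribution gives a Laplace displacement pdf, kurtosis 6 instead of the Gaussian 3):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, D0 = 200_000, 1.0, 1.0

# Each tracer sees its own diffusivity, here exponentially distributed
D = rng.exponential(D0, size=n)
x = rng.normal(0.0, np.sqrt(2.0 * D * t))

kurtosis = np.mean(x**4) / np.mean(x**2)**2   # 3 for a Gaussian, 6 for Laplace
```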

  12. Evaluation of Gaussian approximations for data assimilation in reservoir models

    KAUST Repository

    Iglesias, Marco A.

    2013-07-14

    The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement.
Our
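    The reference characterization relies on MCMC; a toy random-walk Metropolis sampler on a deliberately non-Gaussian (bimodal) log-posterior, chosen purely for illustration, shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Hypothetical unnormalised log-posterior: equal mixture of N(2,1) and N(-2,1)
    return np.logaddexp(-0.5 * (theta - 2.0)**2, -0.5 * (theta + 2.0)**2)

# Random-walk Metropolis
theta = 0.0
lp = log_post(theta)
chain = []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 2.0)       # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)
```

    For this target the posterior mean is 0 and the variance is 5, which the chain recovers after burn-in; the Gaussian approximations discussed above would miss the bimodality entirely.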

  13. A mantle plume model for the Equatorial Highlands of Venus

    Science.gov (United States)

    Kiefer, Walter S.; Hager, Bradford H.

    1991-01-01

    The possibility that the Equatorial Highlands are the surface expressions of hot upwelling mantle plumes is considered via a series of mantle plume models developed using a cylindrical axisymmetric finite element code and depth-dependent Newtonian rheology. The results are scaled by assuming whole mantle convection and that Venus and the earth have similar mantle heat flows. The best model fits are for Beta and Atla. The common feature of the allowed viscosity models is that they lack a pronounced low-viscosity zone in the upper mantle. The shape of Venus's long-wavelength admittance spectrum and the slope of its geoid spectrum are also consistent with the lack of a low-viscosity zone. It is argued that the lack of an asthenosphere on Venus is due to the mantle of Venus being drier than the earth's mantle. Mantle plumes may also have contributed to the formation of some smaller highland swells, such as the Bell and Eistla regions and the Hathor/Innini/Ushas region.

  14. Graphical Gaussian models with edge and vertex symmetries

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Lauritzen, Steffen L

    2008-01-01

    We introduce new types of graphical Gaussian models by placing symmetry restrictions on the concentration or correlation matrix. The models can be represented by coloured graphs, where parameters that are associated with edges or vertices of the same colour are restricted to being identical. We study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions for restrictions on the concentration and correlation matrices being equivalent. This is, for example, the case when symmetries are generated by permutation...

  15. A forward model for the helium plume effect and the interpretation of helium charge exchange measurements at ASDEX Upgrade

    Science.gov (United States)

    Kappatou, A.; McDermott, R. M.; Pütterich, T.; Dux, R.; Geiger, B.; Jaspers, R. J. E.; Donné, A. J. H.; Viezzer, E.; Cavedon, M.; the ASDEX Upgrade Team

    2018-05-01

    The analysis of the charge exchange measurements of helium is hindered by an additional emission contributing to the spectra, the helium ‘plume’ emission (Fonck et al 1984 Phys. Rev. A 29 3288), which complicates the interpretation of the measurements. The plume emission is indistinguishable from the active charge exchange signal when standard analysis of the spectra is applied and its intensity is of comparable magnitude for ASDEX Upgrade conditions, leading to a significant overestimation of the He2+ densities if not properly treated. Furthermore, the spectral line shape of the plume emission is non-Gaussian and leads to wrong ion temperature and flow measurements when not taken into account. A kinetic model for the helium plume emission has been developed for ASDEX Upgrade. The model is benchmarked against experimental measurements and is shown to capture the underlying physics mechanisms of the plume effect, as it can reproduce the experimental spectra and provides consistent values for the ion temperature, plasma rotation, and He2+ density.

  16. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics

  17. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We will discuss inference: how to estimate the parameters and hyper-parameters, and how to predict as well. Furthermore, simulation studies will be explained in the last section.
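    A compact stand-in for this setting, Gaussian process classification with a logistic-type link on spatial coordinates, is available in scikit-learn; the data and kernel settings below are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic spatial binomial data: the class depends on location
X = rng.uniform(0.0, 10.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 10.0).astype(int)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0)).fit(X, y)
proba = gpc.predict_proba(np.array([[9.0, 9.0], [1.0, 1.0]]))
```

    Unlike purely neighbourhood-based spatial models, the fitted GP gives predicted class probabilities at arbitrary new locations.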

  18. Using satellite imagery for qualitative evaluation of plume transport in modeling the effects of the Kuwait oil fire smoke plumes

    International Nuclear Information System (INIS)

    Bass, A.; Janota, P.

    1992-01-01

    To forecast the behavior of the Kuwait oil fire smoke plumes and their possible acute or chronic health effects over the Arabian Gulf region, TASC created a comprehensive health and environmental impacts modeling system. A specially-adapted Lagrangian puff transport model was used to create (a) short-term (multiday) forecasts of plume transport and ground-level concentrations of soot and SO2; and (b) long-term (seasonal and longer) estimates of average surface concentrations and depositions. EPA-approved algorithms were used to transform exposures to SO2 and soot (as PAH/BaP) into morbidity, mortality and crop damage risks. Absent any ground truth, satellite imagery from the NOAA Polar Orbiter and the ESA Geostationary Meteosat offered the only opportunity for timely qualitative evaluation of the long-range plume transport and diffusion predictions. This paper shows the use of actual satellite images (including animated loops of hourly Meteosat images) to evaluate plume forecasts in near-real-time, and to sanity-check the meso- and long-range plume transport projections for the long-term estimates. Example modeled concentrations, depositions and health effects are shown

  19. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek; Rue, Haavard

    2017-01-01

    Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.

  20. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek

    2017-07-07

    Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.

  1. IR sensor design insight from missile-plume prediction models

    Science.gov (United States)

    Rapanotti, John L.; Gilbert, Bruno; Richer, Guy; Stowe, Robert

    2002-08-01

    Modern anti-tank missiles and the requirement of rapid deployment have significantly reduced the use of passive armour in protecting land vehicles. Vehicle survivability is becoming more dependent on sensors, computers and countermeasures to detect and avoid threats. An analysis of missile propellants suggests that missile detection based on plume characteristics alone may be more difficult than anticipated. Currently, the passive detection of missiles depends on signatures with a significant ultraviolet component. This approach is effective in detecting anti-aircraft missiles that rely on powerful motors to pursue high-speed aircraft. The high-temperature exhaust from these missiles contains significant levels of carbon dioxide, water and, often, metal oxides such as alumina. The plumes emit most strongly in the infrared, 1 to 5 micrometers, region, with a significant component of the signature extending into the ultraviolet domain. Many anti-tank missiles do not need the same level of propulsion and radiate significantly less. These low-velocity missiles, relying on the destructive force of a shaped-charge warhead, are more difficult to detect. There is virtually no ultraviolet component, and detection based on UV sensors is impractical. The transition in missile detection from UV to IR is reasonable, based on trends in imaging technology, but from the analysis presented in this paper even IR imagers may have difficulty in detecting missile plumes. This suggests that the emphasis should be placed on the detection of the missile hard body at the longer wavelengths of 8 to 12 micrometers. The analysis described in this paper is based on solution of the governing equations of plume physics and chemistry. These models will be used to develop better sensors and threat detection algorithms.

  2. Aquatic dispersion modelling of a tritium plume in Lake Ontario

    International Nuclear Information System (INIS)

    Klukas, M.H.; Moltyaner, G.L.

    1996-05-01

    Approximately 2900 kg of tritiated water, containing 2.3E+15 Bq of tritium, were released to Lake Ontario via the cooling water discharge when a leak developed in a moderator heat exchanger in Unit 1 at the Pickering Nuclear Generating Station (PNGS) on 1992 August 2. The release provided the opportunity to study the dispersion of a tritium plume in the coastal zone of Lake Ontario. Current direction over the two-week period following the release was predominantly parallel to the shore, and elevated tritium concentrations were observed up to 20 km east and 85 km west of the PNGS. Predictions of the tritium plume movement were made using current velocity measurements taken at 8-m depth, 2.5 km offshore from Darlington, and using an empirical relationship in which the alongshore current speed is assumed to be proportional to the alongshore component of the wind speed. The tritium migration was best described using the current velocity measurements. The tritium plume dispersion is modelled using the one-dimensional advection-dispersion equation. Transport parameters are the alongshore current speed and the longitudinal dispersion coefficient. Longitudinal dispersion coefficients, estimated by fitting the solution of the advection-dispersion equation to measured concentration-distance profiles, ranged from 3.75 to 10.57 m² s⁻¹. Simulations using the fitted values of the dispersion coefficient were able to describe maximum tritium concentrations measured at water supply plants located within 25 km of Pickering to within a factor of 3. The dispersion coefficient is a function of spatial and temporal variability in current velocity, and the fitted dispersion coefficients estimated here may not be suitable for predicting tritium plume dispersion under different current conditions. The sensitivity of the dispersion coefficient to variability in current conditions should be evaluated in further field experiments. (author). 13 refs., 7 tabs., 12 figs
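The record above fits the analytic solution of the one-dimensional advection-dispersion equation to measured concentration-distance profiles. A minimal sketch of that solution for an instantaneous release is given below; the cross-sectional area, current speed and dispersion coefficient are illustrative assumptions (the D value is merely picked inside the reported 3.75 to 10.57 m² s⁻¹ range), not fitted values from the study.

```python
import math

def tritium_concentration(x, t, mass_bq, area_m2, u, d_coef):
    """Analytic solution of the 1-D advection-dispersion equation for an
    instantaneous release of mass_bq Bq at x = 0, t = 0 into a channel of
    cross-section area_m2:
        C(x, t) = M / (A * sqrt(4*pi*D*t)) * exp(-(x - u*t)**2 / (4*D*t))
    u is the alongshore current speed (m/s) and d_coef the longitudinal
    dispersion coefficient (m^2/s)."""
    spread = math.sqrt(4.0 * math.pi * d_coef * t)
    return (mass_bq / (area_m2 * spread)
            * math.exp(-(x - u * t) ** 2 / (4.0 * d_coef * t)))

# The concentration maximum advects with the current: after one day at
# u = 0.05 m/s it sits near x = u*t = 4320 m from the outfall.
t = 86400.0
c_peak = tritium_concentration(0.05 * t, t, mass_bq=2.3e15,
                               area_m2=1.0e5, u=0.05, d_coef=5.0)
c_5km_off = tritium_concentration(0.05 * t + 5000.0, t, mass_bq=2.3e15,
                                  area_m2=1.0e5, u=0.05, d_coef=5.0)
```

Fitting D, as the authors do, amounts to adjusting d_coef until this profile matches the measured one at each sampling time.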

  3. Numerical modeling of plasma plume evolution against ambient background gas in laser blow off experiments

    International Nuclear Information System (INIS)

    Patel, Bhavesh G.; Das, Amita; Kaw, Predhiman; Singh, Rajesh; Kumar, Ajai

    2012-01-01

    Two dimensional numerical modelling based on simplified hydrodynamic evolution for an expanding plasma plume (created by laser blow off) against an ambient background gas has been carried out. A comparison with experimental observations shows that these simulations capture most features of the plasma plume expansion. The plume location and other gross features are reproduced as per the experimental observation in quantitative detail. The plume shape evolution and its dependence on the ambient background gas are in good qualitative agreement with the experiment. This suggests that a simplified hydrodynamic expansion model is adequate for the description of plasma plume expansion.

  4. Fault Tolerant Control Using Gaussian Processes and Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Yang Xiaoke

    2015-03-01

    Full Text Available Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.

  5. Gaussian free turbulence: structures and relaxation in plasma models

    International Nuclear Information System (INIS)

    Gruzinov, A.V.

    1993-01-01

    Free-turbulent relaxation in two-dimensional MHD, the degenerate Hasegawa-Mima equation and a two-dimensional microtearing model are studied. The Gibbs distributions of these three systems can be completely analyzed, due to the special structure of their invariants and due to the existence of ultraviolet catastrophe. The free-turbulent field is seen to be a sum of a certain coherent structure (statistical attractor) and Gaussian random noise. Two-dimensional current layers are shown to be statistical attractors in 2D MHD. (author)

  6. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.

  7. Comparison of results from dispersion models for regulatory purposes based on Gaussian-and Lagrangian-algorithms: an evaluating literature study

    International Nuclear Information System (INIS)

    Walter, H.

    2004-01-01

    Powerful tools to describe atmospheric transport processes for radiation protection can be provided by meteorology; these are atmospheric flow and dispersion models. Concerning dispersion models, Gaussian plume models have long been used to describe atmospheric dispersion processes. Advantages of Gaussian plume models are short computation time, good validation and broad acceptance worldwide. However, some limitations and their implications for the interpretation of model results have to be taken into account, as the mathematical derivation of an analytic solution of the equations of motion leads to severe constraints. In order to minimise these constraints, various dispersion models for scientific and regulatory purposes have been developed and applied. Among these, the Lagrangian particle models are of special interest, because these models are able to simulate atmospheric transport processes close to reality, e.g. the influence of orography, topography, wind shear and other meteorological phenomena. Within this study, the characteristics and computational results of Gaussian dispersion models as well as of Lagrangian models have been compared and evaluated on the basis of numerous papers and reports published in the literature. Special emphasis has been laid on the requirement that dispersion models comply with EU requests (Richtlinie 96/29/Euratom, 1996) for a more realistic assessment of the radiation exposure of the population. (orig.)
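The Gaussian plume model compared in this record has a simple closed form, which is the source of both its speed and its constraints. Below is a minimal sketch of the standard ground-level, full-reflection formulation; the source strength, wind speed, dispersion sigmas and stack height are illustrative assumptions, not values from any cited study.

```python
import math

def gaussian_plume_glc(q, u, y, sigma_y, sigma_z, h_eff):
    """Ground-level concentration of the standard Gaussian plume model
    with total ground reflection:
        C = Q / (pi * u * sy * sz) * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2))
    q: source strength (g/s), u: wind speed (m/s), y: crosswind offset (m),
    sigma_y/sigma_z: dispersion parameters at the downwind distance of
    interest (m), h_eff: effective release height (m)."""
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * math.exp(-h_eff ** 2 / (2.0 * sigma_z ** 2)))

# Illustrative numbers: 10 g/s source, 4 m/s wind, sigmas representative
# of neutral stability at some downwind distance, 50 m effective height.
c_axis = gaussian_plume_glc(q=10.0, u=4.0, y=0.0,
                            sigma_y=80.0, sigma_z=40.0, h_eff=50.0)
c_off = gaussian_plume_glc(q=10.0, u=4.0, y=200.0,
                           sigma_y=80.0, sigma_z=40.0, h_eff=50.0)
```

The constraints discussed in the record follow directly from this form: steady homogeneous wind, flat terrain, and sigmas taken from an empirical turbulence typing scheme, which Lagrangian particle models do not need to assume.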

  8. Out-of-equilibrium dynamics in a Gaussian trap model

    International Nuclear Information System (INIS)

    Diezemann, Gregor

    2007-01-01

    The violations of the fluctuation-dissipation theorem are analysed for a trap model with a Gaussian density of states. In this model, the system reaches thermal equilibrium for long times after a quench to any finite temperature, and therefore all ageing effects are of a transient nature. For times not too long after the quench, it is found that the so-called fluctuation-dissipation ratio tends to a non-trivial limit, thus indicating the possibility of defining a timescale-dependent effective temperature. However, different definitions of the effective temperature yield distinct results. In particular, plots of the integrated response versus the correlation function strongly depend on the way they are constructed. Also the definition of effective temperatures in the frequency domain is not unique for the model considered. This may have some implications for the interpretation of results from computer simulations and experimental determinations of effective temperatures

  9. The Gaussian streaming model and convolution Lagrangian effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  10. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
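The GGM edge construction described in this record reduces to computing partial correlations from the precision (inverse covariance) matrix. A minimal sketch, with a synthetic chain dependency standing in for climate anomaly fields:

```python
import numpy as np

def partial_correlations(data):
    """Edges of a Gaussian graphical model: partial correlations from the
    precision matrix P = inv(cov), via rho_ij = -P_ij / sqrt(P_ii * P_jj).
    data: (n_samples, n_nodes) array, e.g. climate anomaly time series."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Chain dependency x -> y -> z: Pearson correlation links x and z only
# indirectly; the GGM (partial-correlation) edge correctly vanishes.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = x + 0.5 * rng.standard_normal(5000)
z = y + 0.5 * rng.standard_normal(5000)
data = np.column_stack([x, y, z])
pearson_xz = np.corrcoef(data, rowvar=False)[0, 2]
partial_xz = partial_correlations(data)[0, 2]
```

This is exactly the distinction the abstract draws: the Pearson network would draw an x-z edge, the GGM network would not.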

  11. Fast uncertainty reduction strategies relying on Gaussian process models

    International Nuclear Information System (INIS)

    Chevalier, Clement

    2013-01-01

    This work deals with sequential and batch-sequential evaluation strategies of real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal Stepwise Uncertainty Reduction (SUR) strategies are investigated for two different problems, motivated by real test cases in nuclear safety. First we consider the problem of identifying the excursion set above a given threshold T of a real-valued function f. Then we study the question of finding the set of 'safe controlled configurations', i.e. the set of controlled inputs where the function remains below T, whatever the value of some other, non-controlled inputs. New SUR strategies are presented, together with efficient procedures and formulas to compute and use them in real-world applications. The use of fast formulas to quickly recalculate the posterior mean or covariance function of a Gaussian process (referred to as the 'kriging update formulas') not only provides substantial computational savings; it is also one of the key tools for deriving closed-form formulas that enable the practical use of computationally intensive sampling strategies. A contribution in batch-sequential optimization (with the multi-points Expected Improvement) is also presented. (author)

  12. A Gaussian graphical model approach to climate networks

    International Nuclear Information System (INIS)

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-01-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately

  13. Flowfield and Radiation Analysis of Missile Exhaust Plumes Using a Turbulent-Chemistry Interaction Model

    National Research Council Canada - National Science Library

    Calhoon, W. H; Kenzakowski, D. C

    2000-01-01

    ... components and missile defense systems. Current engineering level models neglect turbulent-chemistry interactions and typically underpredict the intensity of plume afterburning and afterburning burnout...

  14. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  15. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  16. Stochastic cluster algorithms for discrete Gaussian (SOS) models

    International Nuclear Information System (INIS)

    Evertz, H.G.; Hamburg Univ.; Hasenbusch, M.; Marcu, M.; Tel Aviv Univ.; Pinn, K.; Muenster Univ.; Solomon, S.

    1990-10-01

    We present new Monte Carlo cluster algorithms which eliminate critical slowing down in the simulation of solid-on-solid models. In this letter we focus on the two-dimensional discrete Gaussian model. The algorithms are based on reflecting the integer valued spin variables with respect to appropriately chosen reflection planes. The proper choice of the reflection plane turns out to be crucial in order to obtain a small dynamical exponent z. Actually, the successful versions of our algorithm are a mixture of two different procedures for choosing the reflection plane, one of them ergodic but slow, the other one non-ergodic and also slow when combined with a Metropolis algorithm. (orig.)

  17. Plume Characterization of a Laboratory Model 22 N GPIM Thruster via High-Frequency Raman Spectroscopy

    Science.gov (United States)

    Williams, George J.; Kojima, Jun J.; Arrington, Lynn A.; Deans, Matthew C.; Reed, Brian D.; Kinzbach, McKenzie I.; McLean, Christopher H.

    2015-01-01

    The Green Propellant Infusion Mission (GPIM) will demonstrate the capability of a green propulsion system, specifically, one using the monopropellant, AF-M315E. One of the risks identified for GPIM is potential contamination of sensitive areas of the spacecraft from the effluents in the plumes of AF-M315E thrusters. Plume characterization of a laboratory-model 22 N thruster via optical diagnostics was conducted at NASA GRC in a space-simulated environment. A high-frequency pulsed laser was coupled with an electron-multiplied ICCD camera to perform Raman spectroscopy in the near-field, low-pressure plume. The Raman data yielded plume constituents and temperatures over a range of thruster chamber pressures and as a function of thruster (catalyst) operating time. Schlieren images of the near-field plume enabled calculation of plume velocities and revealed general plume structure of the otherwise invisible plume. The measured velocities are compared to those predicted by a two-dimensional, kinetic model. Trends in data and numerical results are presented from catalyst mid-life to end-of-life. The results of this investigation were coupled with the Raman and Schlieren data to provide an anchor for plume impingement analysis presented in a companion paper. The results of both analyses will be used to improve understanding of the nature of AF-M315E plumes and their impacts to GPIM and other future missions.

  18. Sensitivity, applicability and validation of bi-gaussian off- and on-line models for the evaluation of the consequences of accidental releases in nuclear facilities

    International Nuclear Information System (INIS)

    Kretzschmar, J.G.; Mertens, I.; Vanderborght, B.

    1984-01-01

    A computer code CAERS (Computer Aided Emergency Response System) has been developed for the simulation of the short-term concentrations caused by an atmospheric emission. The concentration calculations are based on the bi-gaussian model, with the possibility of using twelve different sets of turbulence typing schemes and dispersion parameters; alternatively, the plume can be simulated with a bi-dimensional puff trajectory model with tri-gaussian diffusion of the puffs. With the puff trajectory model, the emission and the wind conditions can vary in time. Sixteen SF6 tracer dispersion experiments, with mobile as well as stationary time-averaged sampling, have been carried out for the validation of the on-line and off-line models of CAERS. The tracer experiments of this study have shown that the CAERS system, using the bi-gaussian model and the SCK/CEN turbulence typing scheme, can simulate short-time concentration levels very well. The variations of the plume under non-steady emission and meteorological conditions are well simulated by the puff trajectory model. This leads to the general conclusion that the atmospheric dispersion models of the CAERS system can make a significant contribution to the management and interpretation of air pollution concentration measurements in emergency situations

  19. GaussianCpG: a Gaussian model for detection of CpG island in human genome sequences.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Zelikovsky, Alexander; Pan, Yi

    2017-05-24

    As crucial markers in identifying biological elements and processes in mammalian genomes, CpG islands (CGI) play important roles in DNA methylation, gene regulation, epigenetic inheritance, gene mutation, chromosome inactivation and nucleosome retention. The generally accepted criteria for CGI rely on: (a) %G+C content is ≥ 50%, (b) the ratio of the observed CpG content to the expected CpG content is ≥ 0.6, and (c) the general length of a CGI is greater than 200 nucleotides. Most existing computational methods for the prediction of CpG islands are programmed on these rules. However, many experimentally verified CpG islands deviate from these artificial criteria. Experiments indicate that in many cases %G+C is human genome. We analyze the energy distribution over the genomic primary structure for each CpG site and adopt parameters from statistics of the human genome. The evaluation results show that the new model can predict CpG islands efficiently by balancing both sensitivity and specificity over known human CGI data sets. Compared with other models, GaussianCpG can achieve better performance in CGI detection. Our Gaussian model aims to simplify the complex interaction between nucleotides. The model is computed not by the linear statistical method but by the Gaussian energy distribution and accumulation. The parameters of the Gaussian function are not arbitrarily designated but deliberately chosen by optimizing the biological statistics. By using pseudopotential analysis on CpG islands, the novel model is validated on both real and artificial data sets.
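The rule-based criteria (a)-(c) quoted in this record, which GaussianCpG is positioned against, can be checked directly on a sequence. A minimal sketch (the expected-CpG formula C*G/N is the conventional one; it is not taken from the GaussianCpG paper itself):

```python
def cgi_criteria(seq, min_len=200):
    """Classic rule-based CpG-island test: (a) %G+C >= 50%,
    (b) observed/expected CpG >= 0.6 with expected = C*G/N,
    (c) length >= min_len nucleotides."""
    seq = seq.upper()
    n = len(seq)
    if n < min_len:
        return False
    gc = seq.count("G") + seq.count("C")
    obs_cpg = seq.count("CG")                     # CpG dinucleotides
    exp_cpg = seq.count("C") * seq.count("G") / n # conventional expectation
    return gc / n >= 0.5 and exp_cpg > 0 and obs_cpg / exp_cpg >= 0.6

island_like = cgi_criteria("CG" * 150)   # 300 nt of alternating C/G
at_rich = cgi_criteria("AT" * 150)       # 300 nt with no G+C content
```

The record's point is precisely that many experimentally verified islands fail this test, which motivates replacing the hard thresholds with a Gaussian energy model.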

  20. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.
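The "naive approach" the abstract contrasts against, splitting the data by the known group labels and fitting each group separately, can be sketched in a few lines. This is only the baseline; the record's proposed penalized joint estimators are not reproduced here.

```python
import numpy as np

def groupwise_regression(x, y, labels):
    """Naive baseline for multiple response regression with known group
    labels: split the samples by label and fit one least-squares
    coefficient matrix per group, ignoring structure shared across groups.
    x: (n, p) predictors, y: (n, q) responses, labels: (n,) group ids."""
    coefs = {}
    for g in np.unique(labels):
        mask = labels == g
        coefs[g], *_ = np.linalg.lstsq(x[mask], y[mask], rcond=None)
    return coefs

# Noiseless demo: two groups whose true coefficient matrices share the
# first column but differ in sign elsewhere (cf. common/unique structure).
rng = np.random.default_rng(1)
x = rng.standard_normal((200, 3))
b = {0: np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]]),
     1: np.array([[1.0, 0.0], [-2.0, 1.0], [0.0, -3.0]])}
labels = np.repeat([0, 1], 100)
y = np.vstack([x[:100] @ b[0], x[100:] @ b[1]])
est = groupwise_regression(x, y, labels)
```

With noisy data and limited samples per group, this splitting wastes the shared structure, which is exactly what the proposed joint penalized estimation exploits.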

  1. Field studies of submerged-diffuser thermal plumes with comparisons to predictive model results

    International Nuclear Information System (INIS)

    Frigo, A.A.; Paddock, R.A.; Ditmars, J.D.

    1976-01-01

    Thermal plumes from submerged discharges of cooling water from two power plants on Lake Michigan were studied. The system for the acquisition of water temperatures and ambient conditions permitted the three-dimensional structure of the plumes to be determined. The Zion Nuclear Power Station has two submerged discharge structures separated by only 94 m. Under conditions of flow from both structures, interaction between the two plumes resulted in larger thermal fields than would be predicted by the superposition of single non-interacting plumes. Maximum temperatures in the near-field region of the plume compared favorably with mathematical model predictions. A comparison of physical-model predictions for the plume at the D. C. Cook Nuclear Plant with prototype measurements indicated good agreement in the near-field region, but differences in the far-field occurred as similitude was not preserved there

  2. Surrogacy assessment using principal stratification and a Gaussian copula model.

    Science.gov (United States)

    Conlon, A.S.C.; Taylor, J.M.G.; Elliott, M.R.

    2017-02-01

    In clinical trials, a surrogate outcome (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Many methods of surrogacy validation rely on models for the conditional distribution of T given Z and S. However, S is a post-randomization variable, and unobserved, simultaneous predictors of S and T may exist, resulting in a non-causal interpretation. Frangakis and Rubin developed the concept of principal surrogacy, stratifying on the joint distribution of the surrogate marker under treatment and control to assess the association between the causal effects of treatment on the marker and the causal effects of treatment on the clinical outcome. Working within the principal surrogacy framework, we address the scenario of an ordinal categorical variable as a surrogate for a censored failure time true endpoint. A Gaussian copula model is used to model the joint distribution of the potential outcomes of T, given the potential outcomes of S. Because the proposed model cannot be fully identified from the data, we use a Bayesian estimation approach with prior distributions consistent with reasonable assumptions in the surrogacy assessment setting. The method is applied to data from a colorectal cancer clinical trial, previously analyzed by Burzykowski et al.

  3. The mantle-plume model, its feasibility and consequences

    NARCIS (Netherlands)

    Calsteren, van P.W.C.

    1981-01-01

    High heat-flow foci on the Earth have been named ‘hot-spots’ and are commonly correlated with ‘mantle-plumes’ in the deep. A mantle plume may be described as a portion of mantle material with a higher heat content than its surroundings. The intrusion of a mantle-plume is inferred to be similar to

  4. Multiphase CFD modeling of nearfield fate of sediment plumes

    DEFF Research Database (Denmark)

    Saremi, Sina; Hjelmager Jensen, Jacob

    2014-01-01

    Disposal of dredged material and the overflow discharge during dredging activities are a matter of concern due to the potential risks imposed by the plumes on the surrounding marine environment. This calls for accurate prediction of the fate of the sediment plumes released in ambient waters...

  5. Memory effects in the relaxation of the Gaussian trap model

    Science.gov (United States)

    Diezemann, Gregor; Heuer, Andreas

    2011-03-01

    We investigate the memory effect in a simple model for glassy relaxation, a trap model with a Gaussian density of states. In this model, thermal equilibrium is reached at all finite temperatures and we therefore can consider jumps from low to high temperatures in addition to the quenches usually considered in aging studies. We show that the evolution of the energy following the Kovacs protocol can approximately be expressed as a difference of two monotonously decaying functions and thus show the existence of a so-called Kovacs hump whenever these functions are not single exponentials. It is well established that the Kovacs effect also occurs in the linear response regime, and we show that most of the gross features do not change dramatically when large temperature jumps are considered. However, there is one distinguishing feature that only exists beyond the linear regime, which we discuss in detail. For the memory experiment with inverted temperatures, i.e., jumping up and then down again, we find a very similar behavior apart from an opposite sign of the hump.
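The basic relaxation dynamics underlying this record and the earlier Gaussian trap-model record can be illustrated with a generic Arrhenius-hopping Monte Carlo. This is a schematic sketch, not the authors' protocol: the single-temperature quench below is only the first leg of the Kovacs experiment, and the temperatures and step counts are illustrative.

```python
import numpy as np

def trap_mean_energy(temp, n_steps=3000, n_walkers=2000, seed=2):
    """Monte Carlo sketch of a trap model with a Gaussian density of
    states (zero mean, unit variance). Per step, a walker in a trap of
    energy E escapes with Arrhenius probability exp(min(E, 0) / temp)
    and falls into a fresh trap drawn from the Gaussian DOS. Returns the
    ensemble mean energy after n_steps."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n_walkers)  # infinite-temperature initial traps
    for _ in range(n_steps):
        escape = rng.random(n_walkers) < np.exp(np.minimum(e, 0.0) / temp)
        e = np.where(escape, rng.standard_normal(n_walkers), e)
    return float(e.mean())

# After a quench to low temperature the ensemble digs into ever deeper
# traps, so its mean energy lies well below the high-temperature value.
e_cold = trap_mean_energy(temp=0.3)
e_warm = trap_mean_energy(temp=5.0)
```

The Kovacs protocol discussed in the record would continue from such a partially relaxed state with a second temperature jump and track the resulting non-monotonic energy hump.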

  6. Model Intercomparison Study to Investigate a Dense Contaminant Plume in a Complex Hydrogeologic System

    International Nuclear Information System (INIS)

    Williams, Mark D.; Cole, Charles R.; Foley, Michael G.; Zinina, Galina A.; Zinin, Alexander I.; Vasil'Kova, Nelly A.; Samsonova, Lilia M.

    2001-01-01

    A joint Russian and U.S. model intercomparison study was undertaken for developing more realistic contaminant transport models of the Mayak Site, Southern Urals. The test problems were developed by the Russian Team based on their experience modeling contaminant migration near Lake Karachai. The intercomparison problems were designed to address lake and contaminant plume interactions, as well as river interactions and plume density effects. Different numerical codes were used. Overall there is good agreement between the results of both models. Features shown by both models include (1) the sinking of the plume below the lake, (2) the raising of the water table in the fresh water adjacent to the lake in response to the increased pressure from the dense plume, and (3) the formation of a second sinking plume in an area where evapotranspiration exceeded infiltration, thus increasing the solute concentrations above the source (i.e., lake) values

  7. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
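
    The copula construction described above can be sketched in a few lines, assuming SciPy is available. The function name, the rank-based (semiparametric) marginals, and the fixed copula correlation matrix are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglik(errors, corr):
    """Gaussian-copula log-likelihood of model errors.

    Marginals are estimated semiparametrically from ranks of past errors;
    `corr` is an assumed copula correlation matrix (illustrative only).
    """
    n, d = errors.shape
    u = stats.rankdata(errors, axis=0) / (n + 1)   # empirical CDF values in (0, 1)
    z = stats.norm.ppf(u)                          # normal scores
    mvn = stats.multivariate_normal(mean=np.zeros(d), cov=corr)
    # Copula density = joint Gaussian density / product of standard normal margins
    return float(np.sum(mvn.logpdf(z) - stats.norm.logpdf(z).sum(axis=1)))
```

    With an identity correlation matrix the copula term vanishes and the log-likelihood is zero; a non-trivial `corr` captures the autocorrelation between successive errors that the abstract refers to.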

  8. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  9. Modeling of plasma plume induced during laser welding

    International Nuclear Information System (INIS)

    Moscicki, T.; Hoffman, J.; Szymanski, Z.

    2005-01-01

    During laser welding, the interaction of intense laser radiation with a work-piece leads to the formation of a long, thin, cylindrical cavity in the metal, called a keyhole. Generation of a keyhole enables the laser beam to penetrate into the work-piece and is essential for deep welding. The keyhole contains ionized metal vapour and is surrounded by molten material called the weld pool. The metal vapour, which flows from the keyhole, mixes with the shielding gas flowing from the opposite direction and forms a plasma plume over the keyhole mouth. The plasma plume has considerable influence on the processing conditions. Plasma strongly absorbs laser radiation and significantly changes energy transfer from the laser beam to the material. In this paper the results of theoretical modelling of the plasma plume induced during welding with a CO₂ laser are presented. The set of equations consists of the equations of conservation of mass, energy and momentum and the diffusion equation: ∂ρ/∂t + ∇·(ρv) = 0; ∂(ρE)/∂t + ∇·(v(ρE + p)) = ∇·(k_eff ∇T − Σ_j h_j J_j + (τ_eff · v)) + Σ_i κ_i I_i − R; ∂(ρv)/∂t + ∇·(ρvv) = −∇p + ∇·τ + ρg + F, where τ is the viscous tensor, τ = μ[(∇v + ∇v^T) − (2/3)(∇·v)I]; ∂(ρY_i)/∂t + ∇·(ρvY_i) = ∇·(ρD_i,m ∇Y_i); where v denotes the velocity vector, E the energy, ρ the mass density, k the thermal conductivity, T the temperature, κ_i the absorption coefficient, I_i the local laser intensity, R the radiation loss function, p the pressure, h_j the enthalpy, J_j the diffusion flux of component j, g gravity, F the external force, μ the dynamic viscosity, I the unit tensor, Y_i the mass fraction of iron vapor in the gas mixture, and D_i,m the mass diffusion coefficient. The terms k_eff and τ_eff contain the turbulent components of the thermal conductivity and the viscosity, respectively. All the material functions are functions of the temperature and mass fraction only. The equations

  10. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, growth modeling with non-Gaussian data is as yet somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
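
    As a rough illustration of the fractional polynomial idea, the following least-squares sketch searches the conventional FP power set for a two-term (FP2) fit. It stands in for, and is much simpler than, the paper's GLM-based machinery; all names are illustrative:

```python
import numpy as np
from itertools import combinations_with_replacement

# Conventional FP power set; by convention power 0 means log(x) and a
# repeated power p contributes the pair x**p and x**p * log(x).
POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_design(x, p1, p2):
    f = lambda p: np.log(x) if p == 0 else x ** float(p)
    second = f(p2) * np.log(x) if p1 == p2 else f(p2)
    return np.column_stack([np.ones_like(x), f(p1), second])

def best_fp2(x, y):
    """Return the FP2 power pair with the smallest residual sum of squares."""
    def rss(pair):
        X = fp_design(x, *pair)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(((y - X @ beta) ** 2).sum())
    return min(combinations_with_replacement(POWERS, 2), key=rss)
```

    For data generated from a square-root growth curve, the selected pair contains the power 0.5, recovering the true functional form.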

  11. Initialization of the Euler model MODIS with field data from the 'EPRI plume model validation project'

    International Nuclear Information System (INIS)

    Petersen, G.; Eppel, D.; Lautenschlager, M.; Mueller, A.

    1985-01-01

    The program deck MODIS ("MOment DIStribution") is designed to be used as an operational tool for modelling the dispersion of a point source under general atmospheric conditions. The concentration distribution is determined by calculating its cross-wind moments on a vertical grid oriented in the main wind direction. The model contains a parametrization for horizontal and vertical coefficients based on a second-order closure model. The Eulerian time scales, preliminarily determined by fitting measured plume cross sections, are confirmed by comparison with data from the EPRI plume model validation project. (orig.) [de

  12. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both the concrete composition and the clay mineralogical assemblies, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or shorten their calculations. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of the concrete phases, (2) the type of clayey material and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both

  13. Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells

    Science.gov (United States)

    Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke

    2011-06-01

    Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset cannot, however, provide information on whether a correlation is truly disease-specific with respect to the disease samples and not a correlation of control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species was measured by mass spectrometry in order to evaluate lipid remodeling as a result of a combination of cell perturbations inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
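
    The partial-correlation computation at the heart of a GGM can be sketched in a few lines (a generic illustration, not the paper's differential GGM variant):

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix underlying a Gaussian graphical model.

    Invert the empirical covariance matrix and rescale the resulting
    precision matrix; `data` has one column per variable.
    """
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

    In a chain A → B → C, the Pearson correlation of A and C is large, but their partial correlation (controlling for B) is near zero, which is exactly how a GGM removes indirect associations.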

  14. Application of data assimilation to improve the forecasting capability of an atmospheric dispersion model for a radioactive plume

    International Nuclear Information System (INIS)

    Jeong, H.J.; Han, M.H.; Hwang, W.T.; Kim, E.H.

    2008-01-01

    Modeling the atmospheric dispersion of a radioactive plume plays an influential role in assessing the environmental impacts caused by nuclear accidents. The performance of data assimilation techniques, which combine Gaussian model outputs with measurements to improve forecasting abilities, is investigated in this study. Tracer dispersion experiments are performed to produce field data by assuming a radiological emergency. An adaptive neuro-fuzzy inference system (ANFIS) and a linear regression filter are considered to assimilate the Gaussian model outputs with measurements. ANFIS is trained so that the model outputs are likely to be more accurate for the experimental data. The linear regression filter is designed to assimilate measurements similarly to ANFIS. It is confirmed that ANFIS could be an appropriate method for improving the forecasting capability of an atmospheric dispersion model in the case of a radiological emergency, judging from its higher correlation coefficients between the measured and assimilated values compared with the linear regression filter. This kind of data assimilation method could support a decision-making system when deciding on the best available countermeasures for public health from among emergency preparedness alternatives
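
    The linear regression filter mentioned here can be illustrated with a minimal least-squares sketch that corrects Gaussian-model concentrations toward measurements (variable names and synthetic data are assumptions, not the study's code):

```python
import numpy as np

def fit_linear_filter(y_model, y_meas):
    """Fit y_meas ≈ a * y_model + b by ordinary least squares."""
    A = np.column_stack([y_model, np.ones_like(y_model)])
    (a, b), *_ = np.linalg.lstsq(A, y_meas, rcond=None)
    return a, b

def assimilate(y_model, a, b):
    """Apply the fitted correction to new model outputs."""
    return a * y_model + b
```

    Once fitted on past model-measurement pairs, the same (a, b) correction is applied to new forecasts; ANFIS plays the analogous role with a non-linear mapping.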

  15. Laser-generated plasma plume expansion: Combined continuous-microscopic modeling

    Science.gov (United States)

    Itina, Tatiana E.; Hermann, Jörg; Delaporte, Philippe; Sentis, Marc

    2002-12-01

    The physical phenomena involved in the interaction of a laser-generated plasma plume with a background gas are studied numerically. A three-dimensional combined model is developed to describe the plasma plume formation and its expansion in vacuum or into a background gas. The proposed approach takes advantage of both continuous and microscopic descriptions. The simulation technique is suitable for the simulation of high-rate laser ablation over a wide range of background pressures. The model takes into account the mass diffusion and the energy exchange between the ablated and background species, as well as the collective motion of the ablated species and the background-gas particles. The developed approach is used to investigate the influence of the background gas on the expansion dynamics of the plume obtained during the laser ablation of aluminum. At moderate pressures, both plume and gas compressions are weak and the process is mainly governed by the diffusive mixing. At higher pressures, the interaction is determined by the plume-gas pressure interplay, the plume front is strongly compressed, and its center exhibits oscillations. In this case, the snowplough effect takes place, leading to the formation of a compressed gas layer in front of the plume. The background pressure needed for the beginning of the snowplough effect is determined from the plume and gas density profiles obtained at various pressures. Simulation results are compared with experimentally measured density distributions. It is shown that the calculations suggest localized formation of molecules during reactive laser ablation.

  16. Laser-generated plasma plume expansion: Combined continuous-microscopic modeling

    International Nuclear Information System (INIS)

    Itina, Tatiana E.; Hermann, Joerg; Delaporte, Philippe; Sentis, Marc

    2002-01-01

    The physical phenomena involved in the interaction of a laser-generated plasma plume with a background gas are studied numerically. A three-dimensional combined model is developed to describe the plasma plume formation and its expansion in vacuum or into a background gas. The proposed approach takes advantage of both continuous and microscopic descriptions. The simulation technique is suitable for the simulation of high-rate laser ablation over a wide range of background pressures. The model takes into account the mass diffusion and the energy exchange between the ablated and background species, as well as the collective motion of the ablated species and the background-gas particles. The developed approach is used to investigate the influence of the background gas on the expansion dynamics of the plume obtained during the laser ablation of aluminum. At moderate pressures, both plume and gas compressions are weak and the process is mainly governed by the diffusive mixing. At higher pressures, the interaction is determined by the plume-gas pressure interplay, the plume front is strongly compressed, and its center exhibits oscillations. In this case, the snowplough effect takes place, leading to the formation of a compressed gas layer in front of the plume. The background pressure needed for the beginning of the snowplough effect is determined from the plume and gas density profiles obtained at various pressures. Simulation results are compared with experimentally measured density distributions. It is shown that the calculations suggest localized formation of molecules during reactive laser ablation

  17. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  18. Numerical modeling of continental lithospheric weak zone over plume

    Science.gov (United States)

    Perepechko, Y. V.; Sorokin, K. E.

    2011-12-01

    The work is devoted to the development of magmatic systems in the continental lithosphere over diffluent mantle plumes. The areas of tension originating over them are accompanied by the appearance of fault zones and the formation of permeable channels through which magmatic melts are distributed. Numerical simulation has been carried out of the dynamics of deformation fields in the lithosphere due to convection currents in the upper mantle, and of the formation of weakened zones that extend up to the upper crust and create the necessary conditions for the formation of intermediate magma chambers. A thermodynamically consistent non-isothermal model simulates the processes of heat and mass transfer of a wide class of magmatic systems, as well as the process of strain localization in the lithosphere and its influence on the formation of high-permeability zones in the lower crust. The substance of the lithosphere is a rheologically heterophase medium, which is described by two-velocity hydrodynamics. This makes it possible to take into account the process of penetration of the melt from the asthenosphere into the weakened zone. The energy dissipation occurs mainly due to interfacial friction and inelastic relaxation of shear stresses. The results of calculation reveal a nonlinear process of the formation of porous channels and demonstrate the diversity of emerging dissipative structures, which are determined by the properties of both the heterogeneous lithosphere and the overlying crust. The mutual effect of a permeable channel and the corresponding filtration of the melt on the mantle convection and the dynamics of the asthenosphere has been studied. The formation of dissipative structures in the heterogeneous lithosphere above mantle plumes occurs in accordance with the following scenario: initially, the elastic behavior of the heterophase lithosphere leads to the formation of a narrow but sufficiently extensive weakened zone with higher porosity. Further, the increase in the width of

  19. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  20. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret...... properties shows that there are important option pricing differences compared to the Gaussian case as well as to the symmetric special case. A large scale empirical examination shows that our model outperforms the Gaussian case for pricing options on three large US stocks as well as a major index...

  1. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was

  2. Contaminant plume configuration and movement: an experimental model

    Science.gov (United States)

    Alencoao, A.; Reis, A.; Pereira, M. G.; Liberato, M. L. R.; Caramelo, L.; Amraoui, M.; Amorim, V.

    2009-04-01

    The relevance of Science and Technology in our daily routines makes it compulsory to educate citizens who have both scientific literacy and scientific knowledge, which will allow them to be active, intervening citizens in a constantly changing society. Thus, physical and natural sciences are included in school curricula, in both primary and secondary education, with the fundamental aim of developing in students the skills, attitudes and knowledge needed for understanding the planet Earth and its real problems. On the other hand, teaching in the Geosciences is more and more based on practical methodologies which use didactic material, sustaining teachers' pedagogical practices and facilitating the learning tasks suggested in the syllabus defined for each school level. Themes exploring the different components of the Hydrological Cycle, and themes related to the protection and preservation of the natural environment, namely water resources and soil contamination by industrial and urban sewage, are examples of subject matter included in the Portuguese syllabus. These topics motivated the conception and construction of experimental models for the study of the propagation of pollutants in a porous medium. The experimental models allow a horizontal flux of water to be induced through different kinds of permeable substances (e.g. sand, silt) with contamination spots on the surface. These experimental activities help students understand the flow path of contaminating substances in the saturated zone and observe the contaminant plume configuration and movement. The activities are explored from a teaching-and-learning perspective in which students build their own knowledge through real question- and problem-based learning relating Science, Technology and Society. These activities have been developed in the framework of the project ‘Water in the Environment' (CV/PVI/0854) of the POCTI Program (Programa Operacional "Ciência, Tecnologia, Inovação") financed

  3. FOOTPRINT: A Screening Model for Estimating the Area of a Plume Produced From Gasoline Containing Ethanol

    Science.gov (United States)

    FOOTPRINT is a screening model used to estimate the length and surface area of benzene, toluene, ethylbenzene, and xylene (BTEX) plumes in groundwater, produced from a gasoline spill that contains ethanol.

  4. Interactive Gaussian Graphical Models for Discovering Depth Trends in ChemCam Data

    Science.gov (United States)

    Oyen, D. A.; Komurlu, C.; Lanza, N. L.

    2018-04-01

    Interactive Gaussian graphical models discover surface compositional features on rocks in ChemCam targets. Our approach visualizes shot-to-shot relationships among LIBS observations, and identifies the wavelengths involved in the trend.

  5. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...

  6. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for determining the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method, and we show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way for a comprehensive method of detecting fraud calls in future work.
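
    The order-selection step described here can be sketched with scikit-learn's EM implementation. This illustrates only the information-criterion comparison on synthetic one-dimensional data; the paper's kernel-based initialization is not reproduced:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data with two well-separated components (illustrative only).
rng = np.random.default_rng(7)
calls = np.concatenate([rng.normal(-3.0, 1.0, 300),
                        rng.normal(3.0, 1.0, 300)]).reshape(-1, 1)

# Fit EM for several candidate orders and compare the Akaike information
# criterion; lower AIC indicates a better trade-off of fit and complexity.
aic = {k: GaussianMixture(n_components=k, random_state=0).fit(calls).aic(calls)
       for k in (1, 2, 3)}
```

    With two well-separated modes, the AIC for a two-component model falls far below the single-component value, so the criterion recovers the true order.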

  7. A Plume Scale Model of Chlorinated Ethene Degradation

    DEFF Research Database (Denmark)

    Murray, Alexandra Marie; Broholm, Mette Martina; Badin, Alice

    leaked from a dry cleaning facility, and a 2 km plume extends from the source in an unconfined aquifer of homogenous fluvio-glacial sand. The area has significant iron deposits, most notably pyrite, which can abiotically degrade chlorinated ethenes. The source zone underwent thermal (steam) remediation...

  8. The importance of vertical resolution in the free troposphere for modeling intercontinental plumes

    Science.gov (United States)

    Zhuang, Jiawei; Jacob, Daniel J.; Eastham, Sebastian D.

    2018-05-01

    Chemical plumes in the free troposphere can preserve their identity for more than a week as they are transported on intercontinental scales. Current global models cannot reproduce this transport. The plumes dilute far too rapidly due to numerical diffusion in sheared flow. We show how model accuracy can be limited by either horizontal resolution (Δx) or vertical resolution (Δz). Balancing horizontal and vertical numerical diffusion, and weighing computational cost, implies an optimal grid resolution ratio (Δx / Δz)opt ˜ 1000 for simulating the plumes. This is considerably higher than current global models (Δx / Δz ˜ 20) and explains the rapid plume dilution in the models as caused by insufficient vertical resolution. Plume simulations with the Geophysical Fluid Dynamics Laboratory Finite-Volume Cubed-Sphere Dynamical Core (GFDL-FV3) over a range of horizontal and vertical grid resolutions confirm this limiting behavior. Our highest-resolution simulation (Δx ≈ 25 km, Δz ≈ 80 m) preserves the maximum mixing ratio in the plume to within 35 % after 8 days in strongly sheared flow, a drastic improvement over current models. Adding free tropospheric vertical levels in global models is computationally inexpensive and would also improve the simulation of water vapor.

  9. The importance of vertical resolution in the free troposphere for modeling intercontinental plumes

    Directory of Open Access Journals (Sweden)

    J. Zhuang

    2018-05-01

    Full Text Available Chemical plumes in the free troposphere can preserve their identity for more than a week as they are transported on intercontinental scales. Current global models cannot reproduce this transport. The plumes dilute far too rapidly due to numerical diffusion in sheared flow. We show how model accuracy can be limited by either horizontal resolution (Δx) or vertical resolution (Δz). Balancing horizontal and vertical numerical diffusion, and weighing computational cost, implies an optimal grid resolution ratio (Δx ∕ Δz)opt ∼ 1000 for simulating the plumes. This is considerably higher than current global models (Δx ∕ Δz ∼ 20) and explains the rapid plume dilution in the models as caused by insufficient vertical resolution. Plume simulations with the Geophysical Fluid Dynamics Laboratory Finite-Volume Cubed-Sphere Dynamical Core (GFDL-FV3) over a range of horizontal and vertical grid resolutions confirm this limiting behavior. Our highest-resolution simulation (Δx ≈ 25 km, Δz ≈ 80 m) preserves the maximum mixing ratio in the plume to within 35 % after 8 days in strongly sheared flow, a drastic improvement over current models. Adding free tropospheric vertical levels in global models is computationally inexpensive and would also improve the simulation of water vapor.

  10. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms

  11. DeepBlow - a Lagrangian plume model for deep water blowouts

    International Nuclear Information System (INIS)

    Johansen, Oeistein

    2000-01-01

    This paper presents a sub-sea blowout model designed with special emphasis on deep-water conditions. The model is an integral plume model based on a Lagrangian concept, applied to multiphase discharges of water, oil and gas in a stratified water column with variable currents. The gas may be converted to hydrate in combination with seawater, dissolved into the plume water, or may leak out of the plume due to the slip between rising gas bubbles and the plume trajectory. Non-ideal behaviour of the gas is accounted for by introducing a pressure- and temperature-dependent compressibility (z-factor) in the equation of state. A number of case studies are presented in the paper. One of the cases (a blowout from 100 m depth) is compared with observations from a field experiment conducted in Norwegian waters in June 1996. The model results are found to compare favourably with the field observations when dissolution of gas into seawater is accounted for in the model. For discharges at intermediate to shallow depths (100-250 m), the two major processes limiting plume rise are: (a) dissolution of gas into ambient water, and (b) bubbles rising out of the inclined plume. These processes tend to be self-reinforcing, i.e., when gas is lost by either of these processes, plume rise slows down and more time is available for dissolution. For discharges in deep waters (700-1500 m depth), hydrate formation is found to be the dominant process limiting plume rise. (Author)

  12. PLUME-MoM 1.0: a new 1-D model of volcanic plumes based on the method of moments

    Science.gov (United States)

    de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.

    2015-05-01

    In this paper a new mathematical model for volcanic plumes, named PlumeMoM, is presented. The model describes the steady-state 1-D dynamics of the plume in a 3-D coordinate system, accounting for continuous variability in particle distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. Proper description of such a multiparticle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of properties of the continuous size-distribution of the particles. This is achieved by formulation of fundamental transport equations for the multiparticle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables investigation of the response of four key output variables (mean and standard deviation (SD) of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and SD) characterizing the pyroclastic mixture at the base of the plume

  13. Information Geometric Complexity of a Trivariate Gaussian Statistical Model

    Directory of Open Access Journals (Sweden)

    Domenico Felice

    2014-05-01

    Full Text Available We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics.

  14. Modeling Macro- and Micro-Scale Turbulent Mixing and Chemistry in Engine Exhaust Plumes

    Science.gov (United States)

    Menon, Suresh

    1998-01-01

    Simulation of turbulent mixing and chemical processes in the near-field plume and plume-vortex regimes has recently been carried out using a reduced gas-phase kinetics mechanism, which substantially decreased the computational cost. A detailed mechanism including gas-phase HOx, NOx, and SOx chemistry between the aircraft exhaust and the ambient air in near-field aircraft plumes was compiled, and a reduced mechanism capturing the major chemical pathways was developed. Predictions by the reduced mechanism are found to be in good agreement with those of the detailed mechanism, while the computer CPU time is reduced by a factor of more than 3.5 for near-field plume modeling. Distributions of major chemical species are obtained and analyzed, and the computed sensitivities of major species with respect to each reaction step identify the dominant gas-phase kinetic reaction pathways in the jet plume. Both the near-field plume and the plume-vortex regimes were investigated using advanced mixing models. In the near field, a stand-alone mixing model was used to investigate the impact of turbulent mixing on the micro- and macro-scale mixing processes using a reduced reaction kinetics model. The plume-vortex regime was simulated using a large-eddy simulation model. The vortex plumes behind Boeing 737 and 747 aircraft were simulated along with the relevant kinetics. Many features of the computed flow field show reasonable agreement with data. The entrainment of the engine plumes into the wing-tip vortices, and also the partial detrainment of the plume, were numerically captured. The impact of fluid mechanics on the chemical processes was also studied. Results show significant differences between spatial and temporal simulations, especially in the predicted SO3 concentrations. This has important implications for the prediction of sulfuric acid aerosols in the wake and may partly explain the discrepancy between past numerical studies.

  15. Expansion dynamics and equilibrium conditions in a laser ablation plume of lithium: Modeling and experiment

    International Nuclear Information System (INIS)

    Stapleton, M.W.; McKiernan, A.P.; Mosnier, J.-P.

    2005-01-01

    The gas dynamics and atomic kinetics of a laser ablation plume of lithium, expanding adiabatically in vacuum, are included in a numerical model, using isothermal and isentropic self-similar analytical solutions and steady-state collisional-radiative equations, respectively. Measurements of plume expansion dynamics using ultrafast imaging for various laser wavelengths (266-1064 nm), fluences (2-6.5 J cm⁻²), and spot sizes (50-1000 μm) are performed to provide input parameters for the model and, thereby, study the influence of laser spot size, wavelength, and fluence on both the plume expansion dynamics and atomic kinetics. Target recoil pressure, which clearly affects plume dynamics, is included in the model. The effects of laser wavelength and spot size on plume dynamics are discussed in terms of plasma absorption of laser light. A transition from isothermal to isentropic behavior for spot sizes greater than 50 μm is clearly evidenced. Equilibrium conditions are found to exist only up to 300 ns after the plume creation, while complete local thermodynamic equilibrium is found to be confined to the very early parts of the expansion

  16. Comparison of non-Gaussian and Gaussian diffusion models of diffusion weighted imaging of rectal cancer at 3.0 T MRI.

    Science.gov (United States)

    Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong

    2016-12-09

    Water molecular diffusion in in vivo tissue is much more complicated than pure Gaussian diffusion. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI), including intra-voxel incoherent motion (IVIM) and the stretched-exponential model (SEM), with the Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating water diffusion properties and characterizing rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values on a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) on the primary tumor and adjacent normal rectal tissue. Parameters of the standard apparent diffusion coefficient (ADC), slow- and fast-ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fit to the actual DWI signal in rectal cancer and the normal rectal wall (R² = 0.998 and 0.999, respectively). The DDC achieved a relatively high area under the curve (AUC = 0.980) in differentiating tumor from normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC derived from the Gaussian diffusion model. SEM may be used as a potential optimal model for the characterization of rectal cancer.
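The stretched-exponential signal model fitted in this study is S(b)/S₀ = exp(−(b·DDC)^α). As a sketch with synthetic, noise-free data and hypothetical b-values (real fits would use nonlinear least squares on noisy signals), the two parameters can be recovered by linearizing twice:

```python
import numpy as np

# stretched-exponential DWI model: S(b)/S0 = exp(-(b * DDC) ** alpha)
b = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1500.0, 2000.0])  # s/mm^2 (illustrative)
DDC_true, alpha_true = 1.1e-3, 0.85
S = np.exp(-(b * DDC_true) ** alpha_true)

# ln(-ln(S/S0)) = alpha * ln(b) + alpha * ln(DDC): a straight line in ln(b),
# so a degree-1 polynomial fit recovers alpha (slope) and DDC (from intercept)
alpha_fit, intercept = np.polyfit(np.log(b), np.log(-np.log(S)), 1)
DDC_fit = np.exp(intercept / alpha_fit)
```

On noise-free data the recovery is exact up to floating-point error; α = 1 reduces the model to the mono-exponential ADC fit the abstract compares against.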

  17. Modeling investigation of the nutrient and phytoplankton variability in the Chesapeake Bay outflow plume

    Science.gov (United States)

    Jiang, Long; Xia, Meng

    2018-03-01

    The Chesapeake Bay outflow plume (CBOP) is the mixing zone between Chesapeake Bay and less eutrophic continental shelf waters. Variations in phytoplankton distribution in the CBOP are critical to the fish nursery habitat quality and ecosystem health; thus, an existing hydrodynamic-biogeochemical model for the bay and the adjacent coastal ocean was applied to understand the nutrient and phytoplankton variability in the plume and the dominant environmental drivers. The simulated nutrient and chlorophyll a distribution agreed well with field data and real-time satellite imagery. Based on the model calculation, the net dissolved inorganic nitrogen (DIN) and phosphorus (DIP) flux at the bay mouth was seaward and landward during 2003-2012, respectively. The CBOP was mostly nitrogen-limited because of the relatively low estuarine DIN export. The highest simulated phytoplankton biomass generally occurred in spring in the near field of the plume. Streamflow variations could regulate the estuarine residence time, and thus modulate nutrient export and phytoplankton biomass in the plume area; in comparison, changing nutrient loading with fixed streamflow had a less extensive impact, especially in the offshore and far-field regions. Correlation analyses and numerical experiments revealed that southerly winds on the shelf were effective in promoting the offshore plume expansion and phytoplankton accumulation. Climate change including precipitation and wind pattern shifts is likely to complicate the driving mechanisms of phytoplankton variability in the plume region.

  18. Plume-exit modeling to determine cloud condensation nuclei activity of aerosols from residential biofuel combustion

    Science.gov (United States)

    Mena, Francisco; Bond, Tami C.; Riemer, Nicole

    2017-08-01

    Residential biofuel combustion is an important source of aerosols and gases in the atmosphere. The change in cloud characteristics due to biofuel burning aerosols is uncertain, in part, due to the uncertainty in the added number of cloud condensation nuclei (CCN) from biofuel burning. We provide estimates of the CCN activity of biofuel burning aerosols by explicitly modeling plume dynamics (coagulation, condensation, chemical reactions, and dilution) in a young biofuel burning plume from emission until plume exit, defined here as the condition when the plume reaches ambient temperature and specific humidity through entrainment. We found that aerosol-scale dynamics affect CCN activity only during the first few seconds of evolution, after which the CCN efficiency reaches a constant value. Homogenizing factors in a plume are co-emission of semi-volatile organic compounds (SVOCs) or emission at small particle sizes; SVOC co-emission can be the main factor determining plume-exit CCN for hydrophobic or small particles. Coagulation limits emission of CCN to about 10¹⁶ per kilogram of fuel. Depending on emission factor, particle size, and composition, some of these particles may not activate at low supersaturation (ssat). Hygroscopic Aitken-mode particles can contribute to CCN through self-coagulation but have a small effect on the CCN activity of accumulation-mode particles, regardless of composition differences. Simple models (monodisperse coagulation and average hygroscopicity) can be used to estimate plume-exit CCN within about 20 % if particles are unimodal and have homogeneous composition, or when particles are emitted in the Aitken mode even if they are not homogeneous. On the other hand, if externally mixed particles are emitted in the accumulation mode without SVOCs, an average hygroscopicity overestimates emitted CCN by up to a factor of 2. This work has identified conditions under which particle populations become more homogeneous during plume processes.

  19. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Michael [Univ. of Chicago, IL (United States)

    2017-03-13

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the

  20. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    Science.gov (United States)

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where the Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of the received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and that both detectors are capable of accurately demodulating the SPAD-based PAM signals.
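For unit-gain Poisson counts plus zero-mean Gaussian noise of variance σ², the generalized Anscombe transform used by the hard-decision detector is z = 2√(x + 3/8 + σ²); it approximately stabilizes the mixed noise to unit variance so AWGN-style detection applies. A small numerical check of the transform itself (not the paper's receiver; the photon rate and noise level are illustrative):

```python
import numpy as np

def gat(x, sigma):
    """Generalized Anscombe transform for Poisson + N(0, sigma^2) observations."""
    return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma * sigma, 0.0))

rng = np.random.default_rng(0)
lam, sigma = 40.0, 2.0   # mean photon count and thermal-noise std (illustrative)
x = rng.poisson(lam, 200_000) + rng.normal(0.0, sigma, 200_000)
z = gat(x, sigma)        # var(z) should be close to 1 for moderate-to-large lam
```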

  1. Fitting non-gaussian Models to Financial data: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Pablo Olivares

    2011-04-01

    Full Text Available This paper presents some experiences in modeling financial data with three classes of models as alternatives to Gaussian linear models. Dynamic volatility, stable Lévy, and diffusion-with-jumps models are considered. The techniques are illustrated with examples of financial series on currencies, futures and indexes.

  2. Automated Generation of 3D Volcanic Gas Plume Models for Geobrowsers

    Science.gov (United States)

    Wright, T. E.; Burton, M.; Pyle, D. M.

    2007-12-01

    A network of five UV spectrometers on Etna automatically gathers column amounts of SO2 during daylight hours. Near-simultaneous scans from adjacent spectrometers, comprising 210 column amounts in total, are then converted to 2D slices showing the spatial distribution of the gas by tomographic reconstruction. The trajectory of the plume is computed using an automatically-submitted query to NOAA's HYSPLIT Trajectory Model. This also provides local estimates of air temperature, which are used to determine the atmospheric stability and therefore the degree to which the plume is dispersed by turbulence. This information is sufficient to construct an animated sequence of models which show how the plume is advected and diffused over time. These models are automatically generated in the Collada Digital Asset Exchange format and combined into a single file which displays the evolution of the plume in Google Earth. These models are useful for visualising and predicting the shape and distribution of the plume for civil defence, to assist field campaigns and as a means of communicating some of the work of volcano observatories to the public. The Simultaneous Algebraic Reconstruction Technique is used to create the 2D slices. This is a well-known method, based on iteratively updating a forward model (from 2D distribution to column amounts). Because it is based on a forward model, it also provides a simple way to quantify errors.
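The Simultaneous Algebraic Reconstruction Technique mentioned above can be sketched in a few lines: each iteration forward-projects the current slice, normalizes the residuals per ray, and back-projects them per pixel. This is a generic textbook sketch under stated assumptions, not the observatory's code; the matrix `A` stands for the geometry mapping 2D slice pixels to the measured column amounts, and the toy system is hypothetical.

```python
import numpy as np

def sart(A, b, n_iter=200, relax=1.0):
    """SART for A @ x ≈ b with x >= 0 (column amounts are non-negative).
    A: (rays, pixels) geometry matrix; b: measured column amounts."""
    row = A.sum(axis=1).astype(float); row[row == 0] = 1.0  # per-ray normalizer
    col = A.sum(axis=0).astype(float); col[col == 0] = 1.0  # per-pixel normalizer
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # forward model -> normalized residual -> back-projection update
        x += relax * (A.T @ ((b - A @ x) / row)) / col
        np.maximum(x, 0.0, out=x)  # keep the SO2 distribution non-negative
    return x

# toy 3-pixel slice observed along 3 overlapping rays
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_rec = sart(A, A @ x_true)   # recovers x_true for this consistent system
```

Because the update is built on a forward model, the same projection step gives a cheap error estimate, as the abstract notes.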

  3. Calculation of doses received while crossing a plume of radioactive material

    International Nuclear Information System (INIS)

    Scherpelz, R.I.; Desrosiers, A.E.

    1981-04-01

    A method has been developed for determining the dose received by a person while crossing a plume of radioactive material. The method uses a Gaussian plume model to arrive at a dose rate on the plume centerline at the position of the plume crossing. This dose rate may be due to any external or internal dose pathway. An algebraic formula can then be used to convert the plume centerline dose rate to a total dose integrated over the total time of plume crossing. Correction factors are presented for dose pathways in which the dose rate is not normally distributed about the plume centerline. The method is illustrated by a study done at the Pacific Northwest Laboratory, and results of this study are presented
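The conversion described here, from a centerline dose rate to a crossing-integrated dose, follows from integrating the Gaussian crosswind profile along the crossing path. A minimal sketch (function names are hypothetical, and the study's correction factors for pathways that are not normally distributed about the centerline are not reproduced):

```python
import math

def centerline_conc(Q, u, sigma_y, sigma_z, H):
    """Ground-level, plume-centerline concentration for a Gaussian plume:
    Q source strength (g/s), u wind speed (m/s), sigma_y/sigma_z horizontal
    and vertical dispersion parameters (m), H effective release height (m)."""
    return (Q / (math.pi * u * sigma_y * sigma_z)) \
        * math.exp(-(H ** 2) / (2.0 * sigma_z ** 2))

def crossing_dose(centerline_dose_rate, sigma_y, v_cross):
    """Total dose for a perpendicular plume crossing at speed v_cross (m/s),
    assuming the dose rate follows the crosswind Gaussian profile:
    D = D_center * ∫ exp(-y² / (2 σy²)) dy / v = D_center * √(2π) σy / v."""
    return centerline_dose_rate * math.sqrt(2.0 * math.pi) * sigma_y / v_cross
```

The √(2π)·σy factor is the effective crosswind width of the plume; dividing by the crossing speed converts it to exposure time at the equivalent centerline rate.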

  4. Modeling an Iodine Hall Thruster Plume in the Iodine Satellite (ISAT)

    Science.gov (United States)

    Choi, Maria

    2016-01-01

    An iodine-operated 200-W Hall thruster plume has been simulated using a hybrid-PIC model to predict the spacecraft surface-plume interaction for spacecraft integration purposes. For validation of the model, the plasma potential, electron temperature, ion current flux, and ion number density of xenon propellant were compared with available measurement data at the nominal operating condition. To simulate iodine plasma, various collision cross sections were found and used in the model. While time-varying atomic iodine species (i.e., I, I⁺, I²⁺) information is provided by the HPHall simulation at the discharge channel exit, the molecular iodine species (i.e., I₂, I₂⁺) are introduced as Maxwellian particles at the channel exit. Simulation results show that the xenon and iodine plasma plumes appear very similar under the assumptions of the model. Assuming a sticking coefficient of unity, the iodine deposition rate is estimated.

  5. Infrared maritime target detection using a probabilistic single Gaussian model of sea clutter in Fourier domain

    Science.gov (United States)

    Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei

    2018-02-01

    For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequences at each frequency point of pure-seawater images in the Fourier domain, being more stable than the gray-value sequences of each background pixel in the spatial domain, are modeled as Gaussian distributions. Next, a probability weight matrix is built from the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed using the ships' shape features. The performance of the proposed method is demonstrated by visual and quantitative comparisons with other methods.

  6. Infrared signature modelling of a rocket jet plume - comparison with flight measurements

    International Nuclear Information System (INIS)

    Rialland, V; Perez, P; Roblin, A; Guy, A; Gueyffier, D; Smithson, T

    2016-01-01

    The infrared signature modelling of rocket plumes is a challenging problem involving rocket geometry, propellant composition, combustion modelling, trajectory calculations, fluid mechanics, atmosphere modelling, calculation of gas and particle radiative properties and of radiative transfer through the atmosphere. This paper presents ONERA simulation tools chained together to achieve infrared signature prediction, and the comparison of the estimated and measured signatures of an in-flight rocket plume. We consider the case of a solid rocket motor with aluminized propellant, the Black Brant sounding rocket. The calculation case reproduces the conditions of an experimental rocket launch, performed at White Sands in 1997, for which we obtained high quality infrared signature data sets from DRDC Valcartier. The jet plume is calculated using an in-house CFD software called CEDRE. The plume infrared signature is then computed on the spectral interval 1900-5000 cm⁻¹ with a step of 5 cm⁻¹. The models and their hypotheses are presented and discussed. Then the resulting plume properties, radiance and spectra are detailed. Finally, the estimated infrared signature is compared with the spectral imaging measurements. The discrepancies are analyzed and discussed. (paper)

  7. Radiative modeling and characterization of aerosol plumes hyper-spectral imagery

    International Nuclear Information System (INIS)

    Alakian, A.

    2008-03-01

    This thesis aims at characterizing aerosols from plumes (biomass burning, industrial discharges, etc.) with hyper-spectral imagery. We want to estimate the optical properties of the emitted particles as well as their micro-physical properties, such as number, size distribution and composition. To reach this goal, we built a forward semi-analytical model, named APOM (Aerosol Plume Optical Model), which simulates the radiative effects of aerosol plumes in the spectral range [0.4-2.5 μm] for nadir-viewing sensors. The mathematical formulation and model coefficients are obtained from simulations performed with the radiative transfer code COMANCHE. APOM is assessed on simulated data and proves to be accurate, with modeling errors between 1% and 3%. Three retrieval methods using APOM have been developed: L-APOM, M-APOM and A-APOM. These methods take advantage of the spectral and spatial dimensions of hyper-spectral images. L-APOM and M-APOM assume a priori knowledge of the particles but can estimate both their optical and micro-physical properties; their performance on simulated data is quite promising. A-APOM does not require any a priori knowledge of the particles but only estimates their optical properties; however, it still needs improvement before being usable. On real images, inversion provides satisfactory results for plumes above water but meets some difficulties for plumes above vegetation, which points to possible improvements of the retrieval algorithm. (author)

  8. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • The improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian, Lagrangian stochastic (LS) dispersion model and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
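The PSO-based source estimation step can be illustrated with a toy one-parameter problem: recover the source strength Q of a simple Gaussian plume from synthetic concentration readings. Everything here (plume coefficients, observation points, PSO constants) is hypothetical, and the sketch uses the plain Gaussian model as the forward model rather than the paper's Gaussian-MLA surrogate.

```python
import math, random

def plume_conc(Q, x, y, u=4.0):
    """Toy ground-level Gaussian plume with sigma_y = 0.08x, sigma_z = 0.06x."""
    sy, sz = 0.08 * x, 0.06 * x
    return Q / (math.pi * u * sy * sz) * math.exp(-y * y / (2.0 * sy * sy))

# synthetic "monitoring" observations from a known true source strength
Q_TRUE = 5.0
obs = [(x, y, plume_conc(Q_TRUE, x, y)) for x in (200.0, 500.0) for y in (0.0, 50.0)]

def cost(Q):
    """Sum of squared mismatches between forward model and observations."""
    return sum((plume_conc(Q, x, y) - c) ** 2 for x, y, c in obs)

def pso(cost, lo, hi, n=20, iters=100, seed=0):
    """Minimal 1-D particle swarm: inertia 0.7, cognitive/social weights 1.5."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest, pcost = pos[:], [cost(p) for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n):
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i], c
                if c < cost(gbest):
                    gbest = pos[i]
    return gbest

Q_est = pso(cost, 0.1, 20.0)   # should recover a value near Q_TRUE
```

Since the forward model is linear in Q, this cost is convex; PSO earns its keep in the paper's multi-parameter setting (location, height, strength), where the same loop simply runs over vectors.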

  9. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new models achieve high efficiency and accuracy in concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leak occurs. Intelligent network models such as the radial basis function (RBF) network, the back-propagation (BP) neural network and the support vector machine (SVM) can be used for gas dispersion prediction. However, the predictions of these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented, and their predictions are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new Gaussian-MLA prediction model is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
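The "classic Gaussian model" these records build on is the steady-state Gaussian plume equation for a continuous elevated point source. A minimal sketch (function and parameter names are illustrative, not taken from the paper), including the usual image-source term for ground reflection:

```python
import math

def gaussian_plume(y, z, Q, u, sigma_y, sigma_z, H):
    """Gaussian plume concentration (g/m^3) at crosswind offset y and height z (m)
    for a continuous point source of strength Q (g/s) at effective height H (m) in
    wind speed u (m/s). sigma_y, sigma_z (m) are the dispersion parameters, which
    in practice depend on downwind distance and atmospheric stability."""
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))  # image source: reflection at the ground
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

On the plume centreline at ground level (y = z = 0) this reduces to C = Q/(π u σy σz) · exp(−H²/2σz²), the form typically used in regulatory screening.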

  10. Formation of mantle "lone plumes" in the global downwelling zone - A multiscale modelling of subduction-controlled plume generation beneath the South China Sea

    Science.gov (United States)

    Zhang, Nan; Li, Zheng-Xiang

    2018-01-01

    It has been established that almost all known mantle plumes since the Mesozoic formed above the two lower-mantle large low shear velocity provinces (LLSVPs). The Hainan plume is one of the rare exceptions: instead of rising above the LLSVPs, it is located within the broad global mantle downwelling zone, and is therefore classified as a "lone plume". Here, we use the Hainan plume example to investigate, with 3D geodynamic modelling, the feasibility of such lone plumes being generated by subducting slabs in the mantle downwelling zone. Our geodynamic model has a high-resolution regional domain embedded in a relatively low-resolution global domain, set up in the adaptive-mesh-refined 3D mantle convection code ASPECT (Advanced Solver for Problems in Earth's ConvecTion). We use a recently published plate motion model to define the top mechanical boundary condition. Our modelling results suggest that cold slabs under present-day Eurasia, formed by the Mesozoic subduction and closure of the Tethys oceans, have prevented deep mantle hot materials from reaching the South China Sea from regions to its north or west. From the east side, the Western Pacific subduction systems started to promote the formation of a lower-mantle thermochemical pile in the vicinity of the future South China Sea region from about 70 Ma. As the top of this lower-mantle thermochemical pile rose, it first moved to the west and finally came to rest beneath the South China Sea. The presence of a thermochemical layer (possibly the D″ layer) in the model helps stabilize the plume root. Our modelling is the first implementation of a multi-scale mesh in a regional model, and it proves to be an effective way of modelling regional dynamics within a global plate motion and mantle dynamics background.

  11. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    Science.gov (United States)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

    Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, and the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, when the walker was half its present age, with a standard deviation σt that grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
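The memory mechanism described above can be illustrated with a toy simulation. This is a sketch under stated assumptions: the repeat/reverse update rule and the parameter names follow the Elephant-walk convention, not the paper's exact formulation. At each step the walker recalls a past step drawn from a Gaussian centred at t/2 whose width grows linearly with t, and repeats or reverses it:

```python
import random

def gaussian_memory_walk(n_steps, p=1.0, width=0.1, seed=0):
    """Toy non-Markovian walk: at each time t the walker recalls a past step
    drawn from a Gaussian centred at t/2 with std width*t (clipped to the walk's
    past), then repeats it with probability p or reverses it otherwise."""
    rng = random.Random(seed)
    steps = [1 if rng.random() < 0.5 else -1]      # first step is unbiased
    for t in range(1, n_steps):
        k = int(round(rng.gauss(t / 2.0, max(width * t, 1e-9))))
        k = min(max(k, 0), t - 1)                  # clip the recalled time to [0, t-1]
        recalled = steps[k]
        steps.append(recalled if rng.random() < p else -recalled)
    return [sum(steps[:i + 1]) for i in range(n_steps)]  # trajectory x(t)
```

Varying `width` interpolates between Elephant-like behaviour (large widths) and Alzheimer-like behaviour (small widths), which is the regime contrast the abstract reports.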

  12. Volcanic Plume Elevation Model Derived From Landsat 8: examples on Holuhraun (Iceland) and Mount Etna (Italy)

    Science.gov (United States)

    de Michele, Marcello; Raucoules, Daniel; Arason, Þórður; Spinetti, Claudia; Corradini, Stefano; Merucci, Luca

    2016-04-01

    The retrieval of both the height and the velocity of a volcanic plume is an important issue in volcanology. For example, it is known that large volcanic eruptions can temporarily alter the climate, causing global cooling and shifting precipitation patterns; the dispersion of ash and gas in the atmosphere, and their impact and lifetime around the globe, greatly depend on the injection altitude. Plume height information is critical for ash dispersion modelling and air traffic security. Furthermore, plume height during explosive volcanism is the primary parameter for estimating mass eruption rate. Knowing the plume altitude is also important for retrieving the correct SO2 concentration from dedicated spaceborne spectrometers. Moreover, the distribution of ash deposits on the ground greatly depends on the ash cloud altitude, which has an impact on risk assessment and crisis management. A spatially detailed plume height measurement could also serve as a constraint for gas emission rate estimation and for ash plume volume studies, both of which bear on climate research, air quality assessment for aviation and, finally, the understanding of the volcanic system itself, as ash and gas emission rates are related to the state of pressurization of the magmatic chamber. Today, the community relies mainly on ground-based measurements, but these can be difficult to collect, since volcanic areas are by definition dangerous (presence of toxic gases) and can be remote and difficult to access. Satellite remote sensing offers a comprehensive and safe way to estimate plume height. Conventional photogrammetric restitution based on satellite imagery fails to precisely retrieve a plume elevation model because the plume's own velocity induces an apparent parallax that adds to the standard parallax given by the stereoscopic view. Therefore, measurements based on standard satellite photogrammetric restitution do not apply, as there is an ambiguity in the measurement of the plume position

  13. A Robust Non-Gaussian Data Assimilation Method for Highly Non-Linear Models

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2018-03-01

    Full Text Available In this paper, we propose an efficient EnKF implementation for non-Gaussian data assimilation based on Gaussian Mixture Models and Markov chain Monte Carlo (MCMC) methods. The proposed method works as follows: based on an ensemble of model realizations, prior errors are estimated via a Gaussian mixture density whose parameters are approximated by means of an Expectation Maximization method. Then, using an iterative method, observation operators are linearized about current solutions and posterior modes are estimated via an MCMC implementation. The acceptance/rejection criterion is similar to that of the Metropolis-Hastings rule. Experimental tests are performed on the Lorenz 96 model. The results show that the proposed method can decrease prior errors by several orders of magnitude, in a root-mean-square-error sense, for nearly sparse or dense observational networks.
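The first stage of the scheme above, fitting a Gaussian mixture to ensemble errors by Expectation Maximization, can be sketched in a minimal one-dimensional, two-component form (a pure-Python illustration; the function name and the fixed component count are demo assumptions, not the paper's implementation):

```python
import math

def em_gmm_1d(data, n_iter=200):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    Returns (weights, means, variances)."""
    w = [0.5, 0.5]
    mu = [min(data), max(data)]        # crude but well-separated initialization
    var = [1.0, 1.0]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var
```

Fitting samples drawn from two well-separated Gaussians recovers means close to the true component centres, which is the behaviour the prior-error estimation step depends on.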

  14. Non-gaussian turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hoejstrup, J [NEG Micon Project Development A/S, Randers (Denmark); Hansen, K S [Denmarks Technical Univ., Dept. of Energy Engineering, Lyngby (Denmark); Pedersen, B J [VESTAS Wind Systems A/S, Lem (Denmark); Nielsen, M [Risoe National Lab., Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    The pdfs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour is being investigated using data from a large WEB-database in order to quantify the amount of non-Gaussianity. Models for non-Gaussian turbulence have been developed, by which artificial turbulence can be generated with specified distributions, spectra and cross-correlations. The artificial time series will then be used in load models, and the resulting loads in the Gaussian and non-Gaussian cases will be compared. (au)

  15. Plume Tracker: Interactive mapping of volcanic sulfur dioxide emissions with high-performance radiative transfer modeling

    Science.gov (United States)

    Realmuto, Vincent J.; Berk, Alexander

    2016-11-01

    We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300 × improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ± 20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ± 50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.

  16. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, either graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by this method, which is also a useful reference for the study of three-dimensional microscopic image deconvolution.
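The simplification at the heart of this method rests on a standard identity: the convolution of two Gaussians is again a Gaussian whose variances add. A small numerical check of that identity (illustrative only; the grid step and span are arbitrary demo choices):

```python
import math

def conv_variance(sigma1, sigma2, step=0.05, span=10.0):
    """Numerically convolve two zero-mean Gaussian densities and return the
    variance of the result; analytically it equals sigma1^2 + sigma2^2."""
    n = int(span / step)
    xs = [i * step for i in range(-n, n + 1)]

    def g(x, s):
        return math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

    f1 = [g(x, sigma1) for x in xs]
    # (f1 * g2)(xj) = sum_i f1(xi) g2(xj - xi) * step  -- O(n^2), fine for a demo
    conv = [step * sum(a * g(xj - xi, sigma2) for a, xi in zip(f1, xs)) for xj in xs]
    norm = step * sum(conv)
    mean = step * sum(x * c for x, c in zip(xs, conv)) / norm
    return step * sum((x - mean) ** 2 * c for x, c in zip(xs, conv)) / norm
```

With σ1 = 1 and σ2 = 2 the numerical variance comes out close to 1² + 2² = 5, which is why the degraded image can be treated as a single Gaussian-blurred GRBF model.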

  17. 'The formula that killed Wall Street': the Gaussian copula and modelling practices in investment banking.

    Science.gov (United States)

    MacKenzie, Donald; Spears, Taylor

    2014-06-01

    Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.

  18. An Overview of Plume Tracker: Mapping Volcanic Emissions with Interactive Radiative Transfer Modeling

    Science.gov (United States)

    Realmuto, V. J.; Berk, A.; Guiang, C.

    2014-12-01

    Infrared remote sensing is a vital tool for the study of volcanic plumes, and radiative transfer (RT) modeling is required to derive quantitative estimation of the sulfur dioxide (SO2), sulfate aerosol (SO4), and silicate ash (pulverized rock) content of these plumes. In the thermal infrared, we must account for the temperature, emissivity, and elevation of the surface beneath the plume, plume altitude and thickness, and local atmospheric temperature and humidity. Our knowledge of these parameters is never perfect, and interactive mapping allows us to evaluate the impact of these uncertainties on our estimates of plume composition. To enable interactive mapping, the Jet Propulsion Laboratory is collaborating with Spectral Sciences, Inc., (SSI) to develop the Plume Tracker toolkit. This project is funded by a NASA AIST Program Grant (AIST-11-0053) to SSI. Plume Tracker integrates (1) retrieval procedures for surface temperature and emissivity, SO2, NH3, or CH4 column abundance, and scaling factors for H2O vapor and O3 profiles, (2) a RT modeling engine based on MODTRAN, and (3) interactive visualization and analysis utilities under a single graphics user interface. The principal obstacle to interactive mapping is the computational overhead of the RT modeling engine. Under AIST-11-0053 we have achieved a 300-fold increase in the performance of the retrieval procedures through the use of indexed caches of model spectra, optimization of the minimization procedures, and scaling of the effects of surface temperature and emissivity on model radiance spectra. In the final year of AIST-11-0053 we will implement parallel processing to exploit multi-core CPUs and cluster computing, and optimize the RT engine to eliminate redundant calculations when iterating over a range of gas concentrations. These enhancements will result in an additional 8 - 12X increase in performance. In addition to the improvements in performance, we have improved the accuracy of the Plume Tracker

  19. Modeling Laser and e-Beam Generated Plasma-Plume Experiments Using LASNEX

    CERN Document Server

    Ho, D

    1999-01-01

    The hydrodynamics code LASNEX is used to model the laser- and e-beam-generated plasma-plume experiments. The laser used has a wavelength of 1 μm and a FWHM spot size of 1 mm; the total laser energy is 160 mJ. The simulation shows that the plume expands at a velocity of about 6 cm/μs. The e-beam generated by the Experimental Test Accelerator (ETA) has an energy of 5.5 MeV and a FWHM spot size ranging from 2 to 3.3 mm. From the simulations, the plasma plume expansion velocity ranges from about 3 to 6 mm/μs, and the velocity increases with decreasing spot size. All the simulation results reported here are in close agreement with experimental data.

  20. Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.

    Science.gov (United States)

    Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M

    2009-04-03

    We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. Therefore, surrogate data sets are generated, in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures for non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
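The surrogate construction described above, preserving the power spectrum while randomizing Fourier phases, can be sketched for a real-valued series (naive O(n²) DFT in pure Python; the paper's scale-dependent shuffling is replaced here by full phase randomization for brevity):

```python
import cmath, math, random

def phase_randomized_surrogate(x, seed=0):
    """Surrogate series with the same power spectrum as x but randomized
    Fourier phases (naive O(n^2) DFT; fine for short demo series)."""
    n = len(x)
    rng = random.Random(seed)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    Y = list(X)
    for k in range(1, (n + 1) // 2):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)   # new phase, same amplitude
        Y[n - k] = Y[k].conjugate()              # Hermitian symmetry -> real output
    # k = 0 (and k = n/2 for even n) must stay real, so they are left unchanged
    return [sum(Y[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]
```

Because only phases change, the surrogate keeps the mean and total power of the original series exactly (Parseval's theorem), while higher-order correlations are destroyed, which is the property the scaling-index test statistic probes.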

  1. Observation and modeling of the evolution of Texas power plant plumes

    Directory of Open Access Journals (Sweden)

    W. Zhou

    2012-01-01

    Full Text Available During the second Texas Air Quality Study 2006 (TexAQS II), a full range of pollutants was measured by aircraft in eastern Texas during successive transects of power plant plumes (PPPs). A regional photochemical model is applied to simulate the physical and chemical evolution of the plumes. The observations reveal that SO2 and NOy were rapidly removed from PPPs on a cloudy day but not on the cloud-free days, indicating efficient aqueous processing of these compounds in clouds. The model reasonably represents the observed NOx oxidation and PAN formation in the plumes, but fails to capture the rapid loss of SO2 (0.37 h⁻¹) and NOy (0.24 h⁻¹) in some plumes on the cloudy day. Adjustments to the cloud liquid water content (QC) and the default metal concentrations in the cloud module could explain some of the SO2 loss. However, NOy in the model was insensitive to QC. These findings highlight cloud processing as a major challenge for atmospheric models. Model-based estimates of the ozone production efficiency (OPE) in PPPs are 20–50% lower than observation-based estimates for the cloudy day.

  2. A study of the atmospheric dispersion of a high release of krypton-85 above a complex coastal terrain, comparison with the predictions of Gaussian models (Briggs, Doury, ADMS4).

    Science.gov (United States)

    Leroy, C; Maro, D; Hébert, D; Solier, L; Rozet, M; Le Cavelier, S; Connan, O

    2010-11-01

    Atmospheric releases of krypton-85, from the nuclear fuel reprocessing plant at the AREVA NC facility at La Hague (France), were used to test Gaussian models of dispersion. In 2001-2002, the French Institute for Radiological Protection and Nuclear Safety (IRSN) studied the atmospheric dispersion of 15 releases, using krypton-85 as a tracer for plumes emitted from two 100-m-high stacks. Krypton-85 is a chemically inert radionuclide. Krypton-85 air concentration measurements were performed on the ground in the downwind direction, at distances between 0.36 and 3.3 km from the release, under neutral or slightly unstable atmospheric conditions. The standard deviation for the horizontal dispersion of the plume and the Atmospheric Transfer Coefficient (ATC) were determined from these measurements. The experimental results were compared with calculations using first-generation (Doury, Briggs) and second-generation (ADMS 4.0) Gaussian models. The ADMS 4.0 model was used in two configurations; one takes account of the effect of the built-up area, and the other the effect of the roughness of the surface on the plume dispersion. Only the Briggs model correctly reproduced the measured values for the width of the plume, whereas the ADMS 4.0 model overestimated it and the Doury model underestimated it. The agreement of the models with measured values of the ATC varied with distance from the release point. For distances less than 2 km from the release point, the ADMS 4.0 model achieved the best agreement between model and measurement; beyond this distance, the best agreement was achieved by the Briggs and Doury models.
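The Briggs parameterization tested above expresses the horizontal spread σy as a simple analytic function of downwind distance for each Pasquill stability class. A sketch using the commonly tabulated open-country ("rural") coefficients (values as usually quoted for roughly 0.1–10 km downwind; treat them as an assumption to verify against the original reference):

```python
import math

# Briggs open-country sigma_y formulas (metres), as commonly tabulated for
# Pasquill stability classes A (very unstable) through F (stable).
BRIGGS_RURAL_SIGMA_Y = {
    'A': lambda x: 0.22 * x / math.sqrt(1.0 + 0.0001 * x),
    'B': lambda x: 0.16 * x / math.sqrt(1.0 + 0.0001 * x),
    'C': lambda x: 0.11 * x / math.sqrt(1.0 + 0.0001 * x),
    'D': lambda x: 0.08 * x / math.sqrt(1.0 + 0.0001 * x),
    'E': lambda x: 0.06 * x / math.sqrt(1.0 + 0.0001 * x),
    'F': lambda x: 0.04 * x / math.sqrt(1.0 + 0.0001 * x),
}

def sigma_y(stability, x):
    """Horizontal plume spread (m) at downwind distance x (m) for a Pasquill class."""
    return BRIGGS_RURAL_SIGMA_Y[stability](x)
```

Under slightly unstable conditions (class C) at 1 km downwind this gives σy ≈ 105 m, the kind of plume-width value compared against the krypton-85 measurements above.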

  3. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    Science.gov (United States)

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth-sensor-based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced, as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion, which is beneficial for real-time applications such as motion-based gaming and sport training.

  4. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  5. Ground states and formal duality relations in the Gaussian core model

    NARCIS (Netherlands)

    Cohn, H.; Kumar, A.; Schürmann, A.

    2009-01-01

    We study dimensional trends in ground states for soft-matter systems. Specifically, using a high-dimensional version of Parrinello-Rahman dynamics, we investigate the behavior of the Gaussian core model in up to eight dimensions. The results include unexpected geometric structures, with surprising

  6. Fractional statistics in 2+1 dimensions through the Gaussian model

    International Nuclear Information System (INIS)

    Murthy, G.

    1986-01-01

    The free massless field in 2+1 dimensions is written as an "integral" over free massless fields in 1+1 dimensions. Taking the operators with fractional dimension in the Gaussian model as a springboard, we construct operators with fractional statistics in the former theory.

  7. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  8. An equivalence between the discrete Gaussian model and a generalized Sine Gordon theory on a lattice

    International Nuclear Information System (INIS)

    Baskaran, G.; Gupte, N.

    1983-11-01

    We demonstrate an equivalence between the statistical mechanics of the discrete Gaussian model and a generalized Sine-Gordon theory on a Euclidean lattice in arbitrary dimensions. The connection is obtained by a simple transformation of the partition function and is non-perturbative in nature. (author)

  9. Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models

    NARCIS (Netherlands)

    Koopman, S.J.; Lucas, A.; Scharth, M.

    2015-01-01

    We propose a general likelihood evaluation method for nonlinear non-Gaussian state-space models using the simulation-based method of efficient importance sampling. We minimize the simulation effort by replacing some key steps of the likelihood estimation procedure by numerical integration. We refer

  10. Footprint (A Screening Model for Estimating the Area of a Plume Produced from Gasoline Containing Ethanol)

    Science.gov (United States)

    FOOTPRINT is a simple and user-friendly screening model to estimate the length and surface area of BTEX plumes in ground water produced from a spill of gasoline that contains ethanol. Ethanol has a potential negative impact on the natural biodegradation of BTEX compounds in groun...

  11. Modeling Emissions and Vertical Plume Transport of Crop Residue Burning Experiments in the Pacific Northwest

    Science.gov (United States)

    Zhou, L.; Baker, K. R.; Napelenok, S. L.; Pouliot, G.; Elleman, R. A.; ONeill, S. M.; Urbanski, S. P.; Wong, D. C.

    2017-12-01

    Crop residue burning has long been a common practice in agriculture with the smoke emissions from the burning linked to negative health impacts. A field study in eastern Washington and northern Idaho in August 2013 consisted of multiple burns of well characterized fuels with nearby surface and aerial measurements including trace species concentrations, plume rise height and boundary layer structure. The chemical transport model CMAQ (Community Multiscale Air Quality Model) was used to assess the fire emissions and subsequent vertical plume transport. The study first compared assumptions made by the 2014 National Emission Inventory approach for crop residue burning with the fuel and emissions information obtained from the field study and then investigated the sensitivity of modeled carbon monoxide (CO) and PM2.5 concentrations to these different emission estimates and plume rise treatment with CMAQ. The study suggests that improvements to the current parameterizations are needed in order for CMAQ to reliably reproduce smoke plumes from burning. In addition, there is enough variability in the smoke emissions, stemming from variable field-specific information such as field size, that attempts to model crop residue burning should use field-specific information whenever possible.

  12. Reactive transport modelling of biogeochemical processes and carbon isotope geochemistry inside a landfill leachate plume.

    NARCIS (Netherlands)

    van Breukelen, B.M.; Griffioen, J.; Roling, W.F.M.; van Verseveld, H.W.

    2004-01-01

    The biogeochemical processes governing leachate attenuation inside a landfill leachate plume (Banisveld, the Netherlands) were revealed and quantified using the 1D reactive transport model PHREEQC-2. Biodegradation of dissolved organic carbon (DOC) was simulated assuming first-order oxidation of two

  13. Evaluation of Gaussian approximations for data assimilation in reservoir models

    KAUST Repository

    Iglesias, Marco A.; Law, Kody J H; Stuart, Andrew M.

    2013-01-01

    is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based

  14. Bayes factor between Student t and Gaussian mixed models within an animal breeding context

    Directory of Open Access Journals (Sweden)

    García-Cortés Luis

    2008-07-01

    Full Text Available The implementation of Student t mixed models in animal breeding has been suggested as a useful statistical tool to effectively mute the impact of preferential treatment or other sources of outliers in field data. Nevertheless, these additional sources of variation are undeclared, and we do not know whether a Student t mixed model is required or whether a standard, and less parameterized, Gaussian mixed model would be sufficient to serve the intended purpose. Within this context, our aim was to develop the Bayes factor between two nested models that differ only in a bounded variable, in order to easily compare a Student t and a Gaussian mixed model. It is important to highlight that the Student t density converges to a Gaussian process when the degrees of freedom tend to infinity. The two models can then be viewed as nested models that differ in terms of degrees of freedom. The Bayes factor can be easily calculated from the output of a Markov chain Monte Carlo sampling of the complex model (the Student t mixed model). The performance of this Bayes factor was tested under simulation and on a real dataset, using the deviance information criterion (DIC) as the standard reference criterion. The two statistical tools showed similar trends across the parameter space, although the Bayes factor appeared to be the more conservative. There was considerable evidence favoring the Student t mixed model for data sets simulated under Student t processes with limited degrees of freedom, and moderate advantages associated with using the Gaussian mixed model when working with datasets simulated with 50 or more degrees of freedom. For the analysis of real data (weight of Pietrain pigs at six months), both the Bayes factor and DIC slightly favored the Student t mixed model, with a reduced incidence of outlier individuals in this population.
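The nesting argument above relies on the Student t density converging to the Gaussian as the degrees of freedom grow. A quick numerical check (log-gamma is used to avoid overflow of the gamma function at large ν; purely illustrative):

```python
import math

def student_t_pdf(x, nu):
    """Standard Student t density; lgamma avoids overflow for large nu."""
    log_c = (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
             - 0.5 * math.log(nu * math.pi))
    return math.exp(log_c) * (1.0 + x * x / nu) ** (-(nu + 1.0) / 2.0)

def normal_pdf(x):
    """Standard Gaussian density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def max_gap(nu):
    """Largest pointwise gap between the t and Gaussian densities on [-4, 4]."""
    grid = [i / 10.0 for i in range(-40, 41)]
    return max(abs(student_t_pdf(x, nu) - normal_pdf(x)) for x in grid)
```

The gap shrinks roughly like 1/ν, which is consistent with the abstract's finding that the Gaussian mixed model becomes hard to distinguish from the Student t model at 50 or more degrees of freedom.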

  15. Mathematical model of drift deposition from a bifurcated cooling tower plume

    International Nuclear Information System (INIS)

    Chen, N.C.J.; Jung, L.

    1978-01-01

    Cooling tower drift deposition modeling has been extended by including centrifugal force induced through plume bifurcation in a crosswind as a mechanism for drift droplet removal from the plume. The model, in its current state of development, is capable of predicting the trajectory of a single droplet from the stage of strong interaction with the vortex field soon after droplet emission at the tower top through the stage of droplet evaporation in an unsaturated atmosphere after droplet breakaway from the plume. The computer program developed from the mathematical formulation has been used to explore the dependency of the droplet trajectory on droplet size, vortex strength, point of droplet emission, drag coefficient, droplet efflux speed, and ambient conditions. A specific application to drift from a mechanical-draft cooling tower (for a wind speed twice the efflux speed, a relative humidity of 70 per cent, and an initial droplet radius of 100 μm) showed the droplet to follow a helical trajectory within the plume, with breakaway occurring at 2.5 tower diameters downwind and ground impact of the droplet (reduced through evaporation to 55 μm radius) at 11 tower diameters

  16. Numerical Modeling of Water Thermal Plumes Emitted by Thermal Power Plants

    Directory of Open Access Journals (Sweden)

    Azucena Durán-Colmenares

    2016-10-01

    Full Text Available This work focuses on the study of thermal dispersion of plumes emitted by power plants into the sea. Wastewater discharge from power stations causes impacts that require investigation or monitoring. A study to characterize the physical effects of thermal plumes into the sea is carried out here by numerical modeling and field measurements. The case study is the thermal discharges of the Presidente Adolfo López Mateos Power Plant, located in Veracruz, on the coast of the Gulf of Mexico. This plant is managed by the Federal Electricity Commission of Mexico. The physical effects of such plumes are related to the increase of seawater temperature caused by the hot water discharge of the plant. We focus on the implementation, calibration, and validation of the Delft3D-FLOW model, which solves the shallow-water equations. The numerical simulations consider a critical scenario where meteorological and oceanographic parameters are taken into account to reproduce the proper physical conditions of the environment. The results show a local physical effect of the thermal plumes within the study zone, given the predominant strong winds conditions of the scenario under study.

  17. Modeling and Mechanisms of Intercontinental Transport of Biomass-Burning Plumes

    Science.gov (United States)

    Reid, J. S.; Westphal, D. L.; Christopher, S. A.; Prins, E. M.; Justice, C. O.; Richardson, K. A.; Reid, E. A.; Eck, T. F.

    2003-12-01

    With the aid of fire products from GOES and MODIS, the NRL Aerosol Analysis and Prediction System (NAAPS) successfully monitors and predicts the formation and transport of massive smoke plumes between the continents in near real time. The goal of this system, formed under the joint Navy, NASA, and NOAA sponsored Fire Locating and Modeling of Burning Emissions (FLAMBE) project, is to provide 5-day forecasts of large biomass-burning plumes and evaluate impacts on air quality, visibility, and regional radiative balance. In this paper we discuss and compare the mechanisms of intercontinental transport from the three most important source regions in the world prone to long-range advection: Africa, South/Central America, and Siberia. We demonstrate how these regions impact neighboring continents. As the meteorologies of these three regions are distinct, differences in transport phenomena result, particularly with respect to vertical distribution. Specific examples are given of prediction and of the impact of Siberian and Central American smoke plumes on the United States, as well as of transport phenomena from Africa to Australia. We present rules of thumb for radiation and air quality impacts. We also model clear-sky bias (both positive and negative) with respect to MODIS data, and show the frequency with which frontal advection of smoke plumes masks remote sensing retrievals of smoke optical depth.

  18. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.

  19. An integrated numerical model for the prediction of Gaussian and billet shapes

    International Nuclear Information System (INIS)

    Hattel, J.H.; Pryds, N.H.; Pedersen, T.B.

    2004-01-01

    Separate models for the atomisation and the deposition stages were recently integrated by the authors to form a unified model describing the entire spray-forming process. In the present paper, the focus is on describing the shape of the deposited material during the spray-forming process, obtained by this model. After a short review of the models and their coupling, the important factors which influence the resulting shape, i.e. Gaussian or billet, are addressed. The key parameters, which are utilized to predict the geometry and dimension of the deposited material, are the sticking efficiency and the shading effect for Gaussian and billet shape, respectively. From the obtained results, the effect of these parameters on the final shape is illustrated

  20. Particle-in-cell vs straight line Gaussian calculations for an area of complex topography

    International Nuclear Information System (INIS)

    Lange, R.; Sherman, C.

    1977-01-01

    Two numerical models for the calculation of time-integrated air concentration and ground deposition of airborne radioactive effluent releases are compared. The time-dependent Particle-in-Cell (PIC) model and the steady-state Gaussian plume model were used for the simulation. The area selected for the comparison was the Hudson River Valley, New York. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions. In addition, because of its consideration of exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model
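For reference, the steady-state Gaussian plume formula underlying the comparison can be sketched in a few lines. The linear dispersion coefficients a and b below are illustrative assumptions, not regulatory Pasquill-Gifford values:

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    # Steady-state Gaussian plume with ground reflection (image source at -H).
    # Q: source strength (g/s), u: wind speed (m/s), H: effective release height (m).
    # sigma_y, sigma_z grow linearly with downwind distance x here, an
    # illustrative assumption; real studies use stability-class curves.
    sigma_y, sigma_z = a * x, b * x
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration 2 km downwind of a 50 m release:
c = gaussian_plume(Q=100.0, u=4.0, x=2000.0, y=0.0, z=0.0, H=50.0)
```

The crosswind symmetry and the image-source reflection term are the features a time-dependent particle model relaxes when the flow reverses.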

  1. Factoring variations in natural images with deep Gaussian mixture models

    OpenAIRE

    van den Oord, Aäron; Schrauwen, Benjamin

    2014-01-01

    Generative models can be seen as the Swiss Army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods is that they often are harder to scale. As ...

  2. Modelling Inverse Gaussian Data with Censored Response Values: EM versus MCMC

    Directory of Open Access Journals (Sweden)

    R. S. Sparks

    2011-01-01

    Full Text Available Low detection limits are common when measuring environmental variables. Building models from data containing low or high detection limits without adjusting for the censoring produces biased models. This paper offers approaches to estimating an inverse Gaussian distribution when some of the data used are censored because of low or high detection limits. Adjustments for the censoring can be made if there is between 2% and 20% censoring, using either the EM algorithm or MCMC. This paper compares these approaches.
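A minimal sketch of the censoring adjustment, using direct maximum likelihood with scipy rather than the paper's EM or MCMC schemes: non-detects contribute their CDF mass below the detection limit instead of a density term. The 10% censoring level and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
data = stats.invgauss.rvs(mu=0.5, scale=2.0, size=500, random_state=rng)
lod = np.quantile(data, 0.1)                 # detection limit: ~10% censored
observed = data[data >= lod]
n_censored = int((data < lod).sum())

def neg_loglik(params):
    # Censored likelihood: density term for detects, CDF mass below the
    # limit for non-detects. Log-parameterized to keep mu, scale positive.
    mu, scale = np.exp(params)
    ll = stats.invgauss.logpdf(observed, mu, scale=scale).sum()
    ll += n_censored * stats.invgauss.logcdf(lod, mu, scale=scale)
    return -ll

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
mu_hat, scale_hat = np.exp(res.x)
```

Ignoring the CDF term (i.e., fitting only the detects) is what produces the bias the paper warns about.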

  3. A Gaussian beam method for ultrasonic non-destructive evaluation modeling

    Science.gov (United States)

    Jacquet, O.; Leymarie, N.; Cassereau, D.

    2018-05-01

    The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it may encounter several limitations in reproducing identified highly interferential features. Nevertheless, some can be managed by allowing paraxial quantities to be complex-valued. This gives rise to localized solutions, known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well-defined, the fact remains that the adopted complexification introduces additional initial conditions. While their choice is usually performed according to strategies specifically tailored to limited applications, a Gabor frame method has been implemented to indiscriminately initialize a reasonable number of paraxial Gaussian beams. Since this method can be applied for a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or analytical on-axis solution) and the resulting Gaussian beam superposition. Sparsity of the proposed solution is also discussed.

  4. Time Series Analysis of Non-Gaussian Observations Based on State Space Models from Both Classical and Bayesian Perspectives

    NARCIS (Netherlands)

    Durbin, J.; Koopman, S.J.M.

    1998-01-01

    The analysis of non-Gaussian time series using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Monte Carlo Markov chain methods are not employed. Non-Gaussian

  5. Source-term development for a contaminant plume for use by multimedia risk assessment models

    International Nuclear Information System (INIS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    1999-01-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool

  6. Water Resources Research Program. Surface thermal plumes: evaluation of mathematical models for the near and complete field

    International Nuclear Information System (INIS)

    Dunn, W.E.; Policastro, A.J.; Paddock, R.A.

    1975-08-01

    This report evaluates mathematical models that may be used to predict the flow and temperature distributions resulting from heated surface discharges from power-plant outfalls. Part One discusses the basic physics of surface-plume dispersion and provides a critical review of 11 of the most popular and promising plume models developed to predict the near- and complete-field plume. Part Two compares predictions from the models to prototype data, laboratory data, or both. Part Two also provides a generic discussion of the issues surrounding near- and complete-field modeling. The principal conclusion of the report is that the available models, in their present stage of development, may be used to give only general estimates of plume characteristics; precise predictions are not currently possible. The Shirazi-Davis and Pritchard (No. 1) models appear superior to the others tested and are capable of correctly predicting general plume characteristics. (The predictions show roughly factor-of-two accuracy in centerline distance to a given isotherm, factor-of-two accuracy in plume width, and factor-of-five accuracy in isotherm areas.) The state of the art can best be improved by pursuing basic laboratory studies of plume dispersion along with further development of numerical-modeling techniques

  7. Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions

    Science.gov (United States)

    Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio

    1993-02-01

    The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.

  8. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  9. Gaussian tunneling model of c-axis twist Josephson junctions

    International Nuclear Information System (INIS)

    Bille, A.; Klemm, R.A.; Scharnberg, K.

    2001-01-01

    We calculate the critical current density J_c^J(φ_0) for Josephson tunneling between identical high-temperature superconductors twisted an angle φ_0 about the c axis. Regardless of the shape of the two-dimensional Fermi surface and for very general tunneling matrix elements, an order parameter (OP) with general d-wave symmetry leads to J_c^J(π/4) = 0. This general result is inconsistent with the data of Li et al. [Phys. Rev. Lett. 83, 4160 (1999)] on Bi2Sr2CaCu2O8+δ (Bi2212), which showed J_c^J to be independent of φ_0. If the momentum parallel to the barrier is conserved in the tunneling process, J_c^J should vary substantially with the twist angle φ_0 when the tight-binding Fermi surface appropriate for Bi2212 is taken into account, even if the OP is completely isotropic. We quantify the degree of momentum nonconservation necessary to render J_c^J(φ_0) constant within experimental error for a variety of pair states by interpolating between the coherent and incoherent limits, using five specific models to describe the momentum dependence of the tunneling matrix element squared. From the data of Li et al., we conclude that the c-axis tunneling in Bi2212 must be very nearly incoherent, and that the OP must have a nonvanishing Fermi-surface average for T near T_c. We further show that the apparent conventional sum-rule violation observed by Basov et al. [Science 283, 49 (1999)] can be consistent with such strongly incoherent c-axis tunneling.

  10. Discontinuous Galerkin modeling of the Columbia River's coupled estuary-plume dynamics

    Science.gov (United States)

    Vallaeys, Valentin; Kärnä, Tuomas; Delandmeter, Philippe; Lambrechts, Jonathan; Baptista, António M.; Deleersnijder, Eric; Hanert, Emmanuel

    2018-04-01

    The Columbia River (CR) estuary is characterized by high river discharge and strong tides that generate high velocity flows and sharp density gradients. Its dynamics strongly affects the coastal ocean circulation. Tidal straining in turn modulates the stratification in the estuary. Simulating the hydrodynamics of the CR estuary and plume therefore requires a multi-scale model as both shelf and estuarine circulations are coupled. Such a model has to keep numerical dissipation as low as possible in order to correctly represent the plume propagation and the salinity intrusion in the estuary. Here, we show that the 3D baroclinic discontinuous Galerkin finite element model SLIM 3D is able to reproduce the main features of the CR estuary-to-ocean continuum. We introduce new vertical discretization and mode splitting that allow us to model a region characterized by complex bathymetry and sharp density and velocity gradients. Our model takes into account the major forcings, i.e. tides, surface wind stress and river discharge, on a single multi-scale grid. The simulation period covers the end of spring-early summer of 2006, a period of high river flow and strong changes in the wind regime. SLIM 3D is validated with in-situ data on the shelf and at multiple locations in the estuary and compared with an operational implementation of SELFE. The model skill in the estuary and on the shelf indicate that SLIM 3D is able to reproduce the key processes driving the river plume dynamics, such as the occurrence of bidirectional plumes or reversals of the inner shelf coastal currents.

  11. The influence of model resolution on ozone in industrial volatile organic compound plumes.

    Science.gov (United States)

    Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G

    2010-09-01

    Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating such high O3 production requires a superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-km), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) in both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration 3% but only decreases the maximum Ox concentration 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivity to oxides of nitrogen (NOx) and VOC emissions suggests resolution-specific sensitivity to NOx and VOC emissions. Different sensitivities to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind). Sensitivity of O3 to emission changes also results in different simulated O3 responses to the same control strategies. Sensitivity of O3 to NOx and VOC emission changes is attributed to the more finely resolved Eulerian grid and more finely resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with Plume-in-Grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. On the basis of these results, the

  12. Water Resources Research Program. Surface thermal plumes: evaluation of mathematical models for the near and complete field

    International Nuclear Information System (INIS)

    Dunn, W.E.; Policastro, A.J.; Paddock, R.A.

    1975-05-01

    This report evaluates mathematical models that may be used to predict the flow and temperature distributions resulting from heated surface discharges from power-plant outfalls. Part One discusses the basic physics of surface-plume dispersion and provides a critical review of 11 of the most popular and promising plume models developed to predict the near- and complete-field plume. The principal conclusion of the report is that the available models, in their present stage of development, may be used to give only general estimates of plume characteristics; precise predictions are not currently possible. The Shirazi-Davis and Pritchard (No. 1) models appear superior to the others tested and are capable of correctly predicting general plume characteristics. (The predictions show roughly factor-of-two accuracy in centerline distance to a given isotherm, factor-of-two accuracy in plume width, and factor-of-five accuracy in isotherm areas.) The state of the art can best be improved by pursuing basic laboratory studies of plume dispersion along with further development of numerical-modeling techniques

  13. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available As one of the most critical issues for target tracking, the α-jerk model is an effective maneuvering-target tracking model. Non-Gaussian noises always exist in the tracking process, which usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least-squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with weighted least squares based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is also analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
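The core of the maximum correntropy update can be sketched for a scalar random-walk state. This is a hedged, one-pass simplification: the full filter iterates a fixed point and the paper additionally adapts the kernel size, neither of which is shown here.

```python
import numpy as np

def kalman_step(x, P, z, Q, R):
    # Standard scalar Kalman update for a random-walk state model.
    P = P + Q                      # predict
    K = P / (P + R)                # gain
    return x + K * (z - x), (1.0 - K) * P

def mcc_kalman_step(x, P, z, Q, R, kernel_sigma=2.0):
    # Maximum correntropy variant: the innovation is weighted by a Gaussian
    # kernel, so a large innovation inflates the effective measurement noise.
    P = P + Q
    e = z - x
    w = max(np.exp(-e**2 / (2.0 * kernel_sigma**2)), 1e-12)
    K = P / (P + R / w)
    return x + K * e, (1.0 - K) * P

# A gross outlier (z = 100) barely moves the MCC estimate:
x_kf, _ = kalman_step(0.0, 1.0, 100.0, Q=0.01, R=1.0)
x_mcc, _ = mcc_kalman_step(0.0, 1.0, 100.0, Q=0.01, R=1.0)
```

With kernel_sigma = 2, an innovation of 100 receives essentially zero weight, so the MCC estimate stays near zero while the standard gain of about 0.5 drags the ordinary Kalman estimate to roughly 50.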

  14. Soft Sensor Modeling Based on Multiple Gaussian Process Regression and Fuzzy C-mean Clustering

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    Full Text Available In order to overcome the difficulties of online measurement of some crucial biochemical variables in fermentation processes, a new soft sensor modeling method is presented based on Gaussian process regression and fuzzy C-means clustering. Considering that a typical fermentation process can be divided into 4 phases, namely the lag phase, exponential growth phase, stable phase and death phase, the training samples are classified into 4 subcategories by using the fuzzy C-means clustering algorithm. For each subcategory, the samples are trained using Gaussian process regression and the corresponding soft-sensing sub-model is established respectively. For a new sample, the memberships between this sample and the sub-models are computed based on the Euclidean distance, and then the prediction output of the soft sensor is obtained using the weighted sum. Taking lysine fermentation as an example, simulations and experiments are carried out and the corresponding results show that the presented method achieves better fitting and generalization ability than a radial basis function neural network and a single Gaussian process regression model.
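The weighting scheme described above can be sketched with a numpy-only toy: per-cluster GP posterior means (plain kernel solves) combined by fuzzy C-means style memberships. The two "phases", kernel settings, and data below are hypothetical assumptions, not the lysine case study:

```python
import numpy as np

def gp_fit(X, y, ell=0.3, noise=1e-2):
    # GP posterior-mean weights: alpha = (K + noise*I)^{-1} y, RBF kernel.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ell**2)
    return np.linalg.solve(K + noise * np.eye(len(X)), y)

def gp_predict(Xq, X, alpha, ell=0.3):
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2) @ alpha

def memberships(Xq, centers, m=2.0):
    # Fuzzy C-means style memberships from distances to cluster centers.
    d = np.linalg.norm(Xq[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

# Two hypothetical process "phases" with different local behavior:
rng = np.random.default_rng(0)
X1 = rng.uniform(0.0, 1.0, (30, 1)); y1 = np.sin(3.0 * X1[:, 0])
X2 = rng.uniform(2.0, 3.0, (30, 1)); y2 = 0.5 * X2[:, 0]
models = [(X1, gp_fit(X1, y1)), (X2, gp_fit(X2, y2))]
centers = np.array([X1.mean(axis=0), X2.mean(axis=0)])

Xq = np.array([[0.5], [2.5]])
U = memberships(Xq, centers)                                  # soft phase weights
preds = np.column_stack([gp_predict(Xq, X, a) for X, a in models])
y_hat = (U * preds).sum(axis=1)                               # weighted sum
```

A query deep inside one phase gets nearly all its weight from that phase's sub-model, while queries near a phase boundary blend the two predictions smoothly.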

  15. Stochastic resonance in a piecewise nonlinear model driven by multiplicative non-Gaussian noise and additive white noise

    Science.gov (United States)

    Guo, Yongfeng; Shen, Yajun; Tan, Jianguo

    2016-09-01

    The phenomenon of stochastic resonance (SR) in a piecewise nonlinear model driven by a periodic signal and correlated noises for the cases of a multiplicative non-Gaussian noise and an additive Gaussian white noise is investigated. Applying the path integral approach, the unified colored noise approximation and the two-state model theory, the analytical expression of the signal-to-noise ratio (SNR) is derived. It is found that conventional stochastic resonance exists in this system. From numerical computations we obtain that: (i) As a function of the non-Gaussian noise intensity, the SNR is increased when the non-Gaussian noise deviation parameter q is increased. (ii) As a function of the Gaussian noise intensity, the SNR is decreased when q is increased. This demonstrates that the effect of the non-Gaussian noise on SNR is different from that of the Gaussian noise in this system. Moreover, we further discuss the effect of the correlation time of the non-Gaussian noise, cross-correlation strength, the amplitude and frequency of the periodic signal on SR.

  16. Visual plumes coastal dispersion modeling in southwest Sabah ...

    African Journals Online (AJOL)

    In theory, the dilution capacity of open waters, particularly coastal areas, straits and oceans are enormous. This means that for surface and sub-merged ... Prior to the modeling exercise, field data pertaining to ambient water quality, hydraulic characteristics and tide patterns were collected. The modeling results indicated that ...

  17. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    Science.gov (United States)

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it
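The automatic-relevance idea behind this feature selection can be sketched by maximizing a GP marginal likelihood with one length-scale per feature: an irrelevant feature is driven toward a large length-scale. The data and settings below are toy assumptions, not the skin-permeability dataset or the authors' MatLab pipeline:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (60, 2))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)  # feature 1 is irrelevant

def neg_log_marginal(log_ell, noise=0.05**2):
    # GP negative log marginal likelihood with one RBF length-scale per
    # feature (constant term dropped; smaller values are better).
    ell = np.exp(log_ell)
    d2 = (((X[:, None, :] - X[None, :, :]) / ell) ** 2).sum(-1)
    K = np.exp(-0.5 * d2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

res = minimize(neg_log_marginal, x0=np.zeros(2), method="Nelder-Mead")
ell_hat = np.exp(res.x)  # a large fitted length-scale flags an irrelevant feature
```

Ranking features by inverse length-scale is the usual way such a fit is read as a relevance measure.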

  18. Extreme-Strike and Small-time Asymptotics for Gaussian Stochastic Volatility Models

    OpenAIRE

    Zhang, Xin

    2016-01-01

    Asymptotic behavior of implied volatility is of our interest in this dissertation. For extreme strike, we consider a stochastic volatility asset price model in which the volatility is the absolute value of a continuous Gaussian process with arbitrary prescribed mean and covariance. By exhibiting a Karhunen-Loève expansion for the integrated variance, and using sharp estimates of the density of a general second-chaos variable, we derive asymptotics for the asset price density for large or smal...

  19. Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model

    KAUST Repository

    Wang, Shitao

    2016-05-27

    Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
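The polynomial-chaos machinery can be illustrated in one dimension with probabilists' Hermite polynomials: fit surrogate coefficients by least squares, then read the output mean and variance directly from the coefficients. The model function here is a hypothetical stand-in for a plume-model output, not the study's multi-parameter surrogate:

```python
import numpy as np
from numpy.polynomial import hermite_e as herme
from math import factorial

# One uncertain input xi ~ N(0, 1); a hypothetical stand-in response:
def model(xi):
    return xi**2  # exact mean 1, variance 2 under xi ~ N(0, 1)

degree = 3
xi_train = np.linspace(-3.0, 3.0, 50)            # training designs
V = herme.hermevander(xi_train, degree)          # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(V, model(xi_train), rcond=None)

# Orthogonality of He_k under the Gaussian measure gives moments directly:
mean = coeffs[0]
variance = sum(coeffs[k]**2 * factorial(k) for k in range(1, degree + 1))
```

The same coefficient algebra is what makes an analysis of variance cheap once the surrogate is built: each input's contribution is a partial sum of squared coefficients.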

  20. Modeling of Heat Transfer and Ablation of Refractory Material Due to Rocket Plume Impingement

    Science.gov (United States)

    Harris, Michael F.; Vu, Bruce T.

    2012-01-01

    CR Tech's Thermal Desktop-SINDA/FLUINT software was used in the thermal analysis of a flame deflector design for Launch Complex 39B at Kennedy Space Center, Florida. The analysis of the flame deflector takes into account heat transfer due to plume impingement from expected vehicles to be launched at KSC. The heat flux from the plume was computed using computational fluid dynamics provided by Ames Research Center in Moffett Field, California. The results from the CFD solutions were mapped onto a 3-D Thermal Desktop model of the flame deflector using the boundary condition mapping capabilities in Thermal Desktop. The ablation subroutine in SINDA/FLUINT was then used to model the ablation of the refractory material.

  1. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  2. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Full Text Available Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian Elimination methods (precisely the Original method and the new Meet in the Middle –MiM– algorithms and their variants) with SIMD vectorization on multi-core systems. Analytical performance models of the four methods are formulated and presented followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for a low number of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian Elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
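
    For reference, the serial kernel that the four parallel methods build on is short. A minimal sketch in plain Python/NumPy (not the paper's MiM or SIMD variants) of Gaussian elimination with partial pivoting:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = b.size
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| of column k to the diagonal
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # eliminate column k below the diagonal
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution on the resulting upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = gauss_solve(A, b)
```

    The triply nested loop structure is what the parallel methods decompose across cores; the inner row updates are also the natural target for SIMD vectorization.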

  3. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    Science.gov (United States)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. The normal inverse Gaussian parameters could not be estimated for JPY/CHF (a computational problem in the maximum likelihood fit), yet CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
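
    The fitting itself (done in R in the paper) can be reproduced in Python with SciPy. A sketch on simulated data; the parameter values and sample size are placeholders, not the FX series used above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# simulated stand-in for a daily closing-price series (not real FX data)
sample = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=0.0, scale=1.0,
                                size=2000, random_state=rng)

# maximum-likelihood fit of the four NIG parameters
a, b, loc, scale = stats.norminvgauss.fit(sample)

# goodness of fit via the Kolmogorov-Smirnov test, as in the paper
ks = stats.kstest(sample, 'norminvgauss', args=(a, b, loc, scale))
print(a, b, loc, scale, ks.pvalue)
```

    The "computational problem" reported for JPY/CHF is plausible in this setting: the NIG likelihood can be nearly flat in some directions of the four-parameter space, so the optimizer inside `fit` may fail to converge on some series.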

  4. Gaussian and Affine Approximation of Stochastic Diffusion Models for Interest and Mortality Rates

    Directory of Open Access Journals (Sweden)

    Marcus C. Christiansen

    2013-10-01

    Full Text Available In the actuarial literature, it has become common practice to model future capital returns and mortality rates stochastically in order to capture market risk and forecasting risk. Although interest rates often should and mortality rates always have to be non-negative, many authors use stochastic diffusion models with an affine drift term and additive noise. As a result, the diffusion process is Gaussian and, thus, analytically tractable, but negative values occur with positive probability. The argument is that the class of Gaussian diffusions would be a good approximation of the real future development. We challenge that reasoning and study the asymptotics of diffusion processes with affine drift and a general noise term with corresponding diffusion processes with an affine drift term and an affine noise term or additive noise. Our study helps to quantify the error that is made by approximating diffusive interest and mortality rate models with Gaussian diffusions and affine diffusions. In particular, we discuss forward interest and forward mortality rates and the error that approximations cause on the valuation of life insurance claims.

  5. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Science.gov (United States)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It allows Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified with experimental data, replayed in simulation.
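
    As a toy illustration of the Levenberg-Marquardt step (the single damped-sinusoid disturbance model, frequency, and noise level below are invented for the sketch, not taken from the paper), one can identify a vibration from measured tip-tilt data with `scipy.optimize.least_squares`:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
t = np.arange(0.0, 0.5, 1e-3)            # 0.5 s of 1 kHz tip-tilt samples

def vib(p, t):
    # hypothetical disturbance model: one damped sinusoid
    amp, freq, damp = p
    return amp * np.exp(-damp * t) * np.sin(2 * np.pi * freq * t)

truth = (1.2, 25.0, 2.0)                 # amplitude, frequency [Hz], damping
meas = vib(truth, t) + 0.05 * rng.standard_normal(t.size)

# Levenberg-Marquardt nonlinear least squares from a rough initial guess
fit = least_squares(lambda p: vib(p, t) - meas,
                    x0=(1.0, 25.5, 1.0), method='lm')
print(fit.x)
```

    Note that the frequency parameter makes the cost function multimodal, so the initial guess must already be near the true vibration peak; in practice an FFT of the tip-tilt signal would supply it.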

  6. Simulation of Mexico City plumes during the MIRAGE-Mex field campaign using the WRF-Chem model

    Directory of Open Access Journals (Sweden)

    X. Tie

    2009-07-01

    Full Text Available The quantification of tropospheric O3 production in the downwind of the Mexico City plume is a major objective of the MIRAGE-Mex field campaign. We used a regional chemistry-transport model (WRF-Chem to predict the distribution of O3 and its precursors in Mexico City and the surrounding region during March 2006, and compared the model with in-situ aircraft measurements of O3, CO, VOCs, NOx, and NOy concentrations. The comparison shows that the model is capable of capturing the timing and location of the measured city plumes, and the calculated variability along the flights is generally consistent with the measured results, showing a rapid increase in O3 and its precursors when city plumes are detected. However, there are some notable differences between the calculated and measured values, suggesting that, during transport from the surface of the city to the outflow plume, ozone mixing ratios are underestimated by about 0–25% during different flights. The calculated O3-NOx, O3-CO, and O3-NOz correlations generally agree with the measured values, and the analyses of these correlations suggest that photochemical O3 production continues in the plume downwind of the city (aged plume, adding to the O3 already produced in the city and exported with the plume. The model is also used to quantify the contributions to OH reactivity from various compounds in the aged plume. This analysis suggests that oxygenated organics (OVOCs have the highest OH reactivity and play important roles for the O3 production in the aging plume. Furthermore, O3 production per NOx molecule consumed (O3 production efficiency is more efficient in the aged plume than in the young plume near the city. The major contributor to the high O3 production efficiency in the aged plume is the

  7. An integrated numerical model for the prediction of Gaussian and billet shapes

    DEFF Research Database (Denmark)

    Hattel, Jesper; Pryds, Nini; Pedersen, Trine Bjerre

    2004-01-01

    Separate models for the atomisation and the deposition stages were recently integrated by the authors to form a unified model describing the entire spray-forming process. In the present paper, the focus is on describing the shape of the deposited material during the spray-forming process, obtained...... by this model. After a short review of the models and their coupling, the important factors which influence the resulting shape, i.e. Gaussian or billet, are addressed. The key parameters, which are utilized to predict the geometry and dimension of the deposited material, are the sticking efficiency...

  8. Modeling the Response of Primary Production and Sedimentation to Variable Nitrate Loading in the Mississippi River Plume

    National Research Council Canada - National Science Library

    Green, Rebecca E; Breed, Greg A; Dagg, Michael J; Lohrenz, Steven E

    2008-01-01

    ...% reduction in annual nitrogen discharge into the Gulf of Mexico. We developed an ecosystem model for the Mississippi River plume to investigate the response of organic matter production and sedimentation to variable nitrate loading...

  9. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes the mean and variance are equal (equidispersion). Nonetheless, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, consequently, incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution; when over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling them. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method; hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
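
    The over-dispersion mechanism behind the mixed-Poisson formulation is easy to demonstrate: drawing the Poisson rate from an inverse-Gaussian distribution inflates the variance above the mean. A sketch with arbitrary mean and shape values:

```python
import numpy as np

rng = np.random.default_rng(2)

# latent rates from an inverse-Gaussian (Wald) distribution, then counts
lam = rng.wald(mean=3.0, scale=10.0, size=20_000)
y = rng.poisson(lam)

# for a mixed Poisson, Var(Y) = E[lam] + Var(lam) > E[Y] = E[lam]
print(y.mean(), y.var())
```

    A plain Poisson fit to `y` would match the mean but understate the spread, which is exactly the underestimated-standard-error problem the abstract describes.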

  10. How Non-Gaussian Shocks Affect Risk Premia in Non-Linear DSGE Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper studies how non-Gaussian shocks affect risk premia in DSGE models approximated to second and third order. Based on an extension of the results in Schmitt-Grohé & Uribe (2004) to third order, we derive propositions for how rare disasters, stochastic volatility, and GARCH affect any risk...... premia in a wide class of DSGE models. To quantify these effects, we then set up a standard New Keynesian DSGE model where total factor productivity includes rare disasters, stochastic volatility, and GARCH. We find that rare disasters increase the mean level of the 10-year nominal term premium, whereas...

  11. A New GPU-Enabled MODTRAN Thermal Model for the PLUME TRACKER Volcanic Emission Analysis Toolkit

    Science.gov (United States)

    Acharya, P. K.; Berk, A.; Guiang, C.; Kennett, R.; Perkins, T.; Realmuto, V. J.

    2013-12-01

    Real-time quantification of volcanic gaseous and particulate releases is important for (1) recognizing rapid increases in SO2 gaseous emissions which may signal an impending eruption; (2) characterizing ash clouds to enable safe and efficient commercial aviation; and (3) quantifying the impact of volcanic aerosols on climate forcing. The Jet Propulsion Laboratory (JPL) has developed state-of-the-art algorithms, embedded in their analyst-driven Plume Tracker toolkit, for performing SO2, NH3, and CH4 retrievals from remotely sensed multi-spectral Thermal InfraRed spectral imagery. While Plume Tracker provides accurate results, it typically requires extensive analyst time. A major bottleneck in this processing is the relatively slow but accurate FORTRAN-based MODTRAN atmospheric and plume radiance model, developed by Spectral Sciences, Inc. (SSI). To overcome this bottleneck, SSI in collaboration with JPL, is porting these slow thermal radiance algorithms onto massively parallel, relatively inexpensive and commercially-available GPUs. This paper discusses SSI's efforts to accelerate the MODTRAN thermal emission algorithms used by Plume Tracker. Specifically, we are developing a GPU implementation of the Curtis-Godson averaging and the Voigt in-band transmittances from near line center molecular absorption, which comprise the major computational bottleneck. The transmittance calculations were decomposed into separate functions, individually implemented as GPU kernels, and tested for accuracy and performance relative to the original CPU code. Speedup factors of 14 to 30× were realized for individual processing components on an NVIDIA GeForce GTX 295 graphics card with no loss of accuracy. Due to the separate host (CPU) and device (GPU) memory spaces, a redesign of the MODTRAN architecture was required to ensure efficient data transfer between host and device, and to facilitate high parallel throughput. Currently, we are incorporating the separate GPU kernels into a

  12. Mathematical modelling of thermal-plume interaction at Waterford Nuclear Power Station

    International Nuclear Information System (INIS)

    Tsai, S.Y.H.

    1981-01-01

    The Waldrop plume model was used to analyze the mixing and interaction of thermal effluents in the Mississippi River resulting from heated-water discharges from the Waterford Nuclear Power Station Unit 3 and from two nearby fossil-fueled power stations. The computer program of the model was modified and expanded to accommodate the multiple intake and discharge boundary conditions at the Waterford site. Numerical results of thermal-plume temperatures for individual and combined operation of the three power stations were obtained for typical low river flow (200,000 cfs) and maximum station operating conditions. The predicted temperature distributions indicated that the surface jet discharge from Waterford Unit 3 would interact with the thermal plumes produced by the two fossil-fueled stations. The results also showed that heat recirculation between the discharge of an upstream fossil-fueled plant and the intake of Waterford Unit 3 is to be expected. However, the resulting combined temperature distributions were found to be well within the thermal standards established by the state of Louisiana

  13. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm, to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates, and to obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Society of Hypertension testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CR ratios. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are also narrower, with a lower SDE. The proposed approach combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates.
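
    The nonparametric bootstrap portion is the simplest piece to sketch. Below, a percentile confidence interval for the mean of a small, synthetic set of characteristic ratios; the values are invented, and the paper's full method additionally fits a GMM whose order is chosen by K-means:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical oscillometric characteristic ratios from a small cohort
cr = rng.normal(0.55, 0.08, size=30)

# nonparametric bootstrap: resample with replacement, recompute the mean
boot = np.array([rng.choice(cr, size=cr.size, replace=True).mean()
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean={cr.mean():.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

    Resampling sidesteps any distributional assumption on the ratios, which is why the NPB is attractive when only a small sample per patient group is available.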

  14. Application of Gaussian cubature to model two-dimensional population balances

    Directory of Open Access Journals (Sweden)

    Bałdyga Jerzy

    2017-09-01

    Full Text Available In many systems of engineering interest the moment transformation of population balance is applied. One of the methods to solve the transformed population balance equations is the quadrature method of moments. It is based on the approximation of the density function in the source term by the Gaussian quadrature so that it preserves the moments of the original distribution. In this work we propose another method to be applied to the multivariate population problem in chemical engineering, namely a Gaussian cubature (GC technique that applies linear programming for the approximation of the multivariate distribution. Examples of the application of the Gaussian cubature (GC are presented for four processes typical for chemical engineering applications. The first and second ones are devoted to crystallization modeling with direction-dependent two-dimensional and three-dimensional growth rates, the third one represents drop dispersion accompanied by mass transfer in liquid-liquid dispersions and finally the fourth case regards the aggregation and sintering of particle populations.
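
    The moment-preserving idea behind the quadrature method of moments can be seen in one dimension with NumPy's probabilists' Gauss-Hermite rule: an n-point rule reproduces the first 2n - 1 moments of the underlying normal density exactly. This is a generic illustration, not the paper's linear-programming cubature:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss

# 3-point probabilists' Gauss-Hermite rule
x, w = hermegauss(3)
w = w / w.sum()                     # normalize to a probability measure

# compare quadrature moments with the standard normal moments:
# E[X^k] = k! / (2^(k/2) (k/2)!) for even k, 0 for odd k
for k in range(6):
    quad = float(np.sum(w * x ** k))
    exact = 0 if k % 2 else factorial(k) // (2 ** (k // 2) * factorial(k // 2))
    print(k, round(quad, 10), exact)
```

    A multivariate cubature generalizes this: it seeks a small set of nodes and weights that preserve a chosen set of mixed moments of the population density, so the moment-transformed balance equations can be closed.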

  15. Model analysis of the chemical conversion of exhaust species in the expanding plumes of subsonic aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Moellhoff, M.; Hendricks, J.; Lippert, E.; Petry, H. [Koeln Univ. (Germany). Inst. fuer Geophysik und Meteorologie; Sausen, R. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Oberpfaffenhofen (Germany). Inst. fuer Physik der Atmosphaere

    1997-12-31

    A box model and two different one-dimensional models are used to investigate the chemical conversion of exhaust species in the dispersing plume of a subsonic aircraft flying at cruise altitude. The effect of varying daytime of release as well as the impact of changing dispersion time is studied with special respect to the aircraft induced O{sub 3} production. Effective emission amounts for consideration in mesoscale and global models are calculated. Simulations with modified photolysis rates are performed to show the sensitivity of the photochemistry to the occurrence of cirrus clouds. (author) 8 refs.

  17. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. Finally, for a metamodeling problem of the GERMINAL thermal-mechanical code, the advantage of the Kriging model with Gaussian processes over neural network methods is shown. (author) [fr
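
    The Maximum Likelihood estimator studied here can be sketched for a 1-D Gaussian process with a squared-exponential covariance. The domain, sample size, grid search, and nugget below are illustrative choices, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

def cov(x, ell, nugget=1e-6):
    # squared-exponential covariance with a small nugget for conditioning
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2) + nugget * np.eye(x.size)

# one realization of the process with true correlation length 1.5
x = np.sort(rng.uniform(0.0, 10.0, 60))
y = rng.multivariate_normal(np.zeros(x.size), cov(x, 1.5))

def nll(ell):
    # negative Gaussian log-likelihood of the observations (up to a constant)
    C = cov(x, ell)
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + y @ np.linalg.solve(C, y))

# maximum-likelihood estimate of the correlation length by grid search
grid = np.linspace(0.3, 4.0, 75)
ell_hat = grid[np.argmin([nll(e) for e in grid])]
print(ell_hat)
```

    A leave-one-out Cross Validation estimator would replace `nll` with a prediction-error criterion over held-out points; the thesis's point is that the two can disagree markedly when the covariance family is misspecified.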

  18. Nannofossils in 2011 El Hierro eruptive products reinstate plume model for Canary Islands

    Science.gov (United States)

    Zaczek, Kirsten; Troll, Valentin R.; Cachao, Mario; Ferreira, Jorge; Deegan, Frances M.; Carracedo, Juan Carlos; Soler, Vicente; Meade, Fiona C.; Burchardt, Steffi

    2015-01-01

    The origin and life cycle of ocean islands have been debated since the early days of Geology. In the case of the Canary archipelago, its proximity to the Atlas orogen led to initial fracture-controlled models for island genesis, while later workers cited a Miocene-Quaternary east-west age-progression to support an underlying mantle-plume. The recent discovery of submarine Cretaceous volcanic rocks near the westernmost island of El Hierro now questions this systematic age-progression within the archipelago. If a mantle-plume is indeed responsible for the Canaries, the onshore volcanic age-progression should be complemented by progressively younger pre-island sedimentary strata towards the west; however, direct age constraints for the westernmost pre-island sediments are lacking. Here we report on new age data obtained from calcareous nannofossils in sedimentary xenoliths erupted during the 2011 El Hierro events, which date the sub-island sedimentary rocks to between late Cretaceous and Pliocene in age. This age-range includes substantially younger pre-volcanic sedimentary rocks than the Jurassic to Miocene strata known from the older eastern islands and now reinstates the mantle-plume hypothesis as the most plausible explanation for Canary volcanism. The recently discovered Cretaceous submarine volcanic rocks in the region are, in turn, part of an older, fracture-related tectonic episode.

  19. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study...... the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximative LOO cross-validation estimators...

  20. Mathematic model analysis of Gaussian beam propagation through an arbitrary thickness random phase screen.

    Science.gov (United States)

    Tian, Yuzhen; Guo, Jin; Wang, Rui; Wang, Tingfeng

    2011-09-12

    In order to study the statistical properties of Gaussian beam propagation through an arbitrary thickness random phase screen for adaptive optics and laser communication applications in the laboratory, we establish mathematical models of the statistical quantities involved in the propagation process, based on the Rytov method and the thin phase screen model. Analytic results are developed for an arbitrary thickness phase screen based on the Kolmogorov power spectrum. The comparison between the arbitrary thickness phase screen and the thin phase screen shows that our results are better suited to describing the generalized case, especially the scintillation index.

  1. MODELS OF COVARIANCE FUNCTIONS OF GAUSSIAN RANDOM FIELDS ESCAPING FROM ISOTROPY, STATIONARITY AND NON NEGATIVITY

    Directory of Open Access Journals (Sweden)

    Pablo Gregori

    2014-03-01

    Full Text Available This paper presents a survey of recent advances in the modeling of space or space-time Gaussian Random Fields (GRF), tools of Geostatistics at hand for the understanding of special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariance between some pairs of locations is evident. We show some strategies for escaping these restrictions, on the basis of rich classes of well-known stationary or isotropic non-negative covariance models, and through suitable operations, like linear combinations, generalized means, or particular Fourier transforms.

  2. Descent and mixing of the overflow plume from Storfjord in Svalbard: an idealized numerical model study

    Directory of Open Access Journals (Sweden)

    I. Fer

    2008-05-01

    Full Text Available Storfjorden in the Svalbard Archipelago is a sill-fjord that produces significant volumes of dense, brine-enriched shelf water through ice formation. The dense water produced in the fjord overflows the sill and can reach deep into the Fram Strait. For conditions corresponding to a moderate ice production year, the pathway of the overflow, its descent and evolving water mass properties due to mixing are investigated for the first time using a high resolution 3-D numerical model. An idealized modeling approach forced by a typical annual cycle of buoyancy forcing due to ice production is chosen in a terrain-following vertical co-ordinate. Comparison with observational data, including hydrography, fine resolution current measurements and direct turbulence measurements using a microstructure profiler, gives confidence on the model performance. The model eddy diffusivity profiles contrasted to those inferred from the turbulence measurements give confidence on the skill of the Mellor Yamada scheme in representing sub-grid scale mixing for the Storfjorden overflow, and probably for gravity current modeling, in general. The Storfjorden overflow is characterized by low Froude number dynamics except at the shelf break where the plume narrows, accelerates with speed reaching 0.6 m s−1, yielding local Froude number in excess of unity. The volume flux of the plume increases by five-fold from the sill to downstream of the shelf-break. Rotational hydraulic control is not applicable for transport estimates at the sill using upstream basin information. To the leading order, geostrophy establishes the lateral slope of the plume interface at the sill. This allows for a transport estimate that is consistent with the model results by evaluating a weir relation at the sill.

  3. Discrimination of numerical proportions: A comparison of binomial and Gaussian models.

    Science.gov (United States)

    Raidvee, Aire; Lember, Jüri; Allik, Jüri

    2017-01-01

    Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles, internal noise vs. using only a fraction of available information, which are both plausible descriptions of visual perception.
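
    The Thurstonian-binomial account is easy to simulate: each of the N elements is registered independently with probability β, and the response is the majority among registered elements. A sketch with invented numbers (N = 33, a 18:15 color split, β = 0.7):

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_red, beta, trials = 33, 18, 0.7, 100_000   # true proportion 18/33

# each element is noticed with probability beta; compare registered counts
reg_red = rng.binomial(n_red, beta, trials)
reg_blue = rng.binomial(N - n_red, beta, trials)

# guess at random on ties
p_correct = np.mean(reg_red > reg_blue) + 0.5 * np.mean(reg_red == reg_blue)
print(p_correct)
```

    Sweeping β traces out a full psychometric function, which is what gets compared against the Gaussian-noise alternative under AIC.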

  4. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Science.gov (United States)

    McDowell, Ian C; Manandhar, Dinesh; Vockley, Christopher M; Schmid, Amy K; Reddy, Timothy E; Engelhardt, Barbara E

    2018-01-01

    Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  5. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Directory of Open Access Journals (Sweden)

    Ian C McDowell

    2018-01-01

    Full Text Available Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  6. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and the area has recently expanded greatly. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit accommodation of general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
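    The core idea of a Gaussian covariance graph model, encoding the zero pattern of the covariance matrix with an undirected graph G, can be illustrated with a toy example (the graph, dimension, and covariance values below are hypothetical, chosen only to show the sparsity constraint):

```python
import numpy as np

# Hypothetical 5-marker example: the edges of the undirected graph G give
# the allowed nonzero covariances between marker effects; every other
# off-diagonal entry is constrained to zero (the GCovGM sparsity pattern).
edges = {(0, 1), (1, 2), (3, 4)}

def graph_covariance(n, edges, diag=1.0, offdiag=0.3):
    """Build a covariance matrix whose zero pattern is encoded by G."""
    S = np.eye(n) * diag
    for i, j in edges:
        S[i, j] = S[j, i] = offdiag
    return S

S = graph_covariance(5, edges)
# A valid covariance matrix must be symmetric positive definite.
assert np.all(np.linalg.eigvalsh(S) > 0)
```

Estimation under such a model then amounts to fitting the free entries while holding the zeros fixed, which is what makes the approach tractable in high dimensions.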

  7. A regional scale model for ozone in the United States with subgrid representation of urban and power plant plumes

    International Nuclear Information System (INIS)

    Sillman, S.; Logan, J.A.; Wofsy, S.C.

    1990-01-01

    A new approach to modeling regional air chemistry is presented for application to industrialized regions such as the continental US. Rural chemistry and transport are simulated using a coarse grid, while chemistry and transport in urban and power plant plumes are represented by detailed subgrid models. Emissions from urban and power plant sources are processed in generalized plumes where chemistry and dilution proceed for 8-12 hours before mixing with air in a large resolution element. A realistic fraction of pollutants reacts under high-NOx conditions, and NOx is removed significantly before dispersal. Results from this model are compared with results from grid models that do not distinguish plumes and with observational data defining regional ozone distributions. Grid models with coarse resolution are found to artificially disperse NOx over rural areas, therefore overestimating rural levels of both NOx and O3. Regional net ozone production is too high in coarse grid models, because production of O3 is more efficient per molecule of NOx in the low-concentration regime of rural areas than in heavily polluted plumes from major emission sources. Ozone levels simulated by this model are shown to agree with observations in urban plumes and in rural regions. The model accurately reproduces average regional and peak ozone concentrations observed during a 4-day ozone episode. Computational costs for the model are reduced 25- to 100-fold compared to fine-mesh models.

  8. Primordial non-Gaussianities in single field inflationary models with non-trivial initial states

    Energy Technology Data Exchange (ETDEWEB)

    Bahrami, Sina; Flanagan, Éanna É., E-mail: sb933@cornell.edu, E-mail: eef3@cornell.edu [Department of Physics, Cornell University, Ithaca, NY 14853 (United States)

    2014-10-01

    We compute the non-Gaussianities that arise in single field, slow roll inflationary models arising from arbitrary homogeneous initial states, as well as subleading contributions to the power spectrum. Non-Bunch-Davies vacuum initial states can arise if the transition to the single field, slow roll inflation phase occurs only shortly before observable modes left the horizon. They can also arise from new physics at high energies that has been integrated out. Our general result for the bispectrum exhibits several features that were previously seen in special cases.

  9. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
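    As a rough illustration of the kind of randomness feature such a method relies on, byte-level Shannon entropy alone already separates random-looking (encrypted or compressed) streams from plaintext. This is a sketch only; the paper's pipeline combines many randomness features with l1-regularized logistic regression and an FGMM classifier:

```python
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of a byte stream: a simple
    randomness feature of the kind used to tell encrypted payloads
    from plaintext traffic (illustrative, not the paper's feature set)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Encrypted or compressed payloads approach 8 bits/byte; ASCII text is far lower.
text = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n" * 20
pseudo_random = bytes((i * 197 + 13) % 256 for i in range(4096))
```

A classifier would consume a vector of such features per stream; the sparse logistic regression step then selects which randomness statistics actually carry discriminative weight.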

  10. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by the minimization of the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, with PSNR improvements reaching 3 dB.
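    A minimal EM loop for a two-component, one-dimensional Gaussian mixture shows the E-step/M-step structure described above (a simplified stand-in for the article's block-level mixture with full covariance matrices):

```python
import numpy as np

def em_two_gaussians(x, iters=60):
    """Minimal EM for a two-component 1-D Gaussian mixture: alternating
    E-step (responsibilities) and M-step (weights, means, variances)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread initial means
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        d = x[:, None] - mu[None, :]
        pdf = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, then variances about new means
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(6.0, 1.0, 600)])
w, mu, var = em_two_gaussians(x)
```

Each iteration is guaranteed not to decrease the log-likelihood, which is why EM is the standard fit for mixture parameters in settings like this one.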

  11. The Gaussian copula model for the joint deficit index for droughts

    Science.gov (United States)

    Van de Vyver, H.; Van den Bergh, J.

    2018-06-01

    The characterization of droughts and their impacts is very dependent on the time scale that is involved. In order to obtain an overall drought assessment, the cumulative effects of water deficits over different times need to be examined together. For example, the recently developed joint deficit index (JDI) is based on multivariate probabilities of precipitation over various time scales from 1- to 12-months, and was constructed from empirical copulas. In this paper, we examine the Gaussian copula model for the JDI. We model the covariance across the temporal scales with a two-parameter function that is commonly used in the specific context of spatial statistics or geostatistics. The validity of the covariance models is demonstrated with long-term precipitation series. Bootstrap experiments indicate that the Gaussian copula model has advantages over the empirical copula method in the context of drought severity assessment: (i) it is able to quantify droughts outside the range of the empirical copula, (ii) provides adequate drought quantification, and (iii) provides a better understanding of the uncertainty in the estimation.
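    The construction can be sketched in a few lines: a two-parameter covariance function across the 1- to 12-month scales (a powered-exponential family is assumed here purely for illustration; the abstract only says the function is borrowed from spatial statistics), then a Gaussian copula sample obtained by mapping correlated normals through the standard normal CDF so every marginal is uniform:

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Two-parameter covariance across time scales 1..12 months (powered
# exponential: sill * exp(-(h/range)^power), an assumption for this sketch).
scales = np.arange(1, 13)
sill, corr_range, power = 1.0, 6.0, 1.5
H = np.abs(scales[:, None] - scales[None, :])
C = sill * np.exp(-(H / corr_range) ** power)

# Gaussian copula sample: correlated normals -> uniform(0,1) marginals,
# whose dependence structure across scales drives the joint deficit index.
rng = np.random.default_rng(1)
z = rng.multivariate_normal(np.zeros(12), C, size=20000)
u = np.vectorize(phi)(z)
```

Because the copula is parametric, probabilities can be evaluated outside the range of the observed data, which is the advantage (i) noted in the abstract over the empirical copula.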

  12. A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.

    Science.gov (United States)

    Zhao, Lei; Mi, Dong; Sun, Yeqing

    2017-05-07

    The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that typical ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's distribution of type A (N. A.). Considering that the Gaussian distribution can be regarded as an approximation of the N. A. in the high-flux case, a multitarget model based on the Gaussian distribution is proposed to describe cell inactivation effects under low linear energy transfer (LET) radiation at high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the linear-quadratic (LQ) model in describing the biological effects of low-LET radiation at high dose rate, and the parameter ratio in the present model can be used as an alternative indicator of the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
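    For orientation, the two reference survival models the paper compares against are easy to write down (the parameter values below are hypothetical, for illustration only):

```python
import numpy as np

d = np.linspace(0.0, 10.0, 101)   # dose axis, hypothetical units of Gy

def multitarget_poisson(d, d0=1.5, n=3):
    """Classical multitarget survival, S = 1 - (1 - exp(-D/D0))^n,
    built on Poisson hit statistics (the model the paper generalizes)."""
    return 1.0 - (1.0 - np.exp(-d / d0)) ** n

def linear_quadratic(d, alpha=0.2, beta=0.05):
    """Linear-quadratic survival, S = exp(-(alpha*D + beta*D^2))."""
    return np.exp(-(alpha * d + beta * d**2))

s_mt = multitarget_poisson(d)
s_lq = linear_quadratic(d)
```

The multitarget curve has a flat shoulder at low dose (zero initial slope for n > 1) while the LQ curve bends continuously; the paper's Gaussian-based variant replaces the Poisson hit-number distribution to better capture high-dose-rate, low-LET data.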

  13. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  14. Active space of pheromone plume and its relationship to effective attraction radius in applied models.

    Science.gov (United States)

    Byers, John A

    2008-09-01

    The release rate of a semiochemical lure that attracts flying insects has a specific effective attraction radius (EAR) that corresponds to the lure's orientation response strength. EAR is defined as the radius of a passive sphere that intercepts the same number of insects as a semiochemical-baited trap. It is estimated from the ratio of field catches in baited and unbaited traps and the interception area of the unbaited trap. EAR serves as a standardized method for comparing the attractive strengths of lures that is independent of population density. In two-dimensional encounter rate models that are used to describe insect mass trapping and mating disruption, a circular EAR (EARc) describes a key parameter that affects catch or influence by pheromone in the models. However, the spherical EAR, as measured in the field, should be transformed to an EARc for appropriate predictions in such models. The EARc is calculated as (pi/2 × EAR^2)/F(L), where F(L) is the effective thickness of the flight layer where the insect searches. F(L) was estimated from catches of insects (42 species in the orders Coleoptera, Lepidoptera, Diptera, Hemiptera, and Thysanoptera) on traps at various heights as reported in the literature. The EARc was further proposed as a simple but equivalent alternative to simulations of highly complex active-space plumes with variable response surfaces that have proven exceedingly difficult to quantify in nature. This hypothesis was explored in simulations where flying insects, represented as coordinate points, moved about in a correlated random walk in an area that contained a pheromone plume, represented as a sector of active space composed of a capture probability surface of variable complexity. In this plume model, catch was monitored at a constant density of flying insects and then compared to simulations in which a circular EARc was enlarged until an equivalent rate was caught. This demonstrated that there is a
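    The EARc conversion quoted in the abstract is a one-line formula:

```python
from math import pi

def ear_circular(ear, f_l):
    """EARc = (pi/2 * EAR^2) / F_L: convert the spherical effective
    attraction radius measured in the field to the circular EAR used in
    two-dimensional encounter-rate models (formula from the abstract;
    ear and f_l in the same length unit, result in that unit)."""
    return (pi / 2.0) * ear ** 2 / f_l
```

For example, a 2 m spherical EAR in a flight layer pi metres thick gives a 2 m circular EARc (hypothetical numbers, chosen so the arithmetic is easy to check).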

  15. Modelling the possible interaction between edge-driven convection and the Canary Islands mantle plume

    Science.gov (United States)

    Negredo, A. M.; Rodríguez-González, J.; Fullea, J.; Van Hunen, J.

    2017-12-01

    The close location between many hotspots and the edges of cratonic lithosphere has led to the hypothesis that these hotspots could be explained by small-scale mantle convection at the edge of cratons (Edge Driven Convection, EDC). The Canary Volcanic Province hotspot represents a paradigmatic example of this situation due to its close location to the NW edge of the African Craton. Geochemical evidence, prominent low seismic velocity anomalies in the upper and lower mantle, and the rough NE-SW age-progression of volcanic centers consistently point out to a deep-seated mantle plume as the origin of the Canary Volcanic Province. It has been hypothesized that the plume material could be affected by upper mantle convection caused by the thermal contrast between thin oceanic lithosphere and thick (cold) African craton. Deflection of upwelling blobs due to convection currents would be responsible for the broader and more irregular pattern of volcanism in the Canary Province compared to the Madeira Province. In this study we design a model setup inspired on this scenario to investigate the consequences of possible interaction between ascending mantle plumes and EDC. The Finite Element code ASPECT is used to solve convection in a 2D box. The compositional field and melt fraction distribution are also computed. Free slip along all boundaries and constant temperature at top and bottom boundaries are assumed. The initial temperature distribution assumes a small long-wavelength perturbation. The viscosity structure is based on a thick cratonic lithosphere progressively varying to a thin, or initially inexistent, oceanic lithosphere. The effects of assuming different rheologies, as well as steep or gradual changes in lithospheric thickness are tested. 
Modelling results show that a very thin oceanic lithosphere (models assuming temperature-dependent viscosity and large viscosity variations evolve to large-scale (upper mantle) convection cells, with upwelling of hot material being

  16. Coupled petrological-geodynamical modeling of a compositionally heterogeneous mantle plume

    Science.gov (United States)

    Rummel, Lisa; Kaus, Boris J. P.; White, Richard W.; Mertz, Dieter F.; Yang, Jianfeng; Baumann, Tobias S.

    2018-01-01

    Self-consistent geodynamic modeling that includes melting is challenging as the chemistry of the source rocks continuously changes as a result of melt extraction. Here, we describe a new method to study the interaction between physical and chemical processes in an uprising heterogeneous mantle plume by combining a geodynamic code with a thermodynamic modeling approach for magma generation and evolution. We pre-computed hundreds of phase diagrams, each of them for a different chemical system. After melt is extracted, the phase diagram with the closest bulk rock chemistry to the depleted source rock is updated locally. The petrological evolution of rocks is tracked via evolving chemical compositions of source rocks and extracted melts using twelve oxide compositional parameters. As a result, a wide variety of newly generated magmatic rocks can in principle be produced from mantle rocks with different degrees of depletion. The results show that a variable geothermal gradient, the amount of extracted melt and plume excess temperature affect the magma production and chemistry by influencing decompression melting and the depletion of rocks. Decompression melting is facilitated by a shallower lithosphere-asthenosphere boundary and an increase in the amount of extracted magma is induced by a lower critical melt fraction for melt extraction and/or higher plume temperatures. Increasing critical melt fractions activates the extraction of melts triggered by decompression at a later stage and slows down the depletion process from the metasomatized mantle. Melt compositional trends are used to determine melting related processes by focusing on K2O/Na2O ratio as indicator for the rock type that has been molten. Thus, a step-like-profile in K2O/Na2O might be explained by a transition between melting metasomatized and pyrolitic mantle components reproducible through numerical modeling of a heterogeneous asthenospheric mantle source. 
A potential application of the developed method

  17. Effect of grid resolution and subgrid assumptions on the model prediction of a reactive buoyant plume under convective conditions

    International Nuclear Information System (INIS)

    Chock, D.P.; Winkler, S.L.; Pu Sun

    2002-01-01

    We have introduced a new and elaborate approach to understand the impact of grid resolution and subgrid chemistry assumptions on the grid-model prediction of species concentrations for a system with highly non-homogeneous chemistry - a reactive buoyant plume immediately downwind of the stack in a convective boundary layer. The Parcel-Grid approach was used to describe both the air parcel turbulent transport and chemistry. This approach allows an identical transport process for all simulations. It also allows a description of subgrid chemistry. The ambient and plume parcel transport follows the description of Luhar and Britter (Atmos. Environ. 23 (1989) 1911; 26A (1992) 1283). The chemistry follows that of the Carbon-Bond mechanism. Three different grid sizes were considered: fine, medium and coarse, together with three different subgrid chemistry assumptions: micro-scale or individual parcel, tagged-parcel (plume and ambient parcels treated separately), and untagged-parcel (plume and ambient parcels treated indiscriminately). Reducing the subgrid information is not necessarily similar to increasing the model grid size. In our example, increasing the grid size leads to a reduction in the suppression of ozone in the presence of a high-NOx stack plume, and a reduction in the effectiveness of the NOx-inhibition effect. On the other hand, reducing the subgrid information (by using the untagged-parcel assumption) leads to an increase in ozone reduction and an enhancement of the NOx-inhibition effect insofar as the ozone extremum is concerned. (author)

  18. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....

  19. Numerical analysis and modeling of plume meandering in passive scalar dispersion downstream of a wall-mounted cube

    International Nuclear Information System (INIS)

    Rossi, R.; Iaccarino, G.

    2013-01-01

    Highlights: • Scalar dispersion downstream of a wall-mounted cube is examined by DNS and RANS models. • Vortex-shedding and plume meandering are established in the wake of the cube. • Low-frequency modulation is observed in the vortex-shedding and plume meandering. • Counter-gradient transport takes place in the streamwise component of the scalar flux. • Concentration decay and plume spread improved by the unsteady RANS model. -- Abstract: A DNS database is employed to examine the onset of plume meandering downstream of a wall-mounted cube and to address the impact of large-scale unsteadiness in modeling dispersion using the RANS equations. The cube is immersed in a uniform stream where the thin boundary-layer developing over the flat plate is responsible for the onset of vortex-shedding in the wake of the bluff-body. Spectra of velocity and concentration fluctuations exhibit a prominent peak in the energy content at the same frequency, showing that the plume meandering is established by the action of the vortex-shedding. The vortex-shedding and plume meandering display a low-frequency modulation where coherent fluctuations are suppressed at times with a quasi-regular period. The onset of the low-frequency modulation is indicated by a secondary peak in the energy spectrum and confirmed by the autocorrelation of velocity and scalar fluctuations. Unsteady RANS simulations performed with the v2-f model are able to detect the onset of the plume meandering and show remarkable improvement of the predicted decay rate and rate of spread of the scalar plume when compared to steady RANS solutions. By computing explicitly the periodic component of velocity and scalar fluctuations, the unsteady v2-f model is able to provide a representation of scalar flux components consistent with DNS statistics, where the counter-gradient transport mechanism that takes place in the streamwise component is also captured by URANS results. Nonetheless, the agreement with DNS

  20. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo

    2016-02-03

    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes’ rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a Particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) a standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performances. In this respect, the best results are obtained with the proposed adaptive clustering scheme.
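    The particle-filter-like weight update in step (ii) can be sketched for a scalar state (the Kalman-like moment update of each component and the clustering step itself are omitted for brevity):

```python
import numpy as np

def gm_weight_update(weights, means, variances, y, obs_var):
    """Bayes-rule reweighting of Gaussian mixture components given a
    scalar observation y with Gaussian noise variance obs_var: each
    component's weight is multiplied by its marginal likelihood of y,
    then the weights are renormalized."""
    s = variances + obs_var                          # innovation variances
    lik = np.exp(-0.5 * (y - means) ** 2 / s) / np.sqrt(2 * np.pi * s)
    w = weights * lik
    return w / w.sum()

# Two equally weighted components at 0 and 4; an observation near 4
# shifts almost all posterior weight onto the second component.
w = gm_weight_update(np.array([0.5, 0.5]), np.array([0.0, 4.0]),
                     np.array([1.0, 1.0]), y=3.5, obs_var=0.5)
```

This is the step whose sensitivity to the prior GM (how many components, where they sit) the paper's clustering experiments probe.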

  1. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs), and it is indispensable for ensuring safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Furthermore, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both LiFePO4 and LiMn2O4 battery datasets.
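    The AIC-based choice of hysteresis order in step (2) can be sketched as follows. The log-likelihoods and parameter counts below are hypothetical, chosen only to illustrate how a first-order model can win despite a slightly lower likelihood than a higher-order one:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion, AIC = 2k - 2 ln L; lower is better,
    penalizing extra parameters against improved fit."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical (log-likelihood, parameter count) for hysteresis orders 0..2
# of a combined battery model -- illustrative numbers, not from the paper.
candidates = {0: (-520.4, 4), 1: (-468.1, 6), 2: (-466.9, 8)}
scores = {order: aic(ll, k) for order, (ll, k) in candidates.items()}
best_order = min(scores, key=scores.get)
```

Here order 2 fits marginally better than order 1 but pays a two-parameter penalty, so AIC selects order 1, mirroring the paper's conclusion that the first-order hysteresis model offers the best complexity/precision tradeoff.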

  2. HGSYSTEM/UF6 model enhancements for plume rise and dispersion around buildings, lift-off of buoyant plumes, and robustness of numerical solver

    International Nuclear Information System (INIS)

    Hanna, S.R.; Chang, J.C.

    1997-01-01

    The HGSYSTEM/UF6 model was developed for use in preparing Safety Analysis Reports (SARs) by estimating the consequences of possible accidental releases of UF6 to the atmosphere at the gaseous diffusion plants (GDPs) located in Portsmouth, Ohio, and Paducah, Kentucky. Although the latter report carries a 1996 date, the work that is described was completed in late 1994. When that report was written, the primary release scenarios of interest were thought to be gas pipeline and liquid tank ruptures over open terrain away from the influence of buildings. However, upon further analysis of possible release scenarios, the developers of the SARs decided it was necessary to also consider accidental releases within buildings. Consequently, during the fall and winter of 1995-96, modules were added to HGSYSTEM/UF6 to account for flow and dispersion around buildings. The original HGSYSTEM/UF6 model also contained a preliminary method for accounting for the possible lift-off of ground-based buoyant plumes. An improved model and a new set of wind tunnel data for buoyant plumes trapped in building recirculation cavities have become available that appear to be useful for revising the lift-off algorithm and modifying it for use in recirculation cavities. This improved lift-off model has been incorporated in the updated modules for dispersion around buildings

  3. Hybrid 3D model for the interaction of plasma thruster plumes with nearby objects

    Science.gov (United States)

    Cichocki, Filippo; Domínguez-Vázquez, Adrián; Merino, Mario; Ahedo, Eduardo

    2017-12-01

    This paper presents a hybrid particle-in-cell (PIC) fluid approach to model the interaction of a plasma plume with a spacecraft and/or any nearby object. Ions and neutrals are modeled with a PIC approach, while electrons are treated as a fluid. After a first iteration of the code, the domain is split into quasineutral and non-neutral regions, based on non-neutrality criteria, such as the relative charge density and the Debye length-to-cell size ratio. At the material boundaries of the former quasineutral region, a dedicated algorithm ensures that the Bohm condition is met. In the latter non-neutral regions, the electron density and electric potential are obtained by solving the coupled electron momentum balance and Poisson equations. Boundary conditions for both the electric current and potential are finally obtained with a plasma sheath sub-code and an equivalent circuit model. The hybrid code is validated by applying it to a typical plasma plume-spacecraft interaction scenario, and the physics and capabilities of the model are finally discussed.

  4. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    Full Text Available A mixed traffic flow feature is present on urban arterials in China due to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
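    A single speed-truncated Gaussian already reproduces the basic dispersion mechanism, namely that a spread of vehicle speeds spreads arrival times over the link. This is a sketch under assumed parameters; the paper fits a truncated Gaussian *mixture* via EM to separate car and bus speed populations:

```python
import numpy as np

def truncated_gaussian_speeds(rng, n, mu, sigma, vmin, vmax):
    """Rejection-sample n speeds from a Gaussian truncated to [vmin, vmax]
    (illustrative; the paper uses a mixture of such components)."""
    out = np.empty(0)
    while out.size < n:
        v = rng.normal(mu, sigma, n)
        out = np.concatenate([out, v[(v >= vmin) & (v <= vmax)]])
    return out[:n]

rng = np.random.default_rng(42)
link = 500.0                     # metres between adjacent intersections (assumed)
v = truncated_gaussian_speeds(rng, 5000, mu=12.0, sigma=3.0, vmin=5.0, vmax=20.0)
travel_time = link / v           # seconds; arrival = departure + travel_time,
                                 # so the speed spread disperses the platoon
```

Convolving the upstream departure profile with the resulting travel-time distribution gives the downstream arrival profile, which is the relationship the MPFDM formalizes.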

  5. Inverse modelling of atmospheric tracers: non-Gaussian methods and second-order sensitivity analysis

    Directory of Open Access Journals (Sweden)

    M. Bocquet

    2008-02-01

    Full Text Available To begin, recent techniques devoted to the reconstruction of sources of an atmospheric tracer at continental scale are introduced. A first method is based on the principle of maximum entropy on the mean and is briefly reviewed here. A second approach, which has not been applied in this field yet, is based on an exact Bayesian approach, through a maximum a posteriori estimator. The methods share common grounds, and both perform equally well in practice. When specific prior hypotheses on the sources are taken into account, such as positivity or boundedness, both methods lead to purposefully devised cost functions. These cost functions are not necessarily quadratic because the underlying assumptions are not Gaussian. As a consequence, several mathematical tools developed in data assimilation on the basis of quadratic cost functions in order to establish a posteriori analyses need to be extended to this non-Gaussian framework. Concomitantly, the second-order sensitivity analysis needs to be adapted, as do the computations of the averaging kernels of the source and of the errors obtained in the reconstruction. All of these developments are applied to a real case of tracer dispersion: the European Tracer Experiment (ETEX). Comparisons are made between a least-squares cost function (similar to the so-called 4D-Var approach) and a cost function which is not based on Gaussian hypotheses. In addition, the information content of the observations used in the reconstruction is computed and studied on the application case. A connection with the degrees of freedom for signal is also established. As a by-product of these methodological developments, conclusions are drawn on the information content of the ETEX dataset as seen from the inverse modelling point of view.
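A minimal sketch of why a positivity prior takes the problem beyond quadratic cost functions: with a synthetic (assumed) linear source-receptor model, the positivity-constrained reconstruction is no longer an unconstrained least-squares problem. Here non-negative least squares (NNLS) stands in for the purpose-built cost functions discussed above.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
# Hypothetical linear observation model: y = H s + noise, with true source s >= 0.
H = rng.random((20, 8))                      # assumed source-receptor sensitivities
s_true = np.array([0.0, 2.0, 0.0, 1.5, 0.0, 0.0, 3.0, 0.0])
y = H @ s_true + rng.normal(0, 0.01, 20)

# Unconstrained least squares (quadratic cost, Gaussian prior) may go negative;
# enforcing positivity yields a non-quadratic problem, solved here with NNLS.
s_ls, *_ = np.linalg.lstsq(H, y, rcond=None)
s_pos, _ = nnls(H, y)

print(s_pos.min() >= 0.0)  # True: positivity enforced
```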

  6. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d×10^3) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
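The emulator-based sensitivity analysis can be sketched as follows, with a cheap analytic stand-in for the vascular model and a pick-freeze Sobol estimator evaluated on the Gaussian process surrogate. The test function, design sizes, and kernel are illustrative assumptions, not those of the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def model(x):
    # Cheap analytic stand-in for the expensive 1D vascular model.
    return x[:, 0] + 0.3 * x[:, 1] ** 2

# Train the emulator on a small design of "mechanistic runs" (O(d) in spirit).
X_train = rng.random((30, 2))
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-6,
                              normalize_y=True).fit(X_train, model(X_train))

# Pick-freeze estimate of the first-order Sobol index of input 0,
# evaluated on the emulator instead of the model itself.
N = 5000
A, B = rng.random((N, 2)), rng.random((N, 2))
AB0 = np.column_stack([B[:, 0], A[:, 1]])   # column 0 swapped in from B
fA, fB, fAB0 = (gp.predict(Z) for Z in (A, B, AB0))
S0 = np.mean(fB * (fAB0 - fA)) / np.var(fA)
print(round(S0, 2))  # close to the analytic value, about 0.91
```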

  7. Segmentation of Concealed Objects in Passive Millimeter-Wave Images Based on the Gaussian Mixture Model

    Science.gov (United States)

    Yu, Wangyang; Chen, Xiangguang; Wu, Lei

    2015-04-01

    Passive millimeter wave (PMMW) imaging has become one of the most effective means to detect objects concealed under clothing. Due to the limitations of the available hardware and the inherent physical properties of PMMW imaging systems, images often exhibit poor contrast and low signal-to-noise ratios. Thus, it is difficult to achieve ideal results using a general segmentation algorithm. In this paper, an advanced Gaussian Mixture Model (GMM) algorithm for the segmentation of concealed objects in PMMW images is presented. Our work builds on the fact that the GMM is a parametric statistical model often used to characterize the statistical behavior of images. Our approach is three-fold: First, we remove the noise from the image using both a notch reject filter and a total variation filter. Next, we use an adaptive parameter initialization GMM algorithm (APIGMM) to model the histogram of the images; the APIGMM provides an initial number of Gaussian components and starts with more appropriate parameters. A Bayesian decision rule is employed to separate the pixels of concealed objects from other areas. Finally, the confidence interval (CI) method, alongside local gradient information, is used to extract the concealed objects. The proposed hybrid segmentation approach detects the concealed objects more accurately, even compared to two other state-of-the-art segmentation methods.
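The core GMM segmentation step (without the filtering, APIGMM initialization, or CI refinement stages) can be sketched on a synthetic image; the component count and intensity levels below are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic stand-in for a denoised PMMW image: dim background, brighter object.
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 25:45] = rng.normal(0.7, 0.05, (20, 20))

# Fit a 2-component GMM to the pixel intensities, then apply the Bayesian
# decision rule: assign each pixel to its most probable component.
pix = img.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(pix)
labels = gmm.predict(pix).reshape(img.shape)
obj = labels == int(np.argmax(gmm.means_.ravel()))  # brighter component = object
print(obj[30, 35], obj[5, 5])  # True False
```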

  8. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    Science.gov (United States)

    Khatab, Khaled

    2010-04-01

    Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on childhood malnutrition in one of the largest developing countries, Egypt. This study examined the association between bio-demographic and socioeconomic determinants and malnutrition in children less than 5 years of age, using the 2003 Demographic and Health Survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators, in which the three measurements of the malnutrition status of children are taken as indicators for the latent variable "nutritional status".

  9. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    International Nuclear Information System (INIS)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L.; Fouque, A.L.; Ciuciu, Ph.; Risser, L.

    2009-01-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing works done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection-estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  10. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    Science.gov (United States)

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

    In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using both weighted least squares and maximum likelihood estimation methods, as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps show clearly the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
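Kriging with a covariate can be sketched as regression kriging: a linear trend on the distance-to-diffuser covariate plus a Matérn Gaussian process on the residuals. The data, trend coefficients, and kernel settings below are invented for illustration and are not the study's values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(4)
# Hypothetical AUV salinity samples: trend with distance to diffuser + noise.
xy = rng.uniform(0, 100, (80, 2))                  # east, north (m)
dist = np.hypot(xy[:, 0] - 50, xy[:, 1] - 50)      # covariate: distance to diffuser
sal = 30.0 + 0.04 * dist + rng.normal(0, 0.1, 80)  # fresher effluent near the source

# Regression kriging: linear trend on the covariate, Matern GP on the residuals.
trend = LinearRegression().fit(dist[:, None], sal)
resid = sal - trend.predict(dist[:, None])
gp = GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=0.01).fit(xy, resid)

# Predict at the diffuser location (50, 50): trend + kriged residual.
pred = trend.predict(np.array([[0.0]]))[0] + gp.predict(np.array([[50.0, 50.0]]))[0]
print(round(pred, 1))  # close to the trend value of 30 at the diffuser
```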

  11. Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model

    KAUST Repository

    Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar

    2016-01-01

    Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
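A one-input sketch of the polynomial chaos machinery: project a toy model with a standard normal input onto probabilists' Hermite polynomials and read the output mean and variance off the chaos coefficients. The study's surrogate is multi-dimensional and built from ensemble runs; the toy model here is an assumption.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Toy uncertain-input model: y = g(xi), with xi ~ N(0, 1).
g = lambda xi: np.exp(0.3 * xi)

# Project g onto probabilists' Hermite polynomials He_k (degree <= 4) using
# Gauss-Hermite quadrature: c_k = E[g(xi) * He_k(xi)] / k!.
nodes, w = He.hermegauss(20)
w = w / np.sqrt(2 * np.pi)                 # normalize to the N(0, 1) measure
coeffs = [np.sum(w * g(nodes) * He.hermeval(nodes, np.eye(5)[k])) / factorial(k)
          for k in range(5)]

# Output mean and variance follow directly from the chaos coefficients.
pc_mean = coeffs[0]
pc_var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, 5))
print(round(pc_mean, 3), round(pc_var, 3))  # close to the exact lognormal moments
```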

  12. Non-Gaussianity and statistical anisotropy from vector field populated inflationary models

    CERN Document Server

    Dimastrogiovanni, Emanuela; Matarrese, Sabino; Riotto, Antonio

    2010-01-01

    We present a review of vector field models of inflation and, in particular, of the statistical anisotropy and non-Gaussianity predictions of models with SU(2) vector multiplets. Non-Abelian gauge groups introduce a richer set of predictions than the Abelian ones, mostly because of the presence of vector field self-interactions. Primordial vector fields can violate isotropy, leaving their imprint in the comoving curvature fluctuations zeta at late times. We provide the analytic expressions of the correlation functions of zeta up to fourth order and an analysis of their amplitudes and shapes. The statistical anisotropy signatures expected in these models are important and, potentially, the anisotropic contributions to the bispectrum and the trispectrum can overcome the isotropic parts.

  13. Sensitivity analysis of an operational advanced Gaussian model to different turbulent regimes

    International Nuclear Information System (INIS)

    Mangia, C.; Rizza, U.; Tirabassi, T.

    1998-01-01

    A non-reactive air pollution model evaluating ground level concentration is presented. It relies on a new Gaussian formulation (Lupini, R. and Tirabassi, T., J. Appl. Meteor., 20 (1981) 565-570; Tirabassi, T. and Rizza, U., Atmos. Environ., 28 (1994) 611-615) for transport and vertical diffusion in the Atmospheric Boundary Layer (ABL). In this formulation, the source height is replaced by a virtual height expressed by simple functions of meteorological variables. The model accepts a general profile of wind u(z) and eddy diffusivity coefficient Kz. The lateral dispersion coefficient is based on Taylor's theory (Taylor, G. I., Proc. London Math. Soc., 20 (1921) 196-204). The turbulence in the ABL is subdivided into various regimes, each characterized by different parameters for length and velocity scales. The model performances under unstable conditions have been tested utilizing two different data sets
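For reference, the classical Gaussian plume ground-level concentration that formulations like the above generalize can be written down directly. The linear sigma growth used here is a simplification; operational models use stability-class parameterizations for the dispersion coefficients.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, sy_a=0.08, sz_a=0.06):
    """Gaussian plume concentration with ground reflection (g/m^3).

    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind distance (m), z: receptor height (m), h: effective stack
    height (m). sigma_y and sigma_z grow linearly with x here -- an assumed
    simplification of the stability-class formulas.
    """
    sy, sz = sy_a * x, sz_a * x
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Ground-level centreline concentration 500 m downwind of a 50 m stack.
c = gaussian_plume(q=100.0, u=4.0, x=500.0, y=0.0, z=0.0, h=50.0)
print(c > 0)  # True
```

The concentration is maximal on the plume centreline and symmetric in the crosswind coordinate, as expected from the formula.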

  14. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us the advantage of fitting the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  15. Smoke plume trajectory from in-situ burning of crude oil: complex terrain modeling

    International Nuclear Information System (INIS)

    McGrattan, K.

    1997-01-01

    Numerical models have been used to predict the concentration of particulate matter or other combustion products downwind from a proposed in-situ burning of an oil spill. One of the models used was the National Institute of Standards and Technology (NIST) model, ALOFT (A Large Outdoor Fire plume Trajectory), which is based on the conservation equations that govern the introduction of hot gases and particulate matter into the atmosphere. By using a model based on fundamental equations, it becomes a relatively simple matter to simulate smoke dispersal flow patterns, and to compute the solution to the equations of motion that govern the transport of pollutants in the lower atmosphere at a resolution that is comparable to that of the underlying terrain data. 9 refs., 2 tabs., 5 figs

  16. Characterization and modeling of turbidity density plume induced into stratified reservoir by flood runoffs.

    Science.gov (United States)

    Chung, S W; Lee, H S

    2009-01-01

    In monsoon climate areas, turbidity flows typically induced by flood runoff cause numerous environmental impacts, such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced into a stratified reservoir, using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three different turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional laterally averaged hydrodynamic and transport model, and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to account for vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that alterations of the withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but its effect on the control of release water quality could be trivial.

  17. Airborne Detection and Dynamic Modeling of Carbon Dioxide and Methane Plumes

    Science.gov (United States)

    Jacob, Jamey; Mitchell, Taylor; Whyte, Seabrook

    2015-11-01

    To facilitate safe storage of greenhouse gases such as CO2 and CH4, airborne monitoring is investigated. Conventional soil gas monitoring has difficulty distinguishing gas flux signals caused by leakage from those associated with meteorologically driven changes. A low-cost, lightweight sensor system has been developed and implemented onboard a small unmanned aircraft that measures gas concentration and is combined with other atmospheric diagnostics, including thermodynamic data and velocity from hot-wire and multi-hole probes. To characterize the system behavior and verify its effectiveness, field tests have been conducted over controlled rangeland burns and over simulated leaks. In the former case, since fire produces carbon dioxide over a large area, this was an opportunity to test in an environment that, while only loosely similar to a carbon sequestration leak source, exhibits interesting plume behavior. In the simulated field tests, compressed gas tanks are used to mimic leaks and generate gaseous plumes. Since the sensor response time is a function of vehicle airspeed, dynamic calibration models are required to determine the accurate location of gas concentration in (x, y, z, t). Results are compared with simulations using combined flight and atmospheric dynamic models. Supported by Department of Energy Award DE-FE0012173.

  18. The impact from emitted NOx and VOC in an aircraft plume. Model results for the free troposphere

    Energy Technology Data Exchange (ETDEWEB)

    Pleijel, K.

    1998-04-01

    The chemical fate of gaseous species in a specific aircraft plume is investigated using an expanding box model. The model treats the gas-phase chemical reactions in detail, while other parameters are subject to a high degree of simplification. Model simulations were carried out in a plume up to an age of three days. The role of emitted VOC, NOx and CO, as well as of background concentrations of VOC, NOx and ozone, in aircraft plume chemistry was investigated. Background concentrations were varied within a span of values measured in the free troposphere. High background concentrations of VOC were found to double the average plume production of ozone and organic nitrates. In a high-NOx environment the plume production of ozone and organic nitrates decreased by around 50%. The production of nitric acid was found to be less sensitive to background concentrations of VOC, and increased by up to 50% in a high-NOx environment. Emitted NOx was the main cause of the plume production of ozone, nitric acid and organic nitrates. The ozone production during the first hours is determined by the relative amount of NO2 in the NOx emissions. The impact from emitted VOC was, in relative terms, up to 20% of the ozone production and 65% of the production of organic nitrates. The strongest relative influence from VOC was found in an environment characterized by low VOC and high NOx background concentrations, where the absolute peak production was lower than in the other scenarios. The effect of emitting VOC and NOx at the same time added around 5% for ozone, 15% for nitric acid and 10% for organic nitrates to the plume production caused by NOx and VOC when emitted separately. 47 refs, 15 figs, 4 tabs

  19. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne Schoenmakers

    2015-01-01

    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  20. Thermal time constant: optimising the skin temperature predictive modelling in lower limb prostheses using Gaussian processes.

    Science.gov (United States)

    Mathur, Neha; Glesk, Ivan; Buis, Arjan

    2016-06-01

    Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is due to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm, Gaussian processes, is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian process technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.

  1. Multi-scale Modeling of Power Plant Plume Emissions and Comparisons with Observations

    Science.gov (United States)

    Costigan, K. R.; Lee, S.; Reisner, J.; Dubey, M. K.; Love, S. P.; Henderson, B. G.; Chylek, P.

    2011-12-01

    The Remote Sensing Verification Project (RSVP) test-bed located in the Four Corners region of Arizona, Utah, Colorado, and New Mexico offers a unique opportunity to develop new approaches for estimating emissions of CO2. Two major power plants located in this area produce very large signals of co-emitted CO2 and NO2 in this rural region. In addition to the Environmental Protection Agency (EPA) maintaining Continuous Emissions Monitoring Systems (CEMS) on each of the power plant stacks, the RSVP program has deployed an array of in-situ and remote sensing instruments, which provide both point and integrated measurements. To aid in the synthesis and interpretation of the measurements, a multi-scale atmospheric modeling approach is implemented, using two atmospheric numerical models: the Weather Research and Forecasting Model with chemistry (WRF-Chem; Grell et al., 2005) and the HIGRAD model (Reisner et al., 2003). The high fidelity HIGRAD model incorporates a multi-phase Lagrangian particle based approach to track individual chemical species of stack plumes at ultra-high resolution, using an adaptive mesh. It is particularly suited to model buoyancy effects and entrainment processes at the edges of the power plant plumes. WRF-Chem is a community model that has been applied to a number of air quality problems and offers several physical and chemical schemes that can be used to model the transport and chemical transformation of the anthropogenic plumes out of the local region. Multiple nested grids employed in this study allow the model to incorporate atmospheric variability ranging from synoptic scales to micro-scales (~200 m), while including locally developed flows influenced by the nearby complex terrain of the San Juan Mountains. The simulated local atmospheric dynamics are provided to force the HIGRAD model, which links mesoscale atmospheric variability to the small-scale simulation of the power plant plumes. We will discuss how these two models are applied and compared with the observations.

  2. A new conceptual model for whole mantle convection and the origin of hotspot plumes

    Science.gov (United States)

    Yoshida, Masaki

    2014-08-01

    A new conceptual model of mantle convection is constructed for consideration of the origin of hotspot plumes, using recent evidence from seismology, high-pressure experiments, geodynamic modeling, geoid inversion studies, and post-glacial rebound analyses. This conceptual model delivers several key points. Firstly, some of the small-scale mantle upwellings observed as hotspots on the Earth's surface originate at the base of the mantle transition zone (MTZ), in which Archean granitic continental crust material (TTG; tonalite-trondhjemite-granodiorite) with abundant radiogenic elements has accumulated. Secondly, the TTG crust and the subducted oceanic crust that have accumulated at the base of the MTZ could act as thermal or mechanical insulators, leading to the formation of a hot and less viscous layer just beneath the MTZ, which may enhance the instability of plume generation at the base of the MTZ. Thirdly, the origin of some hotspot plumes is isolated from the large low shear-wave velocity provinces (LLSVPs) under Africa and the South Pacific. Because a planetary-scale trench system surrounding a “Pangean cell” has been spatially stable throughout the Phanerozoic, a large amount of the oceanic crustal layer is likely to be trapped in the MTZ under the Pangean cell. Therefore, under Africa, almost all of the hotspot plumes originate from the base of the MTZ, where a large amount of TTG and/or oceanic crust has accumulated. This conceptual model may explain the fact that almost all the hotspots around Africa are located on margins above the African LLSVP. It is also considered that some of the hotspot plumes under the South Pacific thread through the TTG/oceanic crusts accumulated around the bottom of the MTZ, and some have their roots in the South Pacific LLSVP while others originate from the MTZ.

  3. The tropospheric processing of acidic gases and hydrogen sulphide in volcanic gas plumes as inferred from field and model investigations

    Directory of Open Access Journals (Sweden)

    A. Aiuppa

    2007-01-01

    Full Text Available Improving the constraints on the atmospheric fate and depletion rates of acidic compounds persistently emitted by non-erupting (quiescent) volcanoes is important for quantitatively predicting the environmental impact of volcanic gas plumes. Here, we present new experimental data coupled with modelling studies to investigate the chemical processing of acidic volcanogenic species during tropospheric dispersion. Diffusive tube samplers were deployed at Mount Etna, a very active open-conduit basaltic volcano in eastern Sicily, and Vulcano Island, a closed-conduit quiescent volcano in the Aeolian Islands (northern Sicily). Sulphur dioxide (SO2), hydrogen sulphide (H2S), hydrogen chloride (HCl) and hydrogen fluoride (HF) concentrations in the volcanic plumes (typically several minutes to a few hours old) were repeatedly determined at distances from the summit vents ranging from 0.1 to ~10 km, and under different environmental conditions. At both volcanoes, acidic gas concentrations were found to decrease exponentially with distance from the summit vents (e.g., SO2 decreases from ~10 000 μg/m3 at 0.1 km from Etna's vents down to ~7 μg/m3 at ~10 km distance), reflecting the atmospheric dilution of the plume within the acid gas-free background troposphere. Conversely, SO2/HCl, SO2/HF, and SO2/H2S ratios in the plume showed no systematic changes with plume aging, and fit source compositions within analytical error. Assuming that SO2 losses by reaction are small during short-range atmospheric transport within quiescent (ash-free) volcanic plumes, our observations suggest that, for these short transport distances, atmospheric reactions for H2S and halogens are also negligible. The one-dimensional model MISTRA was used to simulate quantitatively the evolution of halogen and sulphur compounds in the plume of Mt. Etna. Model predictions support the hypothesis of minor HCl chemical processing during plume transport, at least in cloud-free conditions.

  4. Bayesian Sensitivity Analysis of a Cardiac Cell Model Using a Gaussian Process Emulator

    Science.gov (United States)

    Chang, Eugene T Y; Strong, Mark; Clayton, Richard H

    2015-01-01

    Models of electrical activity in cardiac cells have become important research tools as they can provide a quantitative description of detailed and integrative physiology. However, cardiac cell models have many parameters, and how uncertainties in these parameters affect the model output is difficult to assess without undertaking large numbers of model runs. In this study we show that a surrogate statistical model of a cardiac cell model (the Luo-Rudy 1991 model) can be built using Gaussian process (GP) emulators. Using this approach we examined how eight outputs describing the action potential shape and action potential duration restitution depend on six inputs, which we selected to be the maximum conductances in the Luo-Rudy 1991 model. We found that the GP emulators could be fitted to a small number of model runs, and behaved as would be expected based on the underlying physiology that the model represents. We have shown that an emulator approach is a powerful tool for uncertainty and sensitivity analysis in cardiac cell models. PMID:26114610

  5. Efficient Blind System Identification of Non-Gaussian Auto-Regressive Models with HMM Modeling of the Excitation

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, Søren Vang

    2007-01-01

    We propose two blind system identification methods that exploit the underlying dynamics of non-Gaussian signals. The two signal models to be identified are: an Auto-Regressive (AR) model driven by a discrete-state Hidden Markov process, and the same model whose output is perturbed by white Gaussian noise ... outputs. The signal models are general and suitable for numerous important signals, such as speech signals and base-band communication signals. Applications to speech analysis and blind channel equalization are given to exemplify the efficiency of the new methods.
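The first signal model, an AR process driven by a discrete-state hidden Markov excitation, can be simulated in a few lines. The transition probabilities, emission amplitudes, and AR coefficients below are assumed for illustration; the paper's task is the inverse problem of identifying such parameters blindly.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state hidden Markov excitation (e.g., silence vs pulse) driving an AR(2).
P = np.array([[0.95, 0.05],    # assumed state transition matrix
              [0.50, 0.50]])
levels = np.array([0.0, 1.0])  # assumed excitation amplitude per state
a = np.array([1.5, -0.7])      # stable AR(2) coefficients (poles inside unit circle)

n, state = 500, 0
x = np.zeros(n)
for t in range(2, n):
    state = rng.choice(2, p=P[state])          # hidden Markov state update
    e = levels[state] + 0.01 * rng.normal()    # discrete-state excitation
    x[t] = a[0] * x[t - 1] + a[1] * x[t - 2] + e
print(x.shape)
```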

  6. PySSM: A Python Module for Bayesian Inference of Linear Gaussian State Space Models

    Directory of Open Access Journals (Sweden)

    Christopher Strickland

    2014-04-01

    Full Text Available PySSM is a Python package that has been developed for the analysis of time series using linear Gaussian state space models. PySSM is easy to use; models can be set up quickly and efficiently, and a variety of different settings are available to the user. It also takes advantage of the scientific libraries NumPy and SciPy and other high-level features of the Python language. PySSM is also used as a platform for interfacing with optimized and parallelized Fortran routines. These Fortran routines heavily utilize basic linear algebra (BLAS) and linear algebra package (LAPACK) functions for maximum performance. PySSM contains classes for filtering, classical smoothing, as well as simulation smoothing.
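The model class PySSM targets can be illustrated with a minimal NumPy Kalman filter for a linear Gaussian state space model. This is a sketch of the underlying recursion, not PySSM's own API; the local-level example and its parameters are illustrative.

```python
# Minimal Kalman filter for a linear Gaussian state space model:
#   x_t = F x_{t-1} + w_t,  w_t ~ N(0, Q)
#   y_t = H x_t + v_t,      v_t ~ N(0, R)
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    x, P = x0, P0
    means = []
    for yt in y:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step with Kalman gain K
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        means.append(x.copy())
    return np.array(means)

# local level model: a random walk state observed with noise
F = np.eye(1); H = np.eye(1); Q = np.eye(1) * 0.1; R = np.eye(1) * 1.0
rng = np.random.default_rng(1)
states = np.cumsum(rng.normal(0, 0.3, 100))
y = states + rng.normal(0, 1.0, 100)
filtered = kalman_filter(y.reshape(-1, 1), F, H, Q, R, np.zeros(1), np.eye(1))
```

The filtered state estimates track the latent random walk much more closely than the raw noisy observations do.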

  7. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by applying a constant-phase rotation to the seismic signal, while the other two parameters are obtained by Higher-order Statistics (HOS) (fourth-order cumulant) matching. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.

  8. A study of the Gaussian overlap approach in the two-center shell model

    International Nuclear Information System (INIS)

    Reinhard, P.-G.

    1976-01-01

    The Gaussian overlap approach (GOA) to the generator coordinate method (GCM) is carried through up to fourth order in the derivatives. By diagonalizing the norm overlap, a collective Schroedinger equation is obtained. The potential therein contains the usual potential energy surface (PES) plus correction terms, which subtract the zero-point energies (ZPE) in the PES. The formalism is applied to BCS states obtained from a two-center shell model (TCSM). To understand the crucial role of the pairing contributions in the GOA, a schematic picture, the multi-level model, is constructed. An explicit numerical study of the convergence of the GOA is given for the TCSM, with the result that the GOA seems to be justified for medium and heavy nuclei but critical for light nuclei. (Auth.)

  9. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Full Text Available Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his/her voice. Speaker voice is a robust biometric modality that has a strong impact in several application areas. In this study, a new combination learning scheme is proposed based on the Gaussian mixture model-universal background model (GMM-UBM) and Learning vector quantization (LVQ) for automatic text-independent speaker identification. Feature vectors, constituted by the Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal, are used to train the models on the New England subset of the TIMIT database. The best results obtained on test data using 36 MFCC features were 90% for gender-independent speaker identification, 97% for male speakers, and 93% for female speakers.
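The scoring step can be sketched with scikit-learn. This is a deliberate simplification: a real GMM-UBM system MAP-adapts speaker models from a universal background model, whereas here one GMM is fitted per speaker, and the synthetic 36-dimensional features merely stand in for MFCCs. All names are illustrative.

```python
# Simplified sketch of GMM-based speaker identification on synthetic
# "MFCC-like" features (36-dim, matching the feature count in the abstract).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two synthetic "speakers", each a cloud of 36-dim feature vectors
speakers = {
    "spk_a": rng.normal(0.0, 1.0, size=(500, 36)),
    "spk_b": rng.normal(2.0, 1.0, size=(500, 36)),
}
models = {name: GaussianMixture(n_components=4, random_state=0).fit(feats)
          for name, feats in speakers.items()}

def identify(test_feats):
    # average log-likelihood of the utterance under each speaker model
    scores = {name: gmm.score(test_feats) for name, gmm in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(2.0, 1.0, size=(50, 36))   # utterance drawn near speaker b
```

Identification picks the model under which the test utterance has the highest average log-likelihood.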

  10. Propagation of Gaussian laser beam in cold plasma of Drude model

    International Nuclear Information System (INIS)

    Wang Ying; Yuan Chengxun; Zhou Zhongxiang; Li Lei; Du Yanwei

    2011-01-01

    The propagation characteristics of a Gaussian laser beam in plasmas of the Drude model have been investigated using the complex eikonal function assumption. The dielectric constant of the Drude model is representative and applicable in describing cold unmagnetized plasmas. The dynamics of ponderomotive nonlinearity, spatial diffraction, and collision attenuation are considered. The derived coupling equations determine the variations of the laser beam and the irradiation attenuation. The modified laser beam-width parameter F, the dimensionless axis irradiation intensity I, and the spatial electron density distribution n/n_0 have been studied in connection with the collision frequency, the initial laser intensity and beam-width, and the electron temperature of the plasma. The variations of the laser beam and plasma density due to different selections of parameters are reasonably explained, and the results indicate that the propagation characteristics of laser beams in plasmas can feasibly be modified, which is significant for fast ignition, extended propagation, and other applications.

  11. tgp: An R Package for Bayesian Nonstationary, Semiparametric Nonlinear Regression and Design by Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2007-06-01

    Full Text Available The tgp package for R is a tool for fully Bayesian nonstationary, semiparametric nonlinear regression and design by treed Gaussian processes with jumps to the limiting linear model. Special cases also implemented include Bayesian linear models, linear CART, and stationary separable and isotropic Gaussian processes. In addition to inference and posterior prediction, the package supports the (sequential) design of experiments under these models paired with several objective criteria. 1-d and 2-d plotting, with higher dimension projection and slice capabilities, and tree drawing functions (requiring the maptree and combinat packages) are also provided for visualization of tgp objects.

  12. Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: a high-resolution numerical approach

    KAUST Repository

    Chertock, A.; Fellner, K.; Kurganov, A.; Lorz, A.; Markowich, P. A.

    2012-01-01

    examples, which illustrate (i) the formation of sinking plumes, (ii) the possible merging of neighbouring plumes and (iii) the convergence towards numerically stable stationary plumes. The examples with stable stationary plumes show how the surface

  13. Modelling Photoelectron Production in the Enceladus Plume and Comparison with Observations by CAPS-ELS

    Science.gov (United States)

    Taylor, S. A.; Coates, A. J.; Jones, G.; Wellbrock, A.; Waite, J. H., Jr.

    2016-12-01

    The Electron Spectrometer (ELS) of the Cassini Plasma Spectrometer (CAPS) measures electrons in the energy range 0.6-28,000 eV with an energy resolution of 16.7%. ELS has observed photoelectrons produced in the plume of Enceladus. These photoelectrons are found during Enceladus encounters in the energetic particle shadow, where the spacecraft is shielded from penetrating radiation by the moon [Coates et al, 2013]. A population of photoelectrons at 20-30 eV is observed; such populations are seen at other bodies in the solar system and are usually associated with ionisation by the strong solar He II (30.4 nm) line. We have identified secondary peaks at 40-50 eV detected by ELS, which are interpreted as a warmer population of photoelectrons created through the ionisation of neutrals in the Enceladus torus. We have constructed a model of photoelectron production in the plume and compared it with ELS Enceladus flyby data using automated fitting procedures. This has yielded estimates for electron temperature and density, as well as an estimate of the spacecraft potential, which is then corrected for.

  14. ADAPTIVE BACKGROUND DENGAN METODE GAUSSIAN MIXTURE MODELS UNTUK REAL-TIME TRACKING

    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih

    2008-01-01

    Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. A method to separate the background from the tracked object is required for motion tracking. It is not hard to develop such an application if tracking is performed against a static background, but it is more difficult if the tracked object is in a place with a non-static background, because the changing parts of the background can be mistaken for the tracking area. To handle this problem, an application can be made that separates the background in a way that adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data with the pixel colour value as its basis. After the clusters are formed, the dominant distributions are chosen as background distributions. The application was implemented using Microsoft Visual C++ 6.0. The results of this research show that the GMM algorithm can build an adaptive background satisfactorily, as proven by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive-background construction process.
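The per-pixel idea can be sketched as follows: cluster a single pixel's intensity history with a GMM, take the dominant (highest-weight) component as background, and flag values unlikely under it as foreground. This is illustrative only and omits the online adaptive update of the full method; all values are synthetic.

```python
# Sketch: GMM on one pixel's intensity history, dominant component = background.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# history of one pixel: mostly background near 100, occasional foreground near 200
history = np.concatenate([rng.normal(100, 3, 450), rng.normal(200, 5, 50)])
gmm = GaussianMixture(n_components=2, random_state=0).fit(history.reshape(-1, 1))

bg = int(np.argmax(gmm.weights_))            # dominant component = background
mean, var = gmm.means_[bg, 0], gmm.covariances_[bg, 0, 0]

def is_foreground(value, k=2.5):
    # flag values more than k background standard deviations from the mean
    return abs(value - mean) > k * np.sqrt(var)
```

In a full implementation this decision runs per pixel per frame, with the mixture weights, means, and variances updated online as the scene changes.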

  15. Entrainment by turbulent plumes

    Science.gov (United States)

    Parker, David; Burridge, Henry; Partridge, Jamie; Linden, Paul

    2017-11-01

    Plumes are of relevance in nature and of real consequence to industry. While the Morton, Taylor & Turner (1956) plume model is able to estimate the mean physical flux parameters, the process of entrainment is only parametrised in a time-averaged sense, and a deeper understanding is key to understanding how plumes evolve. Various flow configurations, resulting in different entrainment values, are considered; we perform simultaneous PIV and plume-edge detection on saline plumes in water resulting from a point source, a line source, and a line source where a vertical wall is placed immediately adjacent. Of particular interest is the effect that the large-scale eddies, forming at the edge of the plume and engulfing ambient fluid, have on the entrainment process. By using velocity statistics in a coordinate system based on the instantaneous scalar edge of the plume, the significance of this large-scale engulfment is quantified. It is found that significant mass is transported outside the plumes, in particular in regions where large-scale structures are absent, creating regions of relatively high-momentum ambient fluid. This suggests that the large-scale processes, whereby ambient fluid is engulfed into the plume, contribute significantly to the entrainment.

  16. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is widely used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the increased tracking error of the GGIW-CPHD algorithm when group targets maneuver. The best-fitting Gaussian approximation method is used to implement the fusion of multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. The simulation results show that the proposed MM-GGIW-CPHD tracking algorithm can effectively deal with the combination/spawning of groups, and the tracking error for group targets in the maneuvering stage is decreased.

  17. Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: a high-resolution numerical approach

    KAUST Repository

    Chertock, A.

    2012-02-02

    Aquatic bacteria like Bacillus subtilis are heavier than water yet they are able to swim up an oxygen gradient and concentrate in a layer below the water surface, which will undergo Rayleigh-Taylor-type instabilities for sufficiently high concentrations. In the literature, a simplified chemotaxis-fluid system has been proposed as a model for bio-convection in modestly diluted cell suspensions. It couples a convective chemotaxis system for the oxygen-consuming and oxytactic bacteria with the incompressible Navier-Stokes equations subject to a gravitational force proportional to the relative surplus of the cell density compared to the water density. In this paper, we derive a high-resolution vorticity-based hybrid finite-volume finite-difference scheme, which allows us to investigate the nonlinear dynamics of a two-dimensional chemotaxis-fluid system with boundary conditions matching an experiment of Hillesdon et al. (Bull. Math. Biol., vol. 57, 1995, pp. 299-344). We present selected numerical examples, which illustrate (i) the formation of sinking plumes, (ii) the possible merging of neighbouring plumes and (iii) the convergence towards numerically stable stationary plumes. The examples with stable stationary plumes show how the surface-directed oxytaxis continuously feeds cells into a high-concentration layer near the surface, from where the fluid flow (recurring upwards in the space between the plumes) transports the cells into the plumes, where then gravity makes the cells sink and constitutes the driving force in maintaining the fluid convection and, thus, in shaping the plumes into (numerically) stable stationary states. Our numerical method is fully capable of solving the coupled chemotaxis-fluid system and enabling a full exploration of its dynamics, which cannot be done in a linearised framework. © 2012 Cambridge University Press.

  18. Construction of the exact Fisher information matrix of Gaussian time series models by means of matrix differential rules

    NARCIS (Netherlands)

    Klein, A.A.B.; Melard, G.; Zahaf, T.

    2000-01-01

    The Fisher information matrix is of fundamental importance for the analysis of parameter estimation of time series models. In this paper the exact information matrix of a multivariate Gaussian time series model expressed in state space form is derived. A computationally efficient procedure is used

  19. Fractional Gaussian noise-enhanced information capacity of a nonlinear neuron model with binary signal input

    Science.gov (United States)

    Gao, Feng-Yin; Kang, Yan-Mei; Chen, Xi; Chen, Guanrong

    2018-05-01

    This paper reveals the effect of fractional Gaussian noise (fGn) with Hurst exponent H ∈ (1/2, 1) on the information capacity of a general nonlinear neuron model with binary signal input. The fGn and its corresponding fractional Brownian motion exhibit long-range, strongly dependent increments, extending standard Brownian motion to many types of fractional processes found in nature, such as synaptic noise. In the paper, for the subthreshold binary signal, sufficient conditions are given based on the "forbidden interval" theorem to guarantee the occurrence of stochastic resonance, while for the suprathreshold binary signal, the simulated results show that additive fGn with Hurst exponent H ∈ (1/2, 1) can increase the mutual information or bit count. The investigation indicates that synaptic noise with the characteristics of long-range dependence and self-similarity might be the driving factor for efficient encoding and decoding in the nervous system.

  20. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    Science.gov (United States)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable. The issue of how to recognize ships with fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is scored against the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and achieves satisfactory ship recognition performance.
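The feature step can be sketched in NumPy with Hu's first invariant moment (a full system would compute all seven invariants and feed them to per-class GMMs). The blob image and function names below are illustrative; the test property shown is the translation invariance the abstract relies on.

```python
# Sketch of the feature extraction: Hu's first invariant moment in NumPy.
import numpy as np

def hu1(img):
    # first Hu moment: eta20 + eta02 (invariant to translation and scale)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - cx) ** 2 * img).sum()          # central moments
    mu02 = ((y - cy) ** 2 * img).sum()
    # normalized central moments: eta_pq = mu_pq / m00^(1 + (p+q)/2), p+q = 2
    return (mu20 + mu02) / m00 ** 2

# a synthetic "ship" blob, and the same blob translated: hu1 should match
img = np.zeros((40, 40))
img[10:20, 5:15] = 1.0
shifted = np.roll(img, (15, 20), axis=(0, 1))
```

Because the invariants are unchanged under translation (and rotation and scaling, for the full set), GMMs trained on them see the same feature vector regardless of where and how the ship appears in the frame.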

  1. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    Science.gov (United States)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to overlapping spots and discordant spot counts. To address counting errors from overlapping spots, a Gaussian Mixture Model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Via a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
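The core model-selection idea, choosing a spot count by comparing GMM information criteria, can be sketched as follows. The synthetic two-spot point cloud and all parameters are illustrative, not the FISH-IS pipeline itself.

```python
# Sketch: pick the number of overlapping spots by fitting GMMs with varying
# component counts to spot-pixel coordinates and minimizing BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# pixel coordinates sampled from two closely overlapping "spots"
spots = np.vstack([rng.normal([10, 10], 1.0, size=(200, 2)),
                   rng.normal([14, 10], 1.0, size=(200, 2))])

# fit candidate mixtures and keep the component count with the lowest BIC
bics = {k: GaussianMixture(k, random_state=0).fit(spots).bic(spots)
        for k in (1, 2, 3)}
n_spots = min(bics, key=bics.get)
```

BIC penalizes extra components, so the criterion separates "one blurred spot" from "two overlapping spots" without a hard segmentation boundary.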

  2. Asymptotic properties of Pearson's rank-variate correlation coefficient under contaminated Gaussian model.

    Science.gov (United States)

    Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu

    2014-01-01

    This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive-noise-free. As shown in our previous work, these scenarios, frequently encountered in radar and/or sonar, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish asymptotic closed forms of the expectation and variance of PRVCC by means of the well-known Delta method. To gain a deeper understanding, we also compare PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of PRVCC through an example of estimating the time delay in a particular impulsive noise environment.
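The simulation setup can be sketched with SciPy. PRVCC itself is not available in SciPy, so the sketch uses Spearman's rho, one of the rank-based baselines the abstract compares against, to illustrate why rank-based estimators resist one-channel impulsive contamination while plain Pearson correlation does not. All parameter choices are illustrative.

```python
# Sketch of a bivariate contaminated Gaussian model (CGM): one channel clean,
# the other hit by sparse high-variance impulses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, rho = 5000, 0.8
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x, y = xy[:, 0], xy[:, 1].copy()

# contaminate 5% of the y channel with high-variance impulsive noise
mask = rng.random(n) < 0.05
y[mask] += rng.normal(0, 30, mask.sum())

pearson = stats.pearsonr(x, y)[0]     # badly degraded by the impulses
spearman = stats.spearmanr(x, y)[0]   # rank-based: largely unaffected
```

The impulses inflate the variance of `y` and crush the product-moment estimate, while ranks change only for the contaminated 5% of samples.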

  3. Optimal multigrid algorithms for the massive Gaussian model and path integrals

    International Nuclear Information System (INIS)

    Brandt, A.; Galun, M.

    1996-01-01

    Multigrid algorithms are presented which, in addition to eliminating the critical slowing down, can also eliminate the "volume factor". The elimination of the volume factor removes the need to produce many independent fine-grid configurations for averaging out their statistical deviations, by averaging over the many samples produced on coarse grids during the multigrid cycle. Thermodynamic limits of observables can be calculated to relative accuracy ε_r in just O(ε_r^-2) computer operations, where ε_r is the error relative to the standard deviation of the observable. In this paper, we describe in detail the calculation of the susceptibility in the one-dimensional massive Gaussian model, which is also a simple example of path integrals. Numerical experiments show that the susceptibility can be calculated to relative accuracy ε_r in about 8 ε_r^-2 random number generations, independent of the mass size.

  4. A Grasp-Pose Generation Method Based on Gaussian Mixture Models

    Directory of Open Access Journals (Sweden)

    Wenjia Wu

    2015-11-01

    Full Text Available A Gaussian Mixture Model (GMM-based grasp-pose generation method is proposed in this paper. Through offline training, the GMM is set up and used to depict the distribution of the robot's reachable orientations. By dividing the robot's workspace into small 3D voxels and training the GMM for each voxel, a look-up table covering all the workspace is built with the x, y and z positions as the index and the GMM as the entry. Through the definition of Task Space Regions (TSR, an object's feasible grasp poses are expressed as a continuous region. With the GMM, grasp poses can be preferentially sampled from regions with high reachability probabilities in the online grasp-planning stage. The GMM can also be used as a preliminary judgement of a grasp pose's reachability. Experiments on both a simulated and a real robot show the superiority of our method over the existing method.

  5. Joint hierarchical Gaussian process model with application to personalized prediction in medical monitoring.

    Science.gov (United States)

    Duan, Leo L; Wang, Xia; Clancy, John P; Szczesniak, Rhonda D

    2018-01-01

    A two-level Gaussian process (GP) joint model is proposed to improve personalized prediction of medical monitoring data. The proposed model is applied to jointly analyze multiple longitudinal biomedical outcomes, including continuous measurements and binary outcomes, to achieve better prediction in disease progression. At the population level of the hierarchy, two independent GPs are used to capture the nonlinear trends in both the continuous biomedical marker and the binary outcome, respectively; at the individual level, a third GP, which is shared by the longitudinal measurement model and the longitudinal binary model, induces the correlation between these two model components and strengthens information borrowing across individuals. The proposed model is particularly advantageous in personalized prediction. It is applied to the motivating clinical data on cystic fibrosis disease progression, for which lung function measurements and onset of acute respiratory events are monitored jointly throughout each patient's clinical course. The results from both the simulation studies and the cystic fibrosis data application suggest that the inclusion of the shared individual-level GPs under the joint model framework leads to important improvements in personalized disease progression prediction.

  6. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input-output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
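The feature that PMML 4.3 standardizes, a GPR prediction with confidence bounds, can be sketched with scikit-learn rather than a PMML engine. The data, kernel, and interval construction below are illustrative assumptions.

```python
# Sketch: a GPR prediction with 95% confidence bounds (the kind of output
# PMML 4.3's GaussianProcessModel element is designed to represent).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.linspace(0, 10, 25).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=25)

# RBF kernel for the signal plus a WhiteKernel for observation noise
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(X, y)
mean, std = gp.predict(np.array([[5.0]]), return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% confidence bound
```

Serializing such a model to PMML amounts to writing out the kernel family, its fitted hyperparameters, and the training data needed to reproduce `mean` and `std` at prediction time.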

  7. Observation of thermal plumes from submerged discharges in the Great Lakes and their implications for modeling and monitoring

    International Nuclear Information System (INIS)

    Ditmars, J.D.; Paddock, R.A.; Frigo, A.A.

    1977-01-01

    Measurements of thermal plumes from submerged discharges of power plant cooling water into the Great Lakes provide the opportunity to view the mixing processes at prototype scales and to observe the effects of the ambient environment on those processes. Examples of thermal plume behavior in Great Lakes ambient environments are presented to demonstrate the importance of measuring the detailed structure of the ambient environment, as well as of the plumes, when interpreting prototype data for modeling and monitoring purposes. The examples are drawn from studies by Argonne National Laboratory (ANL) at the Zion Nuclear Power Station and the D. C. Cook Nuclear Plant on Lake Michigan and at the J. A. FitzPatrick Nuclear Power Plant on Lake Ontario. These studies included measurements of water temperatures from a moving boat, which provide a quasi-synoptic view of the three-dimensional temperature structure of the thermal plume and the ambient water environment. Additional measurements of water velocities, made with continuously recording, moored, and profiling current meters, and of wind provide data on the detailed structure of the ambient environment. The detailed structure of the ambient environment, in terms of current, current shear, variable winds, and temperature stratification, often greatly influences thermal plume behavior. Although predictive modeling techniques and monitoring objectives often ignore the detailed aspects of the ambient environment, useful interpretation of prototype data for model evaluation or calibration and for monitoring purposes requires detailed measurement of the ambient environment. Examination of prototype thermal plume data indicates that, in several instances, attention to only the gross characteristics of the ambient environment can be misleading and could result in significant errors in model calibration and in extrapolation of data bases gathered in monitoring observations.

  8. Gaussian Graphical Models Identify Networks of Dietary Intake in a German Adult Population.

    Science.gov (United States)

    Iqbal, Khalid; Buijsse, Brian; Wirth, Janine; Schulze, Matthias B; Floegel, Anna; Boeing, Heiner

    2016-03-01

    Data-reduction methods such as principal component analysis are often used to derive dietary patterns. However, such methods do not assess how foods are consumed in relation to each other. Gaussian graphical models (GGMs) are a set of novel methods that can address this issue. We sought to apply GGMs to derive sex-specific dietary intake networks representing consumption patterns in a German adult population. Dietary intake data from 10,780 men and 16,340 women of the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort were cross-sectionally analyzed to construct dietary intake networks. Food intake for each participant was estimated using a 148-item food-frequency questionnaire that captured the intake of 49 food groups. GGMs were applied to log-transformed intakes (grams per day) of 49 food groups to construct sex-specific food networks. Semiparametric Gaussian copula graphical models (SGCGMs) were used to confirm GGM results. In men, GGMs identified 1 major dietary network that consisted of intakes of red meat, processed meat, cooked vegetables, sauces, potatoes, cabbage, poultry, legumes, mushrooms, soup, and whole-grain and refined breads. For women, a similar network was identified with the addition of fried potatoes. Other identified networks consisted of dairy products and sweet food groups. SGCGMs yielded results comparable to those of GGMs. GGMs are a powerful exploratory method that can be used to construct dietary networks representing dietary intake patterns that reveal how foods are consumed in relation to each other. GGMs indicated an apparent major role of red meat intake in a consumption pattern in the studied population. In the future, identified networks might be transformed into pattern scores for investigating their associations with health outcomes. © 2016 American Society for Nutrition.
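The GGM estimation step can be sketched with scikit-learn's graphical lasso, which estimates a sparse inverse covariance (precision) matrix; conditional independence between two variables corresponds to a zero in that matrix. The synthetic "food intake" variables and all thresholds below are illustrative, not the EPIC-Potsdam data.

```python
# Sketch: estimate a sparse precision matrix and read a dietary-style network
# off its zero pattern. Variable names are illustrative stand-ins.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n = 2000
red_meat = rng.normal(size=n)
sauces = 0.8 * red_meat + 0.6 * rng.normal(size=n)   # consumed with red meat
dairy = rng.normal(size=n)                            # independent food group
X = np.column_stack([red_meat, sauces, dairy])

model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_
# a (near-)zero off-diagonal precision entry = conditional independence,
# i.e. no edge between those food groups in the network
edges = np.abs(precision) > 0.1
```

In the dietary application, the nonzero off-diagonal entries define the food network, and edge strength can be read from the corresponding partial correlations.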

  9. Fast pencil beam dose calculation for proton therapy using a double-Gaussian beam model

    Directory of Open Access Journals (Sweden)

    Joakim da Silva

    2015-12-01

    Full Text Available The highly conformal dose distributions produced by scanned proton pencil beams are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a pencil beam algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such pencil beam algorithm for proton therapy running on a GPU. We employ two different parametrizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of pencil beams in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included whilst prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Further, the calculation time is relatively unaffected by the parametrization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
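The lateral beam model described above can be sketched as a weighted sum of a narrow primary Gaussian and a wide, low-amplitude halo Gaussian. The weights and widths below are illustrative placeholders, not fitted beam-line parameters.

```python
# Sketch of a double-Gaussian lateral dose profile: primary core plus wide halo.
import numpy as np

def lateral_dose(r, w=0.9, sigma1=0.5, sigma2=2.5):
    # w: weight of the primary component; sigma2 >> sigma1 models the halo
    g1 = np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return w * g1 + (1 - w) * g2

r = np.linspace(0, 8, 200)   # radial distance off the beam axis (arbitrary units)
dose = lateral_dose(r)
```

A single Gaussian with the core width would underestimate the dose a few widths off-axis; the second, wide component restores the low-dose tail that matters when many pencil beams overlap in a plan.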

  10. Modeling ozone plumes observed downwind of New York City over the North Atlantic Ocean during the ICARTT field campaign

    Directory of Open Access Journals (Sweden)

    S.-H. Lee

    2011-07-01

    Transport and chemical transformation of well-defined New York City (NYC) urban plumes over the North Atlantic Ocean were studied using aircraft measurements collected on 20–21 July 2004 during the ICARTT (International Consortium for Atmospheric Research on Transport and Transformation) field campaign and WRF-Chem (Weather Research and Forecasting-Chemistry) model simulations. The strong NYC urban plumes were characterized by carbon monoxide (CO) mixing ratios of 350–400 parts per billion by volume (ppbv) and ozone (O3) levels of about 100 ppbv near New York City on 20 July in the WP-3D in-situ and DC-3 lidar aircraft measurements. On 21 July, the two aircraft captured strong urban plumes with about 350 ppbv CO and over 150 ppbv O3 (~160 ppbv maximum) about 600 km downwind of NYC over the North Atlantic Ocean. The measured urban plumes extended vertically up to about 2 km near New York City, but shrank to 1–1.5 km over the stable marine boundary layer (MBL) over the North Atlantic Ocean. The WRF-Chem model reproduced the ozone formation processes, chemical characteristics, and meteorology of the measured urban plumes near New York City (20 July) and in the far downwind region over the North Atlantic Ocean (21 July). The quasi-Lagrangian analysis of transport and chemical transformation of the simulated NYC urban plumes using WRF-Chem results showed that the pollutants can be efficiently transported in (isentropic) layers in the lower atmosphere (<2–3 km) over the North Atlantic Ocean while maintaining a dynamic vertical decoupling through the cessation of turbulence in the stable MBL. The O3 mixing ratio in the NYC urban plumes remained at 80–90 ppbv during nocturnal transport over the stable MBL, then grew to over 100 ppbv through daytime oxidation of nitrogen oxides (NOx = NO + NO2) with mixing ratios on the order of 1 ppbv. Efficient transport of reactive nitrogen species (NOy), specifically nitric

  11. Dynamic Socialized Gaussian Process Models for Human Behavior Prediction in a Health Social Network

    Science.gov (United States)

    Shen, Yelong; Phan, NhatHai; Xiao, Xiao; Jin, Ruoming; Sun, Junfeng; Piniewski, Brigitte; Kil, David; Dou, Dejing

    2016-01-01

    Modeling and predicting human behaviors, such as the level and intensity of physical activity, is key to preventing the cascade of obesity and to helping spread healthy behaviors in a social network. In our conference paper, we developed a social influence model, named Socialized Gaussian Process (SGP), for socialized human behavior modeling. Instead of explicitly modeling social influence as individuals' behaviors influenced by their friends' previous behaviors, SGP models the dynamic social correlation as the result of social influence. The SGP model naturally incorporates a personal behavior factor and a social correlation factor (i.e., the homophily principle: friends tend to perform similar behaviors) into a unified model, and it models the social influence factor (i.e., an individual's behavior can be affected by his/her friends) implicitly through dynamic social correlation schemes. A detailed experimental evaluation has shown that the SGP model achieves better prediction accuracy than most baseline methods. However, a Socialized Random Forest model may perform better in the early stages than the SGP model. One of the main reasons is that the dynamic social correlation function is based purely on the users' sequential behaviors, without considering other physical-activity-related features. To address this issue, we further propose a novel "multi-feature SGP model" (mfSGP) which improves the SGP model by using multiple physical-activity-related features in the dynamic social correlation learning. Extensive experimental results illustrate that the mfSGP model clearly outperforms all other models in terms of prediction accuracy and running time. PMID:27746515

  12. Modeling of a VMJ PV array under Gaussian high intensity laser power beam condition

    Science.gov (United States)

    Eom, Jeongsook; Kim, Gunzung; Park, Yongwan

    2018-02-01

    The high intensity laser power beaming (HILPB) system is one of the most promising systems in the long-range wireless power transfer field. The vertical multi-junction photovoltaic (VMJ PV) array converts the HILPB into electricity to power a load or charge a battery. The output power of a VMJ PV array depends mainly on the irradiance at each of its VMJ PV cells. To simulate an entire VMJ PV array, the irradiance profile of the Gaussian HILPB and the irradiance level of each VMJ PV cell are first modeled mathematically. The VMJ PV array is modeled as a network with dimension m*n, where m represents the number of VMJ PV cells in a column and n the number of VMJ PV cells in a row. In order to validate the results obtained in modeling and simulation, a laboratory setup was developed using a 55 VMJ PV array. Using the output power model of the VMJ PV array, the receiver can establish an optimal power transmission path based on the received signal strength. When laser beams from multiple transmitters are aimed at a VMJ PV array at the same time, the received power is the sum of the energy delivered by all of them. Each transmitter sends its power characteristics as optically coded laser pulses and delivers power as HILPB. Using the attenuated power model and the output power model of the VMJ PV array, the receiver can estimate the maximum receivable powers from the transmitters and select optimal transmitters.
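    The per-cell irradiance of a Gaussian beam over an m*n cell grid can be sketched as follows. The beam power, 1/e^2 radius, grid size, and cell size are illustrative numbers, not the paper's parameters:

```python
import numpy as np

def cell_powers(P_beam, w, n_rows, n_cols, cell_size):
    """Approximate optical power (W) collected by each cell of an
    n_rows x n_cols VMJ PV array centred on a Gaussian beam axis.
    The beam has total power P_beam (W) and 1/e^2 radius w (m); each
    small square cell is treated as seeing its centre irradiance
    I(r) = 2P/(pi w^2) exp(-2 r^2 / w^2) uniformly (flat-top per cell)."""
    rows = (np.arange(n_rows) - (n_rows - 1) / 2) * cell_size
    cols = (np.arange(n_cols) - (n_cols - 1) / 2) * cell_size
    Y, X = np.meshgrid(rows, cols, indexing="ij")
    I = (2 * P_beam / (np.pi * w**2)) * np.exp(-2 * (X**2 + Y**2) / w**2)
    return I * cell_size**2

# illustrative 5x5 grid of 1 cm cells under a 100 W beam with w = 2 cm
P = cell_powers(P_beam=100.0, w=0.02, n_rows=5, n_cols=5, cell_size=0.01)
```

    Summing `P` over the grid gives the total received power, which is the quantity the receiver would use to rank transmitters.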

  13. Determining the Price of an Asian Option Contract Using the Normal Inverse Gaussian (NIG) Simulation Model

    Directory of Open Access Journals (Sweden)

    I PUTU OKA PARAMARTHA

    2015-02-01

    The aim of this research is to simulate and calculate the price of an Asian Option using the Normal Inverse Gaussian (NIG) method and the Monte Carlo method in MATLAB. The results of the two models are compared and the fairer price is selected. Besides the accuracy of the simulated stock price, the MATLAB execution speed is measured for both models to assess time efficiency. The first part sets the variables used to calculate the trajectory of the stock price at time t. The second part simulates the stock price with the NIG model, and the third part simulates it with the Monte Carlo model. After simulating the stock price, the pay-off of the Asian Option is calculated, and the price of the Asian Option is estimated by averaging the pay-off values over all iterations. The last part compares the results of the two models. The NIG model produces a fairer price, because it prices the contract using four parameters, α, β, δ, and μ, whereas Monte Carlo uses only two parameters, μ and σ. In terms of execution time, the Monte Carlo model is faster in all iterations.
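    The Monte Carlo side of the comparison can be sketched with a minimal arithmetic-average Asian call pricer. Geometric Brownian motion with parameters μ (here the risk-free rate) and σ is used, as in the two-parameter Monte Carlo model of the abstract; all numerical inputs are illustrative:

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=50, n_paths=20000, seed=1):
    """Price an arithmetic-average Asian call by Monte Carlo under
    geometric Brownian motion (the two-parameter model; the NIG model
    would replace the Gaussian increments with NIG increments)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    payoff = np.maximum(S.mean(axis=1) - K, 0.0)  # pay-off on the average
    return np.exp(-r * T) * payoff.mean()         # discounted expectation
```

    Swapping the increment distribution is the only structural change needed to move from this pricer to an NIG-driven one.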

  14. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely across a variety of domains, such as transportation, logistics, and healthcare. However, missing values are very common in the IoT for a variety of reasons, so experimental data are often incomplete. As a result, some work that depends on IoT data cannot be carried out normally, and the accuracy and reliability of data analysis results are reduced. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve these problems: a model of missing value imputation based on context and linear mean (MCL), a model based on binary search (MBS), and a model based on a Gaussian mixture model (MGI). Experimental results show that the three models greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
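    A sketch of GMM-based imputation in the spirit of MGI (not the paper's exact algorithm): fit a mixture to fully observed records, then impute a record's missing coordinates with the mixture's conditional mean given its observed coordinates. The function name and component count are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_impute(X_complete, x_obs, obs_idx, mis_idx, n_components=2, seed=0):
    """Impute the missing coordinates of one record with the conditional
    mean of a Gaussian mixture fitted to fully observed records.
    A sketch of the MGI idea, not the paper's exact algorithm."""
    gm = GaussianMixture(n_components=n_components, covariance_type="full",
                         random_state=seed).fit(X_complete)
    cond_means, log_resp = [], []
    for k in range(n_components):
        mu, S = gm.means_[k], gm.covariances_[k]
        Soo = S[np.ix_(obs_idx, obs_idx)]
        Smo = S[np.ix_(mis_idx, obs_idx)]
        diff = x_obs - mu[obs_idx]
        sol = np.linalg.solve(Soo, diff)
        cond_means.append(mu[mis_idx] + Smo @ sol)   # E[x_mis | x_obs, k]
        _, logdet = np.linalg.slogdet(Soo)           # component evidence
        log_resp.append(np.log(gm.weights_[k]) - 0.5 * (
            logdet + diff @ sol + len(obs_idx) * np.log(2 * np.pi)))
    resp = np.exp(np.array(log_resp) - max(log_resp))
    resp /= resp.sum()                               # responsibilities
    return resp @ np.array(cond_means)
```

    On strongly correlated data the conditional mean recovers the missing coordinate from the observed one with error on the order of the residual noise.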

  16. 3D Numerical Model of Continental Breakup via Plume Lithosphere Interaction Near Cratonic Blocks: Implications for the Tanzanian Craton

    Science.gov (United States)

    Koptev, A.; Calais, E.; Burov, E. B.; Leroy, S. D.; Gerya, T.

    2014-12-01

    Although many continental rift basins and their successfully rifted counterparts at passive continental margins are magmatic, some are not. This dichotomy prompted end-member views of the mechanism driving continental rifting: deep-seated and mantle plume-driven for some, shallow and owing to lithospheric stretching for others. In that regard, the East African Rift (EAR), the 3000 km-long divergent boundary between the Nubian and Somalian plates, provides a unique setting with the juxtaposition of the eastern, magma-rich, and western, magma-poor, branches on either side of the 250-km thick Tanzanian craton. Here we implement a high-resolution, rheologically realistic 3D numerical model of plume-lithosphere interactions in extensional far-field settings to explain this contrasted behaviour in a unified framework, starting from simple, symmetrical initial conditions with an isolated mantle plume rising beneath a craton in an east-west tensional far-field stress. The upwelling mantle plume is deflected by the cratonic keel and preferentially channelled along one of its sides. This leads to the coeval development of a magma-rich branch above the plume head and a magma-poor one along the opposite side of the craton, the formation of a rotating microplate between the two rift branches, and the feeding of melt to both branches from a single mantle source. The model bears strong similarities to the evolution of the eastern and western branches of the central EAR and the geodetically observed rotation of the Victoria microplate. This result reconciles the active (plume-driven) versus passive (far-field tectonic stresses) rift models, as our experiments show both processes in action and demonstrate the possibility of developing both magmatic and amagmatic rifts in identical geotectonic environments.

  17. Recommendations on dose buildup factors used in models for calculating gamma doses for a plume

    International Nuclear Information System (INIS)

    Hedemann Jensen, P.; Thykier-Nielsen, S.

    1980-09-01

    Calculations of external γ-doses from radioactivity released to the atmosphere have been made using different dose buildup factor formulas. Some of the dose buildup factor formulas are used by the Nordic countries in their respective γ-dose models. A comparison of calculated γ-doses using these dose buildup factors shows that the γ-doses can be significantly dependent on the buildup factor formula used in the calculation. Increasing differences occur for increasing plume height, crosswind distance, and atmospheric stability and also for decreasing downwind distance. It is concluded that the most accurate γ-dose can be calculated by use of Capo's polynomial buildup factor formula. Capo-coefficients have been calculated and shown in this report for γ-energies below the original lower limit given by Capo. (author)
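    As a concrete illustration of how a buildup factor enters a plume γ-dose calculation, the widely used Berger form B(μr) = 1 + a·μr·exp(b·μr) can stand in for the polynomial formulas compared in the report. The coefficients below are illustrative, not Capo's fitted values:

```python
import math

def berger_buildup(mu_r, a=1.0, b=0.05):
    """Berger-form dose buildup factor B(mu*r) = 1 + a*mu*r*exp(b*mu*r),
    where mu_r is the number of mean free paths between source point and
    receptor. Coefficients a, b are illustrative, not fitted values."""
    return 1.0 + a * mu_r * math.exp(b * mu_r)

def built_up_dose(uncollided_dose, mu_r, a=1.0, b=0.05):
    """Scale the uncollided gamma dose by the buildup factor to account
    for photons scattered back toward the receptor."""
    return uncollided_dose * berger_buildup(mu_r, a, b)
```

    Since B grows with μr, the sensitivity of the computed γ-dose to the chosen buildup formula increases with plume height and crosswind distance, consistent with the report's findings.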

  18. Modelling of plume chemistry of high flying aircraft with H2 combustion engines

    International Nuclear Information System (INIS)

    Weibring, G.; Zellner, R.

    1993-01-01

    Emissions from hydrogen-fueled aircraft engines include large concentrations of radicals such as NO, OH, O and H. We describe the results of modelling studies in which the evolution of the radical chemistry in an expanding and cooling plume is evaluated for three different mixing velocities. The simulations were made for hydrogen combustion engines at an altitude of 26 km. For the fastest mixing conditions, the radical concentrations decrease only because of dilution with the ambient air, since the time available for chemical reaction is too short. With lower mixing velocities, however, larger chemical conversions were found. For the slowest mixing conditions the unburned hydrogen is converted into water. As a consequence the radicals O and OH increase considerably around 1400 K, the only exception being NO, for which no chemical change during the expansion is found. The concentrations of reservoir molecules like H2O2, N2O5 or HNO3 have been calculated to remain relatively small. (orig.)

  19. Bivariate Gaussian bridges: directional factorization of diffusion in Brownian bridge models.

    Science.gov (United States)

    Kranstauber, Bart; Safi, Kamran; Bartumeus, Frederic

    2014-01-01

    In recent years high-resolution animal tracking data have become the standard in movement ecology. The Brownian Bridge Movement Model (BBMM) is a widely adopted approach to describe animal space use from such high-resolution tracks. One of the underlying assumptions of the BBMM is isotropic diffusive motion between consecutive locations, i.e. invariant with respect to direction. Here we propose to relax this often unrealistic assumption by separating the Brownian motion variance into two directional components, one parallel and one orthogonal to the direction of motion. Our new model, the Bivariate Gaussian bridge (BGB), tracks movement heterogeneity across time. Using the BGB to identify directed and non-directed movement within a trajectory resulted in more accurate utilisation distributions compared to dynamic Brownian bridges, especially for trajectories with non-isotropic diffusion, such as directed movement or Lévy-like movements. We evaluated our model with simulated trajectories and observed tracks, demonstrating that the improvement of our model scales with the directional correlation of a correlated random walk. We find that many animal trajectories do not adhere to the assumptions of the BBMM. The proposed model improves accuracy when describing space use in both simulated correlated random walks and observed animal tracks. Our novel approach is implemented and available within the "move" package for R.
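    The directional factorization can be sketched by rotating a diagonal covariance, with separate parallel and orthogonal variances scaled by the Brownian-bridge profile, into the frame of the segment between two consecutive fixes. This is a simplified reading of the BGB with illustrative parameters, not the "move" package implementation:

```python
import numpy as np

def bgb_moments(p0, p1, alpha, sigma_par, sigma_orth):
    """Mean and covariance of the bridge position at fraction alpha in
    [0, 1] along the segment p0 -> p1, with separate diffusion parallel
    and orthogonal to the direction of motion (a sketch of the BGB idea)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    u = d / np.linalg.norm(d)                    # unit vector along motion
    R = np.array([[u[0], -u[1]], [u[1], u[0]]])  # rotates x-axis onto u
    s = alpha * (1.0 - alpha)                    # Brownian-bridge profile
    C_local = np.diag([sigma_par**2 * s, sigma_orth**2 * s])
    return (1.0 - alpha) * p0 + alpha * p1, R @ C_local @ R.T
```

    Setting sigma_par = sigma_orth recovers the isotropic BBMM as a special case.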

  20. On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations

    Directory of Open Access Journals (Sweden)

    Yongning Wang

    2013-04-01

    This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of autocorrelation matrices of the cross-product vector of standardized residuals. This is different from the traditional approach that employs only the squared series of standardized residuals. We then study two portmanteau statistics, called Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we also investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated and the results suggest a procedure of using Q1(M) and Q2(M) jointly in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.
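    The flavour of such portmanteau checks can be illustrated with the classical univariate Ljung-Box statistic applied to a cross-product series of standardized residuals. This is a simplification for illustration, not the paper's exact Q1(M) or Q2(M):

```python
import numpy as np
from scipy import stats

def ljung_box(x, M):
    """Ljung-Box portmanteau statistic over lags 1..M and its chi-square
    p-value; x could be, e.g., the elementwise product of two standardized
    residual series from a fitted VARMA-GARCH model."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    denom = x @ x
    q = sum((x[l:] @ x[:-l] / denom) ** 2 / (n - l) for l in range(1, M + 1))
    q *= n * (n + 2)
    return q, stats.chi2.sf(q, df=M)
```

    A small p-value flags remaining serial dependence in the cross-product series, i.e. model inadequacy.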

  1. Bayesian modeling of JET Li-BES for edge electron density profiles using Gaussian processes

    Science.gov (United States)

    Kwak, Sehyun; Svensson, Jakob; Brix, Mathias; Ghim, Young-Chul; JET Contributors Collaboration

    2015-11-01

    A Bayesian model for the JET lithium beam emission spectroscopy (Li-BES) system has been developed to infer edge electron density profiles. The 26 spatial channels measure emission profiles with ~15 ms temporal resolution and ~1 cm spatial resolution. The lithium I (2p-2s) line radiation in an emission spectrum is calculated using a multi-state model, which expresses collisions between the neutral lithium beam atoms and the plasma particles as a set of differential equations. The emission spectrum model includes photon and electronic noise, spectral line shapes, interference filter curves, and relative calibrations. This spectral modeling removes the need for separate background measurements when calculating the intensity of the line radiation. Gaussian processes are applied to model both the emission spectrum and the edge electron density profile, and the electron temperature needed to calculate the rate coefficients is obtained from the JET high resolution Thomson scattering (HRTS) system. The posterior distributions of the edge electron density profile are explored via numerical techniques and Markov chain Monte Carlo (MCMC) sampling. See the Appendix of F. Romanelli et al., Proceedings of the 25th IAEA Fusion Energy Conference 2014, Saint Petersburg, Russia.
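    Gaussian-process inference of a smooth profile from noisy channel measurements can be sketched generically with a squared-exponential kernel. This is not the actual Li-BES forward model, and all hyperparameters are illustrative:

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, ell=0.3, sf=1.0, noise=0.05):
    """Posterior mean and pointwise variance of a Gaussian process with a
    squared-exponential kernel, conditioned on noisy observations."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)              # stable solves via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = k(x_test, x_train)
    v = np.linalg.solve(L, Ks.T)
    mean = Ks @ alpha
    var = np.diag(k(x_test, x_test)) - np.sum(v**2, axis=0)
    return mean, var
```

    The posterior variance contracts toward the noise level near the channels and relaxes to the prior variance away from them, which is what makes the GP a natural prior for sparsely measured edge profiles.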

  2. The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.

    Science.gov (United States)

    Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny

    2018-04-16

    We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
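    The GGM edge weights can be obtained directly by standardizing the inverse sample covariance (precision) matrix; a minimal unregularized sketch (the paper's estimators additionally impose sparsity):

```python
import numpy as np

def partial_correlations(X):
    """Estimate GGM edge weights: partial correlations between each pair
    of variables, controlling for all others, from the precision matrix."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)      # standardize and flip the sign
    np.fill_diagonal(pc, 0.0)     # no self-edges in the network
    return pc
```

    In a chain X -> Y -> Z the marginal correlation between X and Z is large, but the partial correlation vanishes, so the GGM correctly drops that edge.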

  3. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk; k=>(1,…,K>), and help identify which model is most supported by the observed data, Y>˜=>(y˜1,…,y˜n>). Here, we introduce a new and robust estimator of the model evidence, p>(Y>˜|Mk>), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p>(Y>˜|Mk>) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p>(Y>˜|Mk>) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.

  4. Low Density Supersonic Decelerator (LDSD) Supersonic Flight Dynamics Test (SFDT) Plume Induced Environment Modelling

    Science.gov (United States)

    Mobley, B. L.; Smith, S. D.; Van Norman, J. W.; Muppidi, S.; Clark, I

    2016-01-01

    Provide plume-induced heating (radiation and convection) predictions in support of the LDSD thermal design (pre-flight SFDT-1). Predict plume-induced aerodynamics in support of flight dynamics, to achieve targeted freestream conditions to test supersonic deceleration technologies (post-flight SFDT-1, pre-flight SFDT-2).

  5. Chemistry in aircraft plumes

    Energy Technology Data Exchange (ETDEWEB)

    Kraabol, A.G.; Stordal, F.; Knudsen, S. [Norwegian Inst. for Air Research, Kjeller (Norway); Konopka, P. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere

    1997-12-31

    An expanding plume model with chemistry has been used to study the chemical conversion of NO{sub x} to reservoir species in aircraft plumes. The heterogeneous conversion of N{sub 2}O{sub 5} to HNO{sub 3}(s) has been investigated for emissions taking place during night-time. The plume from a B747 has been simulated. During a ten-hour calculation the most important reservoir species was HNO{sub 3} for emissions at noon. The heterogeneous reactions had little impact on the chemical loss of NO{sub x} to reservoir species for emissions at night. (author) 4 refs.

  7. A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems

    Science.gov (United States)

    Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad

    2015-02-01

    Many biological systems, such as neurons or the heart, can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model of the distribution of the real system's attractor in state space. This cost function is defined using the likelihood score of a Gaussian mixture model (GMM) fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined as the computed likelihood score of the model time series. We have applied the proposed method to parameter estimation for two important biological systems that show chaotic behavior, a neuron and a cardiac pacemaker. Simulated experiments are presented to verify the usefulness of the proposed approach under clean and noisy conditions. The results show the adequacy of the proposed cost function.
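    A minimal version of this cost function can be sketched with the Hénon map standing in for the neuron and pacemaker models; the component count and parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def henon(n, a=1.4, b=0.3, x0=0.1, y0=0.0):
    """Generate n points of the Henon map, a simple chaotic system used
    here as a stand-in for the neuron/pacemaker models of the paper."""
    pts = np.empty((n, 2))
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        pts[i] = x, y
    return pts

# Fit a GMM to the "observed" attractor in state space.
observed = henon(3000)
gmm = GaussianMixture(n_components=8, random_state=0).fit(observed)

def cost(a_candidate):
    """Negative mean log-likelihood of a candidate trajectory under the
    GMM of the observed attractor. Unlike pointwise trajectory error,
    this score is insensitive to the candidate's initial condition."""
    trajectory = henon(1000, a=a_candidate, x0=0.2)
    return -gmm.score(trajectory)
```

    A candidate whose trajectory lies on the observed attractor scores a lower cost than one whose attractor differs, even though both start from a different initial condition than the observed data.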

  8. A multivariate multilevel Gaussian model with a mixed effects structure in the mean and covariance part.

    Science.gov (United States)

    Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel

    2014-05-20

    A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean we also assume that the covariance matrix depends on covariates and random effects. This allows one to explore whether the covariance structure depends on the values of the higher levels, and as such it models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary across countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Modelling tools for integrating geological, geophysical and contamination data for characterization of groundwater plumes

    DEFF Research Database (Denmark)

    Balbarini, Nicola

    … the contaminant plume in a shallow and a deep plume. These plumes have different chemical characteristics and different migration paths to the stream. This has implications for the risk assessment of the stream and groundwater in the area. The difficulty of determining groundwater flow paths means that it is also … receptors, including streams. Key risk assessment parameters, such as contaminant mass discharge estimates, and tools are then used to evaluate the risk. The cost of drilling often makes investigations of large and/or deep contaminant plumes unfeasible. For this reason, it is important to develop cost … organic compounds, including pharmaceutical compounds and chlorinated ethenes. The correlation between DCIP and organic compounds is indirect and depends on the chemical composition of the contaminant plume and the transport processes. Thus, the correlations are site specific and may change between …

  10. A Model for the Infrared Radiance of Optically Thin, Particulate Exhaust Plumes Generated by Pyrotechnic Flares Burning in a Vacuum

    National Research Council Canada - National Science Library

    Cohen, Douglas

    2000-01-01

    The model is used to predict how a magnesium-Teflon exhaust plume would look when viewed as an approximate point source by a distant infrared sensor, and also to analyze the data acquired from three separate magnesium-Teflon flares burned in a large vacuum chamber.

  11. Field studies of the thermal plume from the D. C. Cook submerged discharge with comparisons to hydraulic-model results

    International Nuclear Information System (INIS)

    Frigo, A.A.; Paddock, R.A.; McCown, D.L.

    1975-06-01

    The Donald C. Cook Nuclear Plant at Bridgman, Michigan, uses submerged-diffuser discharges as a means of disposing of waste heat into Lake Michigan. Preliminary results of temperature surveys of the thermal plume at the D. C. Cook Plant are presented. Indications are that the spatial extent of the plume at the surface is much smaller than that reported previously for surface shoreline discharges, particularly in the near and intermediate portions of the plume. Comparisons of limited prototype data with hydraulic (tank)-model predictions indicate that the model predictions for centerline temperature decay at the surface are too high for the initial 200 m from the discharge, but are generally correct beyond this point to the limits of the model. In addition, the hydraulic-model results underestimate the areal extent of the near and intermediate portions of the plume at the surface. Because this is the first report of a new field program, several inadequacies in the field-measurement techniques are noted and discussed. New techniques that have been developed to remedy these deficiencies, and which will be implemented in future field work, are also described. (auth)

  12. The SR Approach: a new Estimation Method for Non-Linear and Non-Gaussian Dynamic Term Structure Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Christensen, Bent Jesper

    This paper suggests a new and easy approach to estimate linear and non-linear dynamic term structure models with latent factors. We impose no distributional assumptions on the factors and they may therefore be non-Gaussian. The novelty of our approach is to use many observables (yields or bonds p...

  13. A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non-Gaussian errors

    NARCIS (Netherlands)

    Schoups, G.; Vrugt, J.A.

    2010-01-01

    Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance.

  14. Solar Coronal Plumes

    Directory of Open Access Journals (Sweden)

    Giannina Poletto

    2015-12-01

    Polar plumes are thin, long, ray-like structures that project beyond the limb of the Sun's polar regions, maintaining their identity over distances of several solar radii. Plumes were first observed in white-light (WL) images of the Sun, but, with the advent of the space era, they have been identified also at X-ray and UV wavelengths (XUV) and, possibly, even in in situ data. This review traces the history of plumes, from the time they were first imaged to the complex means by which we nowadays attempt to reconstruct their 3-D structure. Spectroscopic techniques have also allowed us to infer the physical parameters of plumes and to estimate their electron and kinetic temperatures and their densities. However, perhaps the most interesting problem we need to solve is the role they play in solar wind origin and acceleration: does the solar wind emanate from plumes or from the ambient coronal hole wherein they are embedded? Do plumes have a role in solar wind acceleration and mass loading? Answers to these questions are still somewhat ambiguous, and theoretical modeling does not provide definite answers either. Recent data, with an unprecedented high spatial and temporal resolution, provide new information on the fine structure of plumes, their temporal evolution, and their relationship with other transient phenomena that may shed further light on these elusive features.

  15. 3D Thermo-Mechanical Models of Plume-Lithosphere Interactions: Implications for the Kenya rift

    Science.gov (United States)

    Scheck-Wenderoth, M.; Koptev, A.; Sippel, J.

    2017-12-01

    We present three-dimensional (3D) thermo-mechanical models aiming to explore the interaction of an active mantle plume with heterogeneous pre-stressed lithosphere in the Kenya rift region. As shown by recent data-driven 3D gravity and thermal modeling (Sippel et al., 2017), the integrated strength of the lithosphere for the region of Kenya and northern Tanzania appears to be strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localization and propagation of rifting. In order to test this hypothesis, we have performed a series of ultra-high-resolution 3D numerical experiments that include a coupled mantle/lithosphere system in a dynamically and rheologically consistent framework. In contrast to our previous studies assuming a simple and quasi-symmetrical initial condition (Koptev et al., 2015, 2016, 2017), the complex 3D distribution of rock physical properties inferred from geological and geophysical observations (Sippel et al., 2017) has been incorporated into the model setup, which comprises a stratified three-layer continental lithosphere composed of an upper and lower crust and lithospheric mantle overlying the upper mantle. Following evidence of a broad low-velocity seismic anomaly under the central parts of the East African Rift system (e.g. Nyblade et al., 2000; Chang et al., 2015), a 200-km-radius mantle plume has been seeded at the bottom of a 635-km-deep model box, representing a thermal anomaly with a 300°C temperature excess. In all model runs, results show that the spatial distribution of surface deformation is indeed strongly controlled by crustal structure: within the southern part of the model box, a localized narrow zone stretched in the N-S direction (i.e. perpendicular to the applied far-field extension) is aligned along a structural boundary within the lower crust, whereas in the northern part of the model domain, deformation is more diffuse and its eastern limit coincides with

  16. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.

  17. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    Science.gov (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
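The "mixture of complex exponentials" covariance at the heart of this method can be written down directly. Below is a minimal NumPy sketch under illustrative parameters; it is not the authors' fast solver (their O(n) scaling comes from the semiseparable structure of this same kernel, which a dense matrix build like this does not exploit):

```python
import numpy as np

# Kernel family: k(tau) = sum_j a_j * exp(-c_j * |tau|) * cos(d_j * tau).
# The (a, c, d) values below are illustrative, not taken from the paper.

def celerite_like_kernel(tau, terms):
    """Evaluate k(tau) for a list of (a, c, d) mixture terms."""
    tau = np.abs(tau)
    return sum(a * np.exp(-c * tau) * np.cos(d * tau) for a, c, d in terms)

# One term, loosely resembling a stochastically driven damped oscillator.
terms = [(1.0, 0.5, 2.0)]

t = np.linspace(0.0, 10.0, 200)       # evenly spaced here, but not required
K = celerite_like_kernel(t[:, None] - t[None, :], terms)
K += 1e-6 * np.eye(len(t))            # small jitter for numerical conditioning

# A valid covariance must be symmetric positive definite.
eigvals = np.linalg.eigvalsh(K)
print("symmetric:", np.allclose(K, K.T), "min eigenvalue positive:", eigvals.min() > 0)
```

A dense Cholesky solve with this matrix costs O(n³); the point of the paper is that, for exactly this kernel family, the same linear algebra can be done in O(n).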

  18. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and that they can be more reliable and easier to implement in complex, multivariable plants.

  19. Dipole saturated absorption modeling in gas phase: Dealing with a Gaussian beam

    Science.gov (United States)

    Dupré, Patrick

    2018-01-01

    With the advent of new accurate and sensitive spectrometers, e.g. those combining optical cavities (for absorption enhancement), the requirement for reliable molecular transition modeling is becoming more pressing. Unfortunately, there is no trivial approach which can provide a definitive formalism allowing us to solve the coupled systems of equations associated with nonlinear absorption. Here, we propose a general approach to deal with any spectral shape of the electromagnetic field interacting with a molecular species under saturation conditions. The development is specifically applied to Gaussian-shaped beams. To make the analytical expressions tractable, approximations are proposed. Finally, two or three numerical integrations are required for describing the Lamb-dip profile. The implemented model allows us to describe saturated absorption under low-pressure conditions where transit-time broadening may dominate the collision rates. The model is applied to two specific overtone transitions of molecular acetylene. The simulated line shapes are discussed versus the collision and transit-time rates. The specific collisional and collision-free regimes are illustrated, while the Rabi frequency controls the intermediate regime. We illustrate how to recover the input parameters by fitting the simulated profiles.

  20. Using MOPITT data and a Chemistry and Transport Model to Investigate Injection Height of Plumes from Boreal Forest Fires

    Science.gov (United States)

    Hyer, E. J.; Allen, D. J.; Kasischke, E. S.; Warner, J. X.

    2003-12-01

    Trace gas emissions from boreal forest fires are a significant factor in atmospheric composition and its interannual variability. A number of recent observations of emissions plumes above individual fire events (Fromm and Servranckx, 2003; COBRA 2003; Lamarque et al., 2003; Wotawa and Trainer, 2000) suggest that vertical properties of forest fire emission plumes can be very different from fossil fuel emission plumes. Understanding and constraining the vertical properties of forest fire emission plumes and their injection into the atmosphere during fire events is critical for accurate modeling of atmospheric transport and chemistry. While excellent data have been collected in a handful of experiments on individual fire events, a systematic examination of the range of behavior observed in fire events has been hampered by the scarcity of vertical profiles of atmospheric composition. In this study, we used a high-resolution model of boreal forest fire emissions (Kasischke et al, in review) as input to the Goddard/UM CTM driven by the GEOS-3 DAS, operating at 2 by 2.5 degrees with 35 vertical levels. We modeled atmospheric injection and transport of CO emissions during the fire season of 2000 (May-September). We altered the parameters of the model to simulate a range of scenarios of plume injection, and compared the resulting output to the CO profiles from the MOPITT instrument. The results presented here pertain to the boreal forest, but our methods should be useful for atmospheric modelers hoping to more realistically model transport of emission plumes from biomass burning. References: COBRA2003: see http://www.fas.harvard.edu/~cobra/smoke_canada_030530.pdf Fromm, M. and R. Servranckx, 2003. "Stratospheric Injection of Forest Fire Emissions on August 4, 1998: A Satellite Image Analysis of the Causal Supercell Convection." Geophysical Research Abstracts 5:13118. Kasischke, E.S.; E.J. Hyer, N.H.F. French, A.I. Sukhinin, J.H. Hewson, B.J. Stocks, in review. "Carbon

  1. Investigation of Balcony Plume Entrainment

    OpenAIRE

    Liu, F.; Nielsen, Peter V.; Heiselberg, Per; Brohus, Henrik; Li, B. Z.

    2009-01-01

    An investigation of the scenarios of the spill plume and its equation is presented in this paper. The study includes two aspects, i.e., the small-scale experiment and the numerical simulation. Two balcony spill plume models are assessed by comparison with the FDS (Fire Dynamics Simulator) and small-scale model experiment results. Besides validating the spill model by experiments, the effect of different fire locations on the balcony plume is also discussed. The results show that the balcony equatio...

  2. A Gaussian mixture model for definition of lung tumor volumes in positron emission tomography

    International Nuclear Information System (INIS)

    Aristophanous, Michalis; Penney, Bill C.; Martel, Mary K.; Pelizzari, Charles A.

    2007-01-01

    The increased interest in 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in radiation treatment planning in the past five years necessitated the independent and accurate segmentation of gross tumor volume (GTV) from FDG-PET scans. In some studies the radiation oncologist contours the GTV based on a computed tomography scan, while incorporating pertinent data from the PET images. Alternatively, a simple threshold, typically 40% of the maximum intensity, has been employed to differentiate tumor from normal tissue, while other researchers have developed algorithms to aid the PET-based GTV definition. None of these methods, however, results in reliable PET tumor segmentation that can be used for more sophisticated treatment plans. For this reason, we developed a Gaussian mixture model (GMM) based segmentation technique on selected PET tumor regions from non-small cell lung cancer patients. The purpose of this study was to investigate the feasibility of using a GMM-based tumor volume definition in a robust, reliable and reproducible way. A GMM relies on the idea that any distribution, in our case a distribution of image intensities, can be expressed as a mixture of Gaussian densities representing different classes. According to our implementation, each class belongs to one of three regions in the image: the background (B), the uncertain (U) and the target (T), and from these regions we can obtain the tumor volume. User interaction in the implementation is required, but is limited to the initialization of the model parameters and the selection of an "analysis region" to which the modeling is restricted. The segmentation was developed on three and tested on another four clinical cases to ensure robustness against differences observed in the clinic. It also compared favorably with thresholding at 40% of the maximum intensity and a threshold determination function based on tumor-to-background image intensities proposed in a recent paper. The parts of the
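The three-class idea (background B, uncertain U, target T) can be sketched with an off-the-shelf 1-D Gaussian mixture on synthetic intensities. This is not the authors' implementation; the class means, voxel counts, and the "analysis region" selection step are all illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic PET-like intensities for three classes (values are invented).
rng = np.random.default_rng(0)
intensities = np.concatenate([
    rng.normal(1.0, 0.3, 4000),   # B: background voxels
    rng.normal(3.0, 0.6, 800),    # U: uncertain / transition voxels
    rng.normal(6.0, 0.8, 400),    # T: target (tumor) voxels
]).reshape(-1, 1)

# Fit a 3-component GMM and label every voxel with its most likely class.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)

# Take the component with the highest fitted mean as the tumor class.
target_class = int(np.argmax(gmm.means_.ravel()))
tumor_fraction = float(np.mean(labels == target_class))
print(f"estimated tumor fraction: {tumor_fraction:.3f}")
```

With well-separated classes, the recovered tumor fraction should be close to the 400/5200 of voxels actually drawn from the target component.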

  3. Study of asymmetry in motor areas related to handedness using the fMRI BOLD response Gaussian convolution model

    International Nuclear Information System (INIS)

    Gao Qing; Chen Huafu; Gong Qiyong

    2009-01-01

    Brain asymmetry is a phenomenon well known for handedness, and has been studied in the motor cortex. However, few studies have quantitatively assessed the asymmetrical cortical activities for handedness in motor areas. In the present study, we systematically and quantitatively investigated asymmetry in the left and right primary motor cortices during sequential finger movements using the Gaussian convolution model approach based on the functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) response. Six right-handed and six left-handed subjects were recruited to perform three types of hand movement tasks. The results for the expected value of the Gaussian convolution model showed that it took the dominant hand a longer average interval of response delay regardless of the handedness and bi- or uni-manual performance. The results for the standard deviation of the Gaussian model suggested that in the mass neurons, these intervals of the dominant hand were much more variable than those of the non-dominant hand. When comparing bi-manual movement conditions with uni-manual movement conditions in the primary motor cortex (PMC), both the expected value and standard deviation in the Gaussian function were significantly smaller (p < 0.05) in the bi-manual conditions, showing that the movement of the non-dominant hand influenced that of the dominant hand.

  5. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    Science.gov (United States)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in subsurface regions with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strong medium inhomogeneity. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to address the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.

  6. Particle-Resolved Modeling of Aerosol Mixing State in an Evolving Ship Plume

    Science.gov (United States)

    Riemer, N. S.; Tian, J.; Pfaffenberger, L.; Schlager, H.; Petzold, A.

    2011-12-01

    The aerosol mixing state is important since it impacts the particles' optical and CCN properties and thereby their climate impact. It evolves continuously during the particles' residence time in the atmosphere as a result of coagulation with other particles and condensation of secondary aerosol species. This evolution is challenging to represent in traditional aerosol models since they require the representation of a multi-dimensional particle distribution. While modal or sectional aerosol representations cannot practically resolve the aerosol mixing state for more than a few species, particle-resolved models store the composition of many individual aerosol particles directly. They thus sample the high-dimensional composition state space very efficiently and so can deal with tens of species, fully resolving the mixing state. Here we use the capabilities of the particle-resolved model PartMC-MOSAIC to simulate the evolution of particulate matter emitted from marine diesel engines and compare the results to aircraft measurements made in the English Channel in 2007 as part of the European campaign QUANTIFY. The model was initialized with values of gas concentrations and particle size distributions and compositions representing fresh ship emissions. These values were obtained from a test rig study in the European project HERCULES in 2006 using a serial four-stroke marine diesel engine operating on high-sulfur heavy fuel oil. The freshly emitted particles consisted of sulfate, black carbon, organic carbon and ash. We then tracked the particle population for several hours as it evolved undergoing coagulation, dilution with the background air, and chemical transformations in the aerosol and gas phase. This simulation was used to compute the evolution of CCN properties and optical properties of the plume on a per-particle basis. We compared our results to size-resolved data of aged ship plumes from the QUANTIFY Study in 2007 and showed that the model was able to reproduce

  7. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
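The core regression idea can be sketched with ordinary GP regression as a stand-in for the paper's fully Bayesian hierarchical model: PM2.5 is regressed on AOD plus coordinates, with an RBF kernel over lon/lat playing the role of the spatial random effect. All data below are synthetic:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic "monitoring sites": AOD drives a linear trend, a smooth latent
# surface over lon/lat stands in for the spatial random effect.
rng = np.random.default_rng(1)
n = 300
lonlat = rng.uniform(0.0, 10.0, size=(n, 2))
aod = rng.uniform(0.1, 1.5, size=n)
spatial = np.sin(lonlat[:, 0]) + np.cos(lonlat[:, 1])      # latent surface
pm25 = 20.0 + 30.0 * aod + 5.0 * spatial + rng.normal(0.0, 1.0, n)

X = np.column_stack([aod, lonlat])
kernel = 1.0 * RBF(length_scale=[1.0, 2.0, 2.0]) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X[:250], pm25[:250])

r2 = gpr.score(X[250:], pm25[250:])   # out-of-sample R², as in the paper's CV
print(f"held-out R²: {r2:.2f}")
```

The held-out R² plays the same role as the paper's cross-validation score; the paper's model additionally propagates parameter uncertainty through the Bayesian hierarchy, which this sketch does not.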

  8. FIREPLUME model for plume dispersion from fires: Application to uranium hexafluoride cylinder fires

    International Nuclear Information System (INIS)

    Brown, D.F.; Dunn, W.E.

    1997-06-01

    This report provides basic documentation of the FIREPLUME model and discusses its application to the prediction of health impacts resulting from releases of uranium hexafluoride (UF6) in fires. The model application outlined in this report was conducted for the Draft Programmatic Environmental Impact Statement for Alternative Strategies for the Long-Term Management and Use of Depleted UF6. The FIREPLUME model is an advanced stochastic model for atmospheric plume dispersion that predicts the downwind consequences of a release of toxic materials from an explosion or a fire. The model is based on the nonbuoyant atmospheric dispersion model MCLDM (Monte Carlo Lagrangian Dispersion Model), which has been shown to be consistent with available laboratory and field data. The inclusion of buoyancy and the addition of a postprocessor to evaluate time-varying concentrations lead to the current model. The FIREPLUME model, as applied to fire-related UF6 cylinder releases, accounts for three phases of release and dispersion. The first phase of release involves the hydraulic rupture of the cylinder due to heating of the UF6 in the fire. The second phase involves the emission of material into the burning fire, and the third phase involves the emission of material after the fire has died during the cool-down period. The model predicts the downwind concentration of the material as a function of time at any point downwind at or above the ground. Altogether, five fire-related release scenarios are examined in this report. For each scenario, downwind concentrations of the UF6 reaction products, uranyl fluoride and hydrogen fluoride, are provided for two meteorological conditions: (1) D stability with a 4-m/s wind speed, and (2) F stability with a 1-m/s wind speed.

  9. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    Science.gov (United States)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated as a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence-updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to bring out satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and performs excellently on aerial images.

  10. GGRaSP: A R-package for selecting representative genomes using Gaussian mixture models.

    Science.gov (United States)

    Clarke, Thomas H; Brinkac, Lauren M; Sutton, Granger; Fouts, Derrick E

    2018-04-14

    The vast number of available sequenced bacterial genomes occasionally exceeds the facilities of comparative genomic methods or is dominated by a single outbreak strain, and thus a diverse and representative subset is required. Generation of the reduced subset currently requires a priori supervised clustering and sequence-only selection of medoid genomic sequences, independent of any additional genome metrics or strain attributes. The GGRaSP R-package described below generates a reduced subset of genomes that prioritizes maintaining genomes of interest to the user as well as minimizing the loss of genetic variation. The package also allows for unsupervised clustering by modeling the genomic relationships using a Gaussian Mixture Model to select an appropriate cluster threshold. We demonstrate the capabilities of GGRaSP by generating a reduced list of 315 genomes from a genomic dataset of 4600 Escherichia coli genomes, prioritizing selection by type strain and by genome completeness. GGRaSP is available at https://github.com/JCVenterInstitute/ggrasp/. tclarke@jcvi.org. Supplementary data are available at the GitHub site.
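GGRaSP itself is an R package; the following Python analogue only sketches the idea behind its unsupervised mode: model pairwise genome distances as a Gaussian mixture and place the cluster threshold where responsibility shifts from the "within-clade" component to the "between-clade" one. The distance distributions below are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic pairwise distances: a tight mode for a dominant outbreak clade
# and a broader mode for distances across distinct clades (values invented).
rng = np.random.default_rng(3)
within = rng.normal(0.01, 0.003, 2000)
between = rng.normal(0.08, 0.01, 2000)
d = np.concatenate([within, between]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(d)
lo, hi = np.sort(gmm.means_.ravel())
low_comp = int(np.argmin(gmm.means_.ravel()))

# Scan between the two component means for the 50% responsibility point.
grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
resp = gmm.predict_proba(grid)[:, low_comp]
threshold = float(grid[np.argmax(resp < 0.5)])
print(f"cluster threshold ~ {threshold:.3f}")
```

Genomes closer than the threshold would be collapsed into one cluster, from which a single representative (e.g. the type strain or most complete genome) is kept.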

  11. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    Science.gov (United States)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
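The GMR step alone can be sketched in a few lines (the HMM that supplies state-dependent structure is omitted here): for a Gaussian mixture fitted over joint vectors (x, y), with x the antecedent flow and y the flow being forecast, the predictive density p(y | x) is again a mixture with input-dependent weights. All parameters below are illustrative, not fitted to any gauge record:

```python
import numpy as np

def gmr_predict(x, weights, means, covs):
    """Conditional mean E[y | x] for a mixture of 2-D (x, y) Gaussians."""
    resp, cond_means = [], []
    for w, m, S in zip(weights, means, covs):
        mx, my = m
        sxx, sxy = S[0, 0], S[0, 1]
        # Mixing weight: marginal likelihood of x under this component.
        resp.append(w * np.exp(-0.5 * (x - mx) ** 2 / sxx) / np.sqrt(2 * np.pi * sxx))
        # Conditional mean of y given x for a jointly Gaussian pair.
        cond_means.append(my + sxy / sxx * (x - mx))
    resp = np.asarray(resp)
    return float(np.dot(resp / resp.sum(), cond_means))

weights = [0.5, 0.5]                                  # e.g. "dry" vs "wet" states
means = [np.array([1.0, 1.0]), np.array([4.0, 5.0])]
covs = [np.array([[0.25, 0.2], [0.2, 0.25]]),
        np.array([[0.25, 0.2], [0.2, 0.25]])]

print(gmr_predict(1.0, weights, means, covs))  # near the "dry" state mean
print(gmr_predict(4.0, weights, means, covs))  # near the "wet" state mean
```

In the paper's setting, the full conditional mixture (not just its mean) provides the probabilistic forecast whose spread and reliability are then scored.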

  12. Corrective Measures Study Modeling Results for the Southwest Plume - Burial Ground Complex/Mixed Waste Management Facility

    International Nuclear Information System (INIS)

    Harris, M.K.

    1999-01-01

    Groundwater modeling scenarios were performed to support the Corrective Measures Study and Interim Action Plan for the southwest plume of the Burial Ground Complex/Mixed Waste Management Facility. The modeling scenarios were designed to provide data for an economic analysis of alternatives and subsequently to evaluate the effectiveness of the selected remedial technologies for tritium reduction to Fourmile Branch. Modeling scenarios assessed include no action; vertical barriers; pump, treat, and reinject; and vertical recirculation wells.

  13. Improving the modelling of redshift-space distortions - I. A bivariate Gaussian description for the galaxy pairwise velocity distributions

    Science.gov (United States)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2015-01-01

    As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation r, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and dispersion σ. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and non-linear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of RSD is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. We also show how this description naturally allows for the Taylor expansion of 1 + ξ^S(s) around 1 + ξ^R(r), which leads to the Kaiser linear formula when truncated to second order, explicating its connection with the moments of the velocity distribution functions. More work is needed, but these results indicate a very promising path to make definitive progress in our programme to improve RSD estimators.
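The key mechanism, that a superposition of local Gaussians with scattered (μ, σ) produces non-Gaussian pairwise-velocity tails, is easy to demonstrate numerically. The parameters below are invented for illustration, not calibrated to any simulation:

```python
import numpy as np

# Draw local (mu, sigma) pairs from a bivariate Gaussian, then superpose the
# corresponding local velocity Gaussians. Scatter in sigma generically yields
# heavier-than-Gaussian (exponential-like) tails, as described in the paper.
rng = np.random.default_rng(42)
n = 200_000

mean_musig = np.array([0.0, 3.0])          # <mu>, <sigma>
cov_musig = np.array([[1.0, 0.3],
                      [0.3, 1.0]])         # covariance of (mu, sigma)
mu, sigma = rng.multivariate_normal(mean_musig, cov_musig, size=n).T
sigma = np.abs(sigma)                      # keep dispersions positive

v = rng.normal(mu, sigma)                  # one pairwise velocity per draw

# A single Gaussian has excess kurtosis 0; the superposition exceeds it.
z = (v - v.mean()) / v.std()
excess_kurtosis = float(np.mean(z ** 4) - 3.0)
print(f"excess kurtosis of superposed distribution: {excess_kurtosis:.2f}")
```

Collapsing the (μ, σ) distribution to a point (zero covariance) recovers the single-Gaussian limiting case mentioned in the abstract, with excess kurtosis returning to zero.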

  14. Hydrogen chloride heterogeneous chemistry on frozen water particles in subsonic aircraft plume. Laboratory studies and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Persiantseva, N.V.; Popovitcheva, O.B.; Rakhimova, T.V. [Moscow State Univ. (Russian Federation)

    1997-12-31

    Heterogeneous chemistry of HCl, the main reservoir of chlorine-containing gases, has been considered after plume cooling and ice particle formation. The HCl, HNO3 and N2O5 uptake efficiencies of frozen water were obtained in a Knudsen-cell flow reactor at subsonic cruise conditions. The formation of ice particles in the plume of a subsonic aircraft is simulated to describe the kinetics of gaseous HCl loss due to heterogeneous processes. It is shown that HCl uptake by frozen water particles may play an important role in gaseous HCl depletion in the aircraft plume. (author) 14 refs.

  15. Hydrogen chloride heterogeneous chemistry on frozen water particles in subsonic aircraft plume. Laboratory studies and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Persiantseva, N V; Popovitcheva, O B; Rakhimova, T V [Moscow State Univ. (Russian Federation)

    1998-12-31

    Heterogeneous chemistry of HCl, a main reservoir of chlorine-containing gases, has been considered after plume cooling and ice particle formation. The HCl, HNO{sub 3} and N{sub 2}O{sub 5} uptake efficiencies of frozen water were obtained in a Knudsen-cell flow reactor at subsonic cruise conditions. The formation of ice particles in the plume of a subsonic aircraft is simulated to describe the kinetics of gaseous HCl loss due to heterogeneous processes. It is shown that HCl uptake by frozen water particles may play an important role in gaseous HCl depletion in the aircraft plume. (author) 14 refs.

  16. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  17. Implementing the Stochastic Model of Safety Assessment for the T.D.P. Plume Model

    International Nuclear Information System (INIS)

    Hurtado, A.; Eguilior, S.; Recreo, F.

    2015-01-01

    This paper continues the first phase of the implementation of the ABACO2G methodology (Application of Bayes to CO2 Geological Storage), a new methodological approach to determining the probabilistic component of CO2 geological storage risk assessment through the application of Bayesian networks and Monte Carlo probability. This is a fundamental but complex task, since the risk assessment requires identifying the variables that influence system performance and the probability distributions of defined events, which is highly difficult when no significant case history and/or fully developed events are available. At this stage, the stochastic evolution of the CO2 plume during the injection period has been studied. This is an essential task because the CO2 geological storage risks rely primarily on the convolution of the probability function of the CO2 plume front with the probability functions of system risk elements, such as wells or faults, that define the event space used to estimate those risks.

  18. Evaluation of remedial alternative of a LNAPL plume utilizing groundwater modeling

    International Nuclear Information System (INIS)

    Johnson, T.; Way, S.; Powell, G.

    1997-01-01

    The TIMES model was utilized to evaluate remedial options for a large LNAPL spill that was impacting the North Platte River in Glenrock, Wyoming. LNAPL was found discharging into the river from the adjoining alluvial aquifer. Subsequent investigations discovered an 18 hectare plume extended across the alluvium and into a sandstone bedrock outcrop to the south of the river. The TIMES model was used to estimate the LNAPL volume and to evaluate options for optimizing LNAPL recovery. Data collected from recovery and monitoring wells were used for model calibration. A LNAPL volume of 5.5 million L was estimated, over 3.0 million L of which is in the sandstone bedrock. An existing product recovery system was evaluated for its effectiveness. Three alternative recovery scenarios were also evaluated to aid in selecting the most cost-effective and efficient recovery system for the site. An active wellfield hydraulically upgradient of the existing recovery system was selected as most appropriate to augment the existing system in recovering LNAPL efficiently

  19. Sensitivity experiments with a one-dimensional coupled plume - iceflow model

    Science.gov (United States)

    Beckmann, Johanna; Perette, Mahé; Alexander, David; Calov, Reinhard; Ganopolski, Andrey

    2016-04-01

    Over the last few decades, the Greenland Ice Sheet mass balance has become increasingly negative, caused by enhanced surface melting and speedup of the marine-terminating outlet glaciers at the ice sheet margins. Glacier speedup has been related, among other factors, to enhanced submarine melting, which in turn is caused by warming of the surrounding ocean and, less obviously, by increased subglacial discharge. While ice-ocean processes potentially play an important role in recent and future mass balance changes of the Greenland Ice Sheet, they remain poorly understood physically. In this work we performed numerical experiments with a one-dimensional plume model coupled to a one-dimensional iceflow model. First we investigated the sensitivity of the submarine melt rate to changes in ocean properties (temperature and salinity), to the amount of subglacial discharge, and to the geometry of the glacier tongue itself. A second set of experiments investigates the response of the coupled model, i.e. the dynamical response of the outlet glacier to altered submarine melt, which results in a new glacier geometry and updated melt rates.

  20. Integration of plume and puff diffusion models/application of CFD

    Science.gov (United States)

    Mori, Akira

    The clinical symptoms of patients and other evidence from a gas poisoning accident inside an industrial building strongly suggested an abrupt influx of engine exhaust from a construction vehicle operating outside in the open air. However, the evidently high gas concentration could not be well explained by any conventional steady-state gas diffusion model. The author used an unsteady-state continuous Puff Model to simulate the temporal changes in the air stream while the pollutant gas was being continuously emitted, and successfully reproduced the observed phenomena. The author demonstrates that this diffusion formula can be solved analytically using the error function as long as the change in wind velocity is stepwise, and clarifies the differences between the unsteady and steady states and their convergence profiles. The relationship between the Puff and Plume Models is also discussed. The case study included a computational fluid dynamics (CFD) analysis to estimate the steady-state air stream and the gas concentration pattern in the affected area. It is well known that a clear definition of the boundary conditions is key to successful CFD analysis. The author describes a two-step use of CFD: a first step to define the boundary conditions and a second to determine the steady-state air stream and the gas concentration pattern.
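
    The puff-superposition idea described above can be sketched numerically: a continuous release is represented as a train of instantaneous Gaussian puffs advected by the wind, and summing them approximates the time integral whose closed form involves the error function. This is a generic textbook sketch with illustrative parameters, not the author's actual formulation:

```python
import math

def puff_concentration(q, x, y, z, xc, sx, sy, sz):
    """Concentration from a single instantaneous Gaussian puff of mass q
    centred at (xc, 0, 0) with dispersion parameters (sx, sy, sz)."""
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * ((x - xc) ** 2 / sx ** 2
                                   + y ** 2 / sy ** 2
                                   + z ** 2 / sz ** 2))

def integrated_plume(q_rate, u, t, x, y, z, sx, sy, sz, n=2000):
    """Superpose puffs emitted at rate q_rate and advected by a constant
    wind u for duration t; the discrete sum approximates the time integral
    whose closed form involves the error function for stepwise winds."""
    dt = t / n
    c = 0.0
    for i in range(n):
        ti = (i + 0.5) * dt
        c += puff_concentration(q_rate * dt, x, y, z, u * ti, sx, sy, sz)
    return c
```

    With a stepwise wind change, each step contributes one such sum, which is why the analytic solution closes in terms of erf.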

  1. Modelling the transport of suspended particulate matter by the Rhone River plume (France). Implications for pollutant dispersion

    International Nuclear Information System (INIS)

    Perianez, R.

    2005-01-01

    A model to simulate the transport of suspended particulate matter by the Rhone River plume has been developed. The model solves the 3D hydrodynamic equations, including baroclinic terms and a 1-equation turbulence model, and the suspended matter equations including advection/diffusion of particles, settling and deposition. Four particle classes are considered simultaneously according to observations in the Rhone. Computed currents, salinity and particle distributions are, in general, in good agreement with observations or previous calculations. The model also provides sedimentation rates and the distribution of different particle classes over the sea bed. It has been found that high sedimentation rates close to the river mouth are due to coarse particles that sink rapidly. Computed sedimentation rates are also similar to those derived from observations. The model has been applied to simulate the transport of radionuclides by the plume, since suspended matter is the main vector for them. The radionuclide transport model, previously described and validated, includes exchanges of radionuclides between water, suspended matter and bottom sediment described in terms of kinetic rates. A new feature is the explicit inclusion of the dependence of kinetic rates upon salinity. The model has been applied to {sup 137}Cs and {sup 239,240}Pu. Results are, in general, in good agreement with observations. - A model has been developed to simulate transport of suspended particulate matter in the Rhone River plume

  2. Interactions Between Mantle Plumes and Mid-Ocean Ridges: Constraints from Geophysics, Geochemistry, and Geodynamical Modeling

    National Research Council Canada - National Science Library

    Georgen, Jennifer

    2001-01-01

    This thesis studies interactions between mid-ocean ridges and mantle plumes. Chapter 1 investigates the effects of the Marion and Bouvet hotspots on the ultra-slow spreading, highly-segmented Southwest Indian Ridge (SWIR...

  3. Damage Detection of Refractory Based on Principle Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Full Text Available The acoustic emission (AE) technique is a common approach to identifying damage in refractories; however, classification is complex since as many as fifteen parameters are involved, which calls for effective data processing and classification algorithms to reduce the level of complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method was developed that merges parameters describing similar aspects of the damage and reduces the dimensionality. By means of principal component analysis (PCA) for dimension reduction, the fifteen related parameters can be reduced to two. These two parameters are linear combinations of the fifteen original parameters and are taken as the indexes for damage classification. Based on the proposed approach, a Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Scanning electron microscopy of the refractories verified the two types of damage.
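
    The dimension-reduction step can be sketched as follows, with synthetic stand-in data (the real study uses fifteen measured AE parameters; the subsequent grouping would apply a Gaussian mixture model selected by the Bayesian information criterion, e.g. scikit-learn's GaussianMixture, to the two scores):

```python
import numpy as np

# Synthetic stand-in for the AE data: two underlying damage factors
# observed through 15 correlated parameters, plus a little noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))          # two hidden damage factors
mixing = rng.normal(size=(2, 15))           # 15 observed AE parameters
X = latent @ mixing + 0.01 * rng.normal(size=(200, 15))

# PCA via SVD: project the 15 parameters onto 2 principal components.
Xc = X - X.mean(axis=0)                     # centre before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                      # 2-D principal-component scores

# Fraction of total variance captured by the first two components.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
```

    The two score columns then serve as the classification indexes, exactly as the abstract describes for the real fifteen-parameter data.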

  4. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.
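
    The computational appeal of diagonal a posteriori covariances can be illustrated with a minimal sketch: when every covariance in a Kalman update is (approximated as) diagonal, the analysis step decouples component-wise and costs O(n) instead of O(n³). This is a generic illustration of that structure, not the paper's variational Bayesian iteration:

```python
import numpy as np

def kf_update_diag(m_prior, p_prior, y, h_diag, r_diag):
    """One Kalman analysis step with all covariances diagonal.
    m_prior, p_prior: prior mean and diagonal covariance (1-D arrays);
    y: observation; h_diag: diagonal observation operator;
    r_diag: diagonal observation-noise covariance.
    Every line is an element-wise O(n) operation, so no full
    covariance matrix is ever formed or inverted."""
    s = h_diag ** 2 * p_prior + r_diag        # innovation variance, per component
    k = p_prior * h_diag / s                  # Kalman gain, per component
    m_post = m_prior + k * (y - h_diag * m_prior)
    p_post = (1.0 - k * h_diag) * p_prior
    return m_post, p_post
```
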

  5. Spectrum recovery method based on sparse representation for segmented multi-Gaussian model

    Science.gov (United States)

    Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan

    2016-09-01

    Hyperspectral images (HSIs) offer excellent feature discriminability by supplying diagnostic characteristics at high spectral resolution. However, various degradations, including water absorption and band-continuous noise, can adversely affect the spectral information. On the other hand, the huge data volume and strong redundancy among spectra create an intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstructing the spectral diagnostic characteristics is therefore of irreplaceable significance for subsequent applications of HSIs. This paper introduces a spectrum restoration method for HSIs making use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to represent the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we form a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the undamaged or key bands. Finally, we apply a low-rank matrix recovery (LRMR) algorithm as post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under artificial water-absorption conditions and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate its effectiveness in recovering spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometric features are well preserved.

  6. Ghost imaging and its visibility with partially coherent elliptical Gaussian Schell-model beams

    International Nuclear Information System (INIS)

    Luo, Meilan; Zhu, Weiting; Zhao, Daomu

    2015-01-01

    The performance of ghost imaging, and its visibility, with partially coherent elliptical Gaussian Schell-model beams has been studied. We have derived the condition under which the goal ghost image is achievable. Furthermore, the visibility is assessed in terms of the parameters related to the source; we find that the visibility reduces with increasing beam size, while it is a monotonically increasing function of the transverse coherence length. More specifically, it is found that the inequalities between the source sizes in the x and y directions, as well as between the transverse coherence lengths, play an important role in the ghost image and its visibility. - Highlights: • We studied the ghost image and visibility with partially coherent EGSM beams. • We derived the condition under which the goal ghost image is achievable. • The visibility is assessed in terms of the parameters related to the source. • The source sizes and coherence lengths play a role in the ghost image and visibility.

  7. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    Science.gov (United States)

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-Hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic that uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms outperform existing algorithms significantly and can yield efficient paths that yield payoffs near the optimal.

  8. Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves

    Science.gov (United States)

    Wang, Qiang; Ma, Shuxian; Yue, Dong

    2018-04-01

    Composite materials have comprehensively better properties than traditional materials and have therefore been used more and more widely, especially because of their higher strength-to-weight ratio. However, the damage of composite structures is usually varied and complicated. In order to ensure the safety of these structures, it is necessary to monitor and distinguish structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies a damage identification approach for composite structures using the Lamb wave and the Gaussian mixture model (GMM). The algorithm and principle of the GMM, and the parameter estimation, are introduced. Multiple statistical characteristic parameters of the excited Lamb waves are extracted, and a parameter space with reduced dimensions is adopted by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.

  9. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-08-13

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.

  10. Vehicle speed detection based on gaussian mixture model using sequential of images

    Science.gov (United States)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent Transportation Systems are one of the important components in the development of smart cities. Detection of vehicle speed on the highway supports traffic engineering management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows. The inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). First, we separate foreground and background using a Gaussian Mixture Model (GMM) in each frame. Then, in each frame, we calculate the location of each object and its centroid. Next, we determine the speed by computing the movement of the centroid across the sequence of frames. In the speed calculation, we only consider frames in which the centroid lies inside the predefined ROI. Finally, we transform the pixel displacement into km/hour. The system is validated by comparing the speed calculated manually with that obtained by the system. In software testing, the system detects vehicle speeds with a highest accuracy of 97.52% and a lowest accuracy of 77.41%; detection results from testing with real video footage on the road, compared against the real vehicle speeds, are also included.
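
    The final step of the pipeline, converting the centroid displacement across frames into km/hour, can be sketched as below; speed_kmh and the metres_per_pixel calibration constant are hypothetical names introduced for illustration:

```python
def speed_kmh(centroids, fps, metres_per_pixel):
    """Estimate vehicle speed from centroid positions (in pixels) in
    consecutive frames, as in the GMM-based pipeline described above.
    Assumes a fixed ground-plane scale metres_per_pixel (a hypothetical
    calibration value) and that all centroids lie inside the ROI."""
    # Accumulate the centroid path length in pixels over the frame sequence.
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        dist_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    # Elapsed time between first and last frame, then m/s -> km/h.
    seconds = (len(centroids) - 1) / fps
    return dist_px * metres_per_pixel / seconds * 3.6
```
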

  11. Multi-atlas segmentation for abdominal organs with Gaussian mixture models

    Science.gov (United States)

    Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2015-03-01

    Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been used extensively throughout medical image segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT images exhibit strong localized intensity characteristics, GMM have recently been incorporated in multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
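
    The overlap metric used in the evaluation can be sketched directly; this is the standard Dice similarity coefficient, 2|A∩B| / (|A| + |B|), applied to binary organ masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```
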

  12. Speech Enhancement Using Gaussian Mixture Models, Explicit Bayesian Estimation and Wiener Filtering

    Directory of Open Access Journals (Sweden)

    M. H. Savoji

    2014-09-01

    Full Text Available Gaussian Mixture Models (GMMs) of the power spectral densities of speech and noise are used with explicit Bayesian estimation in Wiener filtering of noisy speech. No assumption is made on the nature or stationarity of the noise. No voice activity detection (VAD) or any other means is employed to estimate the input SNR. The GMM mean vectors are used to form sets of over-determined systems of equations whose solutions lead to the first estimates of the speech and noise power spectra. The noise source is also identified and the input SNR estimated in this first step. These first estimates are then refined using approximate but explicit MMSE and MAP estimation formulations. The refined estimates are then used in a Wiener filter to reduce noise and enhance the noisy speech. The proposed schemes show good results. Moreover, it is shown that the explicit MAP solution, introduced here for the first time, reduces the computation time to less than one third, with a slightly higher improvement in SNR and PESQ score and less distortion, in comparison to the MMSE solution.
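
    The final filtering stage can be sketched as follows: once the speech and noise power spectral densities have been estimated (in the paper, from the GMM mean vectors with MMSE/MAP refinement), the Wiener gain H = S / (S + N) is applied per frequency bin. The numbers below are purely illustrative:

```python
import numpy as np

def wiener_gain(speech_psd, noise_psd):
    """Frequency-domain Wiener filter gain H = S / (S + N), applied
    bin-by-bin to the noisy spectrum once the speech (S) and noise (N)
    power spectral densities have been estimated."""
    return speech_psd / (speech_psd + noise_psd)

# Enhancement of one noisy frame spectrum (synthetic numbers):
noisy_spectrum = np.array([1.0, 2.0, 3.0])
H = wiener_gain(np.array([4.0, 1.0, 9.0]), np.array([1.0, 1.0, 1.0]))
enhanced = H * noisy_spectrum   # bins with low estimated SNR are attenuated most
```
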

  13. Monitoring the trajectory of urban nighttime light hotspots using a Gaussian volume model

    Science.gov (United States)

    Zheng, Qiming; Jiang, Ruowei; Wang, Ke; Huang, Lingyan; Ye, Ziran; Gan, Muye; Ji, Biyong

    2018-03-01

    An urban nighttime light hotspot is an ideal representation of the spatial heterogeneity of human activities within a city, and is sensitive to the regional urban expansion pattern. However, most previous studies of nighttime light imagery have focused on extracting urban extent, leaving the spatial variation of radiance intensity insufficiently explored. With the help of the globally radiance-calibrated DMSP-OLS datasets (NTLgrc), we propose an innovative framework to explore the spatio-temporal trajectory of polycentric urban nighttime light hotspots. Firstly, NTLgrc was inter-annually calibrated to improve consistency. Secondly, multi-resolution segmentation and region-growing SVM classification were employed to remove the blooming effect and to extract potential clusters. Finally, the urban hotspots were identified by a Gaussian volume model, and the resulting parameters were used to quantitatively depict hotspot features (i.e., intensity, morphology and centroid dynamics). The results show that our framework successfully captures hotspots in polycentric urban areas, with adjusted R2 values over 0.9. Meanwhile, the spatio-temporal dynamics of the hotspot features intuitively reveal the impact of the regional urban growth pattern and planning strategies on human activities. Compared to previous studies, our framework is more robust and offers an effective way to describe hotspot patterns. It also provides a more comprehensive and spatially explicit understanding of the interaction between urbanization patterns and human activities. Our findings are expected to benefit decision makers in terms of sustainable urban planning and decision making.
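
    The intensity measure behind a Gaussian volume model can be sketched as follows: an elliptical 2-D Gaussian fitted to a hotspot has the analytic volume 2π·A·σx·σy, which a numeric integral over the pixel grid should reproduce. The parameters below are illustrative, not values from the study:

```python
import numpy as np

def gaussian_surface(x, y, amp, x0, y0, sx, sy):
    """Axis-aligned elliptical 2-D Gaussian, a stand-in for one fitted
    nighttime-light hotspot with amplitude amp and widths (sx, sy)."""
    return amp * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

# Evaluate the surface on a pixel grid (0.1-unit spacing) and integrate.
x, y = np.meshgrid(np.linspace(-30, 30, 601), np.linspace(-30, 30, 601))
z = gaussian_surface(x, y, amp=50.0, x0=0.0, y0=0.0, sx=3.0, sy=5.0)
cell_area = (60 / 600) ** 2
numeric_volume = z.sum() * cell_area          # Riemann-sum approximation
analytic_volume = 2 * np.pi * 50.0 * 3.0 * 5.0  # 2*pi*A*sx*sy
```

    The volume parameter serves as the scalar intensity of the hotspot, while (x0, y0) and (sx, sy) give the centroid and morphology that the abstract's trajectory analysis tracks over time.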

  14. Gaussian Mixture Random Coefficient model based framework for SHM in structures with time-dependent dynamics under uncertainty

    Science.gov (United States)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-12-01

    The problem of vibration-based damage diagnosis in structures characterized by time-dependent dynamics under significant environmental and/or operational uncertainty is considered. A stochastic framework is postulated, consisting of a Gaussian Mixture Random Coefficient model of the uncertain time-dependent dynamics under each structural health state, proper estimation methods, and Bayesian or minimum-distance type decision making. The Random Coefficient (RC) time-dependent stochastic model with coefficients following a multivariate Gaussian Mixture Model (GMM) allows for significant flexibility in uncertainty representation. Certain model parameters are estimated via a simple procedure founded on the related Multiple Model (MM) concept, while the GMM weights are explicitly estimated to optimize damage diagnostic performance. The postulated framework is demonstrated via damage detection in a simple simulated model of a quarter-car active suspension with time-dependent dynamics and considerable uncertainty in the payload. Comparisons with a simpler Gaussian RC model based method are also presented, with the postulated framework shown to be capable of offering considerable improvement in diagnostic performance.

  15. Sediment plume model-a comparison between use of measured turbidity data and satellite images for model calibration.

    Science.gov (United States)

    Sadeghian, Amir; Hudson, Jeff; Wheater, Howard; Lindenschmidt, Karl-Erich

    2017-08-01

    In this study, we built a two-dimensional sediment transport model of Lake Diefenbaker, Saskatchewan, Canada. It was calibrated by using measured turbidity data from stations along the reservoir and satellite images based on a flood event in 2013. In June 2013, there was heavy rainfall for two consecutive days on the frozen and snow-covered ground in the higher elevations of western Alberta, Canada. The runoff from the rainfall and the melted snow caused one of the largest recorded inflows to the headwaters of the South Saskatchewan River and Lake Diefenbaker downstream. An estimated discharge peak of over 5200 m³/s arrived at the reservoir inlet with a thick sediment front within a few days. The sediment plume moved quickly through the entire reservoir and remained visible from satellite images for over 2 weeks along most of the reservoir, leading to concerns regarding water quality. The aims of this study are to compare, quantitatively and qualitatively, the efficacy of using turbidity data and satellite images for sediment transport model calibration and to determine how accurately a sediment transport model can simulate sediment transport based on each of them. Both turbidity data and satellite images were very useful for calibrating the sediment transport model quantitatively and qualitatively. Model predictions and turbidity measurements show that the flood water and suspended sediments entered upstream fairly well mixed and moved downstream as overflow with a sharp gradient at the plume front. The model results suggest that the settling and resuspension rates of sediment are directly proportional to flow characteristics and that the use of constant coefficients leads to model underestimation or overestimation unless more data on sediment formation become available. Hence, this study reiterates the significance of the availability of data on sediment distribution and characteristics for building a robust and reliable sediment transport model.

  16. Electrical Evolution of a Dust Plume from a Low Energy Lunar Impact: A Model Analog to LCROSS

    Science.gov (United States)

    Farrell, W. M.; Stubbs, T. J.; Jackson, T. L.; Colaprete, A.; Heldmann, J. L.; Schultz, P. H.; Killen, R. M.; Delory, G. T.; Halekas, J. S.; Marshall, J. R.; et al.

    2011-01-01

    A Monte Carlo test particle model was developed that simulates the charge evolution of micron- and sub-micron-sized dust grains ejected upon the low-energy impact of a moderate-size object onto a lunar polar crater floor. Our analog is the LCROSS impact into Cabeus crater. Our primary objective is to model grain discharging as the plume propagates upwards from the shadowed crater into sunlight.

  17. Subthreshold Current and Swing Modeling of Gate Underlap DG MOSFETs with a Source/Drain Lateral Gaussian Doping Profile

    Science.gov (United States)

    Singh, Kunal; Kumar, Sanjay; Goel, Ekta; Singh, Balraj; Kumar, Mirgender; Dubey, Sarvesh; Jit, Satyabrata

    2017-01-01

    This paper proposes a new model for the subthreshold current and swing of short-channel symmetric-underlap ultrathin double-gate (DG) metal-oxide-semiconductor field-effect transistors with a source/drain lateral Gaussian doping profile. A channel potential model reported earlier is utilized to formulate closed-form expressions for the subthreshold current and swing of the device. The effects of the lateral straggle and of geometrical parameters such as the channel length, channel thickness, and oxide thickness on the off-state current and subthreshold slope are demonstrated. Devices with source/drain lateral Gaussian doping profiles in the underlap structure are observed to be highly resistant to short-channel effects while improving the current drive. The proposed model is validated by comparing its results with numerical simulation data obtained using the commercially available ATLAS™, a two-dimensional (2-D) device simulator from SILVACO.
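
    The doping profile that the model assumes can be sketched as a one-line function: the source/drain doping decays laterally into the channel as a Gaussian whose width is set by the straggle parameter. The function name and numeric values below are illustrative only:

```python
import math

def lateral_doping(x, n_peak, x_front, straggle):
    """Source/drain lateral Gaussian doping profile of the kind assumed
    in the model: doping falls off from n_peak at the junction x_front
    into the channel with characteristic straggle sigma.
    N(x) = n_peak * exp(-(x - x_front)^2 / (2 * sigma^2))."""
    return n_peak * math.exp(-((x - x_front) ** 2) / (2.0 * straggle ** 2))
```

    A larger straggle extends the doping tail further into the underlap region, which is the knob whose effect on off-state current and subthreshold slope the paper quantifies.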

  18. Dispersion of a Passive Scalar Fluctuating Plume in a Turbulent Boundary Layer. Part III: Stochastic Modelling

    Science.gov (United States)

    Marro, Massimo; Salizzoni, Pietro; Soulhac, Lionel; Cassiani, Massimo

    2018-01-01

    We analyze the reliability of the Lagrangian stochastic micromixing method in predicting higher-order statistics of the passive scalar concentration induced by an elevated source (of varying diameter) placed in a turbulent boundary layer. To that purpose we analyze two different modelling approaches by testing their results against the wind-tunnel measurements discussed in Part I (Nironi et al., Boundary-Layer Meteorology, 2015, Vol. 156, 415-446). The first is a probability density function (PDF) micromixing model that simulates the effects of the molecular diffusivity on the concentration fluctuations by taking into account the background particles. The second is a new model, named VPΓ, conceived in order to minimize the computational costs. This is based on the volumetric particle approach providing estimates of the first two concentration moments with no need for the simulation of the background particles. In this second approach, higher-order moments are computed based on the estimates of these two moments and under the assumption that the concentration PDF is a Gamma distribution. The comparisons concern the spatial distribution of the first four moments of the concentration and the evolution of the PDF along the plume centreline. The novelty of this work is twofold: (i) we perform a systematic comparison of the results of micro-mixing Lagrangian models against experiments providing profiles of the first four moments of the concentration within an inhomogeneous and anisotropic turbulent flow, and (ii) we show the reliability of the VPΓ model as an operational tool for the prediction of the PDF of the concentration.

  20. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    Science.gov (United States)

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, as in fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and we adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies that are properties of the transform coefficients of natural images. An effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models, and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
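    The mixed Poisson-Gaussian noise model described above can be simulated as sketched below. This is an illustrative forward model, not the authors' code; the photon-peak and read-noise parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_gaussian_noise(image, peak=30.0, sigma=2.0):
        """Signal-dependent Poisson (shot) noise plus additive Gaussian
        read noise, as in the mixed model used for low-photon imaging.
        `peak` scales photon counts; both parameters are illustrative."""
        shot = rng.poisson(image * peak) / peak              # Poisson component
        read = rng.normal(0.0, sigma / 255.0, image.shape)   # Gaussian component
        return shot + read

    clean = np.linspace(0.0, 1.0, 64).reshape(8, 8)
    noisy = poisson_gaussian_noise(clean)
    ```

    Note that the noise variance grows with the clean intensity through the Poisson term, which is exactly what a purely Gaussian model misses.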

  1. Modelling exhaust plume mixing in the near field of an aircraft

    Directory of Open Access Journals (Sweden)

    F. Garnier

    A simplified approach has been applied to analyse the mixing and entrainment processes of the engine exhaust through their interaction with the vortex wake of an aircraft. Our investigation is focused on the near field, extending from the exit nozzle until about 30 s after the wake is generated, in the vortex phase. This study was performed by using an integral model and a numerical simulation for two large civil aircraft: a two-engine Airbus 330 and a four-engine Boeing 747. The influence of the wing-tip vortices on the dilution ratio (defined as a tracer concentration ratio) is shown. The mixing process is also affected by the buoyancy effect, but only after the jet regime, when the trapping in the vortex core has occurred. In the early wake, the engine jet location (i.e. inboard or outboard engine jet) has an important influence on the mixing rate. The plume streamlines inside the vortices are subject to distortion and stretching, and the role of the descent of the vortices on the maximum tracer concentration is discussed. Qualitative comparison with a contrail photograph shows similar features. Finally, tracer concentrations on the inboard engine centreline of the B-747 are compared with other theoretical analyses and measured data.

  2. A variational EM method for pole-zero modeling of speech with mixed block sparse and Gaussian excitation

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The modeling of speech can be used for speech synthesis and speech recognition. We present a speech analysis method based on pole-zero modeling of speech with mixed block sparse and Gaussian excitation. By using a pole-zero model, instead of the all-pole model, a better spectral fitting can be expected. Moreover, motivated by the block sparse glottal flow excitation during voiced speech and the white noise excitation for unvoiced speech, we model the excitation sequence as a combination of block sparse signals and white noise. A variational EM (VEM) method is proposed for estimating … in reconstructing the block sparse excitation.

  3. Hydrogeological modeling constraints provided by geophysical and geochemical mapping of a chlorinated ethenes plume in northern France

    Science.gov (United States)

    Razafindratsima, Stephen; Guérin, Roger; Bendjoudi, Hocine; de Marsily, Ghislain

    2014-09-01

    A methodological approach is described which combines geophysical and geochemical data to delineate the extent of a chlorinated ethenes plume in northern France; the methodology was used to calibrate a hydrogeological model of the contaminants' migration and degradation. The existence of strong reducing conditions in some parts of the aquifer is first determined by measuring in situ the redox potential and dissolved oxygen, dissolved ferrous iron and chloride concentrations. Electrical resistivity imaging and electromagnetic mapping, using the Slingram method, are then used to determine the shape of the pollutant plume. A decreasing empirical exponential relation between measured chloride concentrations in the water and aquifer electrical resistivity is observed; the resistivity formation factor calculated at a few points also shows a major contribution of chloride concentration in the resistivity of the saturated porous medium. MODFLOW software and MT3D99 first-order parent-daughter chain reaction and the RT3D aerobic-anaerobic model for tetrachloroethene (PCE)/trichloroethene (TCE) dechlorination are finally used for a first attempt at modeling the degradation of the chlorinated ethenes. After calibration, the distribution of the chlorinated ethenes and their degradation products simulated with the model approximately reflects the mean measured values in the observation wells, confirming the data-derived image of the plume.
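    The decreasing empirical exponential relation between chloride concentration and aquifer resistivity reported above can be fitted as sketched below. The calibration pairs are invented for illustration and are not the field data from the study.

    ```python
    import numpy as np

    # Hypothetical calibration pairs: aquifer resistivity (ohm.m) vs chloride (mg/L).
    rho = np.array([10.0, 20.0, 40.0, 80.0])
    cl = np.array([900.0, 410.0, 85.0, 4.0])

    # Fit Cl = a * exp(-b * rho) by linear least squares on log(Cl).
    slope, log_a = np.polyfit(rho, np.log(cl), 1)
    a, decay = np.exp(log_a), -slope

    def chloride_from_resistivity(r):
        """Predict chloride from resistivity with the fitted decay law."""
        return a * np.exp(-decay * r)
    ```

    With such a law calibrated at a few wells, a resistivity map from electrical imaging can be translated into an approximate chloride (and hence plume) map.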

  4. Experimentally Identify the Effective Plume Chimney over a Natural Draft Chimney Model

    Science.gov (United States)

    Rahman, M. M.; Chu, C. M.; Tahir, A. M.; Ismail, M. A. bin; Misran, M. S. bin; Ling, L. S.

    2017-07-01

    The demand for energy is increasing due to rapid industrialization and urbanization, and researchers are working to reduce industrial energy consumption significantly. The performance of industries such as power plants, timber processing plants, and oil refineries depends largely on the performance of their cooling tower chimneys, whether natural draft or forced draft. A chimney is used to create sufficient draft so that air can flow through it. Cold inflow, or flow reversal at the chimney exit, is one of the main problems that may alter overall plant performance. The presence of an Effective Plume Chimney (EPC) is an indication of cold-inflow-free operation of a natural draft chimney. Different mathematical model equations are used to estimate the EPC height over the heat exchanger or hot surface. This paper aims to identify the EPC experimentally. To do so, horizontal temperature profiling was performed at the exits of chimneys with face areas of 0.56 m², 1.00 m² and 2.25 m². A wire mesh screen was installed at each chimney exit to ensure cold-inflow-free operation. It was found that an EPC exists in all modified chimney models, with heights varying from 1 cm to 9 cm, whereas the mathematical models estimate EPC heights of 1 cm to 2.3 cm. A smoke test was also conducted, and it confirmed the presence of an EPC and cold-inflow-free operation of the chimney. The performance of the cold-inflow-free chimney is 50% to 90% higher than that of the normal chimney.

  5. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  6. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    International Nuclear Information System (INIS)

    Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin

    2009-01-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.

  7. The Research of Indoor Positioning Based on Double-peak Gaussian Model

    Directory of Open Access Journals (Sweden)

    Lina Chen

    2014-04-01

    Location fingerprinting using Wi-Fi signals has been very popular and is a well-accepted indoor positioning method. The key issue of the fingerprinting approach is generating the fingerprint radio map. Limited by the practical workload, only a few samples of the received signal strength are collected at each reference point. Unfortunately, few samples cannot accurately represent the actual distribution of the signal strength from each access point. This study finds that most Wi-Fi signals have two peaks. Based on this finding, a double-peak Gaussian algorithm is proposed to generate the fingerprint radio map. This approach requires little time to receive Wi-Fi signals, and it is easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function and the histogram method for generating a fingerprint radio map, this method better approximates the actual signal distribution. This paper also compares the positioning accuracy using K-Nearest Neighbour theory for the three radio maps; the test results show that the positioning distance error using the double-peak Gaussian function is smaller than with the other two methods.
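    A double-peak (two-component) Gaussian fit to received-signal-strength samples, as proposed above, can be sketched with a tiny EM loop. This is illustrative only; the paper's exact estimation procedure may differ, and the RSS values are synthetic.

    ```python
    import numpy as np

    def fit_two_gaussians(x, iters=200):
        """Tiny EM fit of a two-component (double-peak) Gaussian mixture
        to 1-D samples; a sketch of the idea, not the paper's method."""
        mu = np.array([x.min(), x.max()], float)
        var = np.full(2, x.var() + 1e-6)
        w = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibility of each peak for each sample
            d = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            r = w * d
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update weights, means and variances
            n = r.sum(axis=0)
            w = n / len(x)
            mu = (r * x[:, None]).sum(axis=0) / n
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
        return w, mu, np.sqrt(var)

    rng = np.random.default_rng(1)
    rss = np.concatenate([rng.normal(-70, 2, 300), rng.normal(-60, 2, 200)])  # dBm
    w, mu, sigma = fit_two_gaussians(rss)
    ```

    The fitted pair of means and weights is exactly the kind of per-access-point summary a double-peak radio map would store at each reference point.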

  8. Understanding and Modeling the Evolution of Critical Points under Gaussian Blurring

    NARCIS (Netherlands)

    Kuijper, A.; Florack, L.M.J.; Heyden, A.; Sparr, G.; Nielsen, M.; Johansen, P.

    2002-01-01

    In order to investigate the deep structure of Gaussian scale space images, one needs to understand the behaviour of critical points under the influence of parameter-driven blurring. During this evolution two different types of special points are encountered, the so-called scale space saddles and the

  9. Equivariant Gröbner bases and the Gaussian two-factor model

    NARCIS (Netherlands)

    Brouwer, A.E.; Draisma, J.

    2009-01-01

    We show that the kernel I of the ring homomorphism R[yij | i, j ∈ N, i > j] → R[si, ti | i ∈ N] determined by yij ↦ sisj + titj is generated by two types of polynomials: off-diagonal 3 × 3-minors and pentads. This confirms a conjecture by Drton, Sturmfels, and Sullivant on the Gaussian two-factor model.

  10. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo; El Gharamti, Mohamad; Hoteit, Ibrahim

    2016-01-01

    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior

  11. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We … with the proposed explicit noise-model extension.

  12. Differences in Gaussian diffusion tensor imaging and non-Gaussian diffusion kurtosis imaging model-based estimates of diffusion tensor invariants in the human brain.

    Science.gov (United States)

    Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N

    2016-05-01

    /RK/AK values, indicating substantial anatomical variability of these discrepancies. In the HCP dataset, the median voxelwise percentage differences across the whole white matter skeleton were (nonlinear least squares algorithm) 14.5% (8.2%-23.1%) for MD, 4.3% (1.4%-17.3%) for FA, -5.2% (-48.7% to -0.8%) for MO, 12.5% (6.4%-21.2%) for RD, and 16.1% (9.9%-25.6%) for AD (all ranges computed as 0.01 and 0.99 quantiles). All differences/trends were consistent between the discovery (HCP) and replication (local) datasets and between estimation algorithms. However, the relationships between such trends, estimated diffusion tensor invariants, and kurtosis estimates were impacted by the choice of fitting routine. Model-dependent differences in the estimation of conventional indexes of MD/FA/MO/RD/AD can be well beyond commonly seen disease-related alterations. While estimating diffusion tensor-derived indexes using the DKI model may be advantageous in terms of mitigating b-value dependence of diffusivity estimates, such estimates should not be referred to as conventional DTI-derived indexes in order to avoid confusion in interpretation as well as multicenter comparisons. In order to assess the potential and advantages of DKI with respect to DTI as well as to standardize diffusion-weighted imaging methods between centers, both conventional DTI-derived indexes and diffusion tensor invariants derived by fitting the non-Gaussian DKI model should be separately estimated and analyzed using the same combination of fitting routines.

  13. Inferring transcriptional gene regulation network of starch metabolism in Arabidopsis thaliana leaves using graphical Gaussian model

    Directory of Open Access Journals (Sweden)

    Ingkasuwan Papapit

    2012-08-01

    Abstract Background Starch serves as a temporal storage of carbohydrates in plant leaves during day/night cycles. To study transcriptional regulatory modules of this dynamic metabolic process, we conducted gene regulation network analysis based on small-sample inference of a graphical Gaussian model (GGM). Results Time-series significance analysis was applied to Arabidopsis leaf transcriptome data to obtain a set of genes that are highly regulated under a diurnal cycle. A total of 1,480 diurnally regulated genes included 21 starch metabolic enzymes, 6 clock-associated genes, and 106 transcription factors (TF). A starch-clock-TF gene regulation network comprising 117 nodes and 266 edges was constructed by GGM from these 133 significant genes that are potentially related to the diurnal control of starch metabolism. From this network, we found that β-amylase 3 (b-amy3: At4g17090), which participates in starch degradation in the chloroplast, is the most frequently connected gene (a hub gene). The robustness of the gene-to-gene regulatory network was further analyzed by TF binding site prediction and by evaluating global co-expression of TFs and target starch metabolic enzymes. As a result, two TFs, indeterminate domain 5 (AtIDD5: At2g02070) and constans-like (COL: At2g21320), were identified as positive regulators of starch synthase 4 (SS4: At4g18240). The inferred model of AtIDD5-dependent positive regulation of SS4 gene expression was experimentally supported by decreased SS4 mRNA accumulation in Atidd5 mutant plants during the light period of both short and long day conditions. COL was also shown to positively control SS4 mRNA accumulation. Furthermore, the knockout of AtIDD5 and COL led to deformation of the chloroplast and its contained starch granules. This deformity also affected the number of starch granules per chloroplast, which increased significantly in both knockout mutant lines. Conclusions In this study, we utilized a systematic approach of microarray
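    The edges of a graphical Gaussian model are determined by partial correlations derived from the inverse covariance (precision) matrix; the sketch below shows that core computation. Note that the small-sample GGM inference used in the paper requires shrinkage estimation rather than the plain matrix inverse used here; the data are synthetic.

    ```python
    import numpy as np

    def partial_correlations(data):
        """Partial-correlation matrix from the precision matrix -- the
        quantity a graphical Gaussian model thresholds to draw edges.
        Illustrative only: assumes many more samples than variables."""
        prec = np.linalg.inv(np.cov(data, rowvar=False))
        d = np.sqrt(np.diag(prec))
        pcor = -prec / np.outer(d, d)
        np.fill_diagonal(pcor, 1.0)
        return pcor

    rng = np.random.default_rng(2)
    x = rng.normal(size=(500, 1))
    # Variables 1 and 2 are tightly coupled; variable 3 is independent.
    data = np.hstack([x, x + 0.1 * rng.normal(size=(500, 1)),
                      rng.normal(size=(500, 1))])
    pcor = partial_correlations(data)
    ```

    Thresholding |pcor| then yields an edge between the coupled pair and no edge to the independent variable, mirroring how the starch-clock-TF network was assembled.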

  14. Modelling of Far-Field Mixing of Industrial Effluent Plume in Ambient ...

    African Journals Online (AJOL)

    This study sought to describe the dynamics of advective and dispersive transport of the effluent plume in the river and also to ascertain the extent of its effect from the discharge location to the downstream far-field region. A homogeneous differential equation was used as an analytical model to describe the physical process that describes the ...

  15. Modelling of transport and biogeochemical processes in pollution plumes: Vejen landfill, Denmark

    DEFF Research Database (Denmark)

    Brun, A.; Engesgaard, Peter Knudegaard; Christensen, Thomas Højlund

    2002-01-01

    A biogeochemical transport code is used to simulate leachate attenuation, biogeochemical processes, and the development of redox zones in a pollution plume downstream of the Vejen landfill in Denmark. Calibration of the degradation parameters resulted in a good agreement with the observed distribution...

  16. Modeling and validation of Ku-band signal attenuation through rocket plumes

    NARCIS (Netherlands)

    Veek, van der B.J.; Chintalapati, S.; Kirk, D.R.; Gutierrez, H.; Bun, R.F.

    2013-01-01

    Communications to and from a launch vehicle during ascent are of critical importance to the success of rocket-launch operations. During ascent, the rocket's exhaust plume causes significant interference in the radio communications between the vehicle and ground station. This paper presents an

  17. Modelling of N2-Thruster Plumes Based on Experiments in STG

    National Research Council Canada - National Science Library

    Plaehn, Klaus

    2000-01-01

    ... (no chemical reactions, constant ratio of specific heats). The essential parameter to be varied was the nozzle flow Reynolds number, the quantities to be measured were the Pitot pressure at the nozzle exit and the molecule number flux in the plume...

  18. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    Science.gov (United States)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding the existence of software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to give a list of priorities indicating which modules require more intensive testing, so that the allocation of test resources can be managed efficiently. Learning to rank is one approach that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using Chaotic Gaussian Particle Swarm Optimization get better accuracy on 5 data sets, tie on 5 data sets, and get worse on 1 data set. Thus, we conclude that the application of Chaotic Gaussian Particle Swarm Optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.
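    A PSO variant in the spirit of the approach above, combining chaotic logistic-map initialization with Gaussian perturbation of the global best, can be sketched as follows. The coefficients and the perturbation scheme are illustrative assumptions, not the authors' exact algorithm, and a simple sphere function stands in for the learning-to-rank objective.

    ```python
    import numpy as np

    def chaotic_gaussian_pso(f, dim=2, n=20, iters=100, lo=-5.0, hi=5.0):
        """Sketch of chaotic-Gaussian PSO: chaotic initialization plus a
        Gaussian jitter around the global best to escape local optima."""
        rng = np.random.default_rng(3)
        # Chaotic initialization: iterate the logistic map z <- 4z(1-z)
        z = rng.uniform(0.1, 0.9, (n, dim))
        for _ in range(50):
            z = 4.0 * z * (1.0 - z)
        pos = lo + (hi - lo) * z
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.apply_along_axis(f, 1, pos)
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            # Gaussian perturbation: resample one particle near the global best
            pos[rng.integers(n)] = g + rng.normal(0.0, 0.1, dim)
            fvals = np.apply_along_axis(f, 1, pos)
            better = fvals < pbest_f
            pbest[better], pbest_f[better] = pos[better], fvals[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, float(pbest_f.min())

    best, best_f = chaotic_gaussian_pso(lambda x: float(np.sum(x ** 2)))
    ```

    In the paper's setting, the objective evaluated here would instead score a ranking model on defect data, with each particle encoding the model's weights.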

  19. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class of mixtures of translation processes … includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions. Moreover, these processes can match a broader range of second … … the proposed Monte Carlo algorithm and compare features of translation processes and mixtures of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes.
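    A single translation process, the building block of the mixture class above, maps a stationary Gaussian sequence through the standard normal CDF Φ and a target inverse CDF. The sketch below uses an AR(1) correlation structure and an exponential marginal purely for illustration.

    ```python
    import numpy as np
    from math import erf, sqrt

    def translation_process(n=2000, rho=0.9, seed=4):
        """Sample path of a translation process: a stationary Gaussian
        AR(1) sequence mapped through Phi and the exponential inverse
        CDF, producing a non-Gaussian (exponential) marginal."""
        rng = np.random.default_rng(seed)
        g = np.empty(n)
        g[0] = rng.normal()
        for t in range(1, n):
            g[t] = rho * g[t - 1] + sqrt(1.0 - rho ** 2) * rng.normal()
        u = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in g])  # Phi(g)
        return -np.log1p(-u)  # exponential(1) marginal via inverse CDF

    x = translation_process()
    ```

    Swapping the inverse CDF changes the marginal at will; a mixture of such processes, as in the paper, further widens the attainable correlation structures.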

  20. Bayesian model averaging using particle filtering and Gaussian mixture modeling : Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive

  1. Approximate bandpass and frequency response models of the difference of Gaussian filter

    Science.gov (United States)

    Birch, Philip; Mitra, Bhargav; Bangalore, Nagachetan M.; Rehman, Saad; Young, Rupert; Chatwin, Chris

    2010-12-01

    The Difference of Gaussian (DOG) filter is widely used in optics and image processing as, among other things, an edge detection and correlation filter. It has important biological applications and appears to be part of the mammalian vision system. In this paper we analyse the filter and provide details of the full width half maximum, bandwidth and frequency response in order to aid the full characterisation of its performance.
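    A discrete 1-D DOG kernel and its band-pass frequency response can be sketched as follows; the kernel size and the two standard deviations are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def dog_kernel(size=61, sigma1=2.0, sigma2=3.0):
        """1-D Difference of Gaussian kernel (sigma2 > sigma1). Its zero
        DC component makes the Fourier response band-pass, which is why
        DOG works as an edge-detection/correlation filter."""
        x = np.arange(size) - size // 2
        g = lambda s: np.exp(-x ** 2 / (2.0 * s ** 2)) / (s * np.sqrt(2.0 * np.pi))
        return g(sigma1) - g(sigma2)

    k = dog_kernel()
    spectrum = np.abs(np.fft.rfft(k))  # magnitude of the frequency response
    ```

    The peak of `spectrum` sits at an intermediate frequency set by the two sigmas, the quantity the paper characterises via full width at half maximum and bandwidth.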

  2. Model calculations of the chemical processes occurring in the plume of a coal-fired power plant

    Energy Technology Data Exchange (ETDEWEB)

    Meagher, J F; Luria, M

    1982-02-01

    Computer simulations of the homogeneous, gas phase chemical reactions which occur in the plume of a coal-fired power plant were conducted in an effort to understand the influence of various environmental parameters on the production of secondary pollutants. Input data for the model were selected to reproduce the dilution of a plume from a medium-sized power plant. The environmental conditions chosen were characteristic of those found during mid-August in the south-eastern United States. Under most conditions examined, it was found that hydroxyl radicals were the most important species in the homogeneous conversion of stack gases into secondary pollutants. Other free radicals, such as HO₂ and CH₃O₂, exceeded the contribution of HO radicals only when high background hydrocarbon concentrations are used. The conversion rates calculated for the oxidation of SO₂ to SO₄²⁻ in these plumes were consistent with those determined experimentally. The concentrations and relative proportions of NOₓ (from the power plant) and reactive hydrocarbons (from the background air) determine, to a large extent, the plume reactivity. Free radical production is suppressed during the initial stages of dilution due to the high NOₓ levels. Significant dilution is required before a suitable mix is attained which can sustain the free radical chain processes common to smog chemistry. In most cases, the free radical concentrations were found to pass through maxima and return to background levels. Under typical summertime conditions, the hydroxyl radical concentration was found to reach a maximum at a HC/NOₓ ratio of approximately 20.

  3. Modeling the South American regional smoke plume: aerosol optical depth variability and surface shortwave flux perturbation

    Directory of Open Access Journals (Sweden)

    N. E. Rosário

    2013-03-01

    … This highlights the need to improve modelling of the regional smoke plume in order to enhance the accuracy of the radiative energy budget. An aerosol optical model based on the mean intensive properties of smoke from the southern part of the Amazon basin produced a radiative flux perturbation efficiency (RFPE) of −158 W m−2/AOD550 nm at noon. This value falls between −154 W m−2/AOD550 nm and −187 W m−2/AOD550 nm, the range obtained when spatially varying optical models were considered. The 24 h average surface radiative flux perturbation over the biomass burning season varied from −55 W m−2 close to smoke sources in the southern part of the Amazon basin and cerrado to −10 W m−2 in remote regions of the southeast Brazilian coast.

  4. Non-Gaussian statistics, classical field theory, and realizable Langevin models

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1995-11-01

    The direct-interaction approximation (DIA) to the fourth-order statistic Z ∼ ⟨(λψ²)²⟩, where λ is a specified operator and ψ is a random field, is discussed from several points of view distinct from that of Chen et al. [Phys. Fluids A 1, 1844 (1989)]. It is shown that the formula for Z_DIA already appeared in the seminal work of Martin, Siggia, and Rose [Phys. Rev. A 8, 423 (1973)] on the functional approach to classical statistical dynamics. It does not follow from the original generalized Langevin equation (GLE) of Leith [J. Atmos. Sci. 28, 145 (1971)] and Kraichnan [J. Fluid Mech. 41, 189 (1970)] (frequently described as an amplitude representation for the DIA), in which the random forcing is realized by a particular superposition of products of random variables. The relationship of that GLE to renormalized field theories with non-Gaussian corrections ("spurious vertices") is described. It is shown how to derive an improved representation, that realizes cumulants through O(ψ⁴), by adding to the GLE a particular non-Gaussian correction. A Markovian approximation Z_DIA^M to Z_DIA is derived. Both Z_DIA and Z_DIA^M incorrectly predict a Gaussian kurtosis for the steady state of a solvable three-mode example.

  5. Learning conditional Gaussian networks

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers conditional Gaussian networks. The parameters in the network are learned by using conjugate Bayesian analysis. As conjugate local priors, we apply the Dirichlet distribution for discrete variables and the Gaussian-inverse gamma distribution for continuous variables, given...... a configuration of the discrete parents. We assume parameter independence and complete data. Further, to learn the structure of the network, the network score is deduced. We then develop a local master prior procedure, for deriving parameter priors in these networks. This procedure satisfies parameter...... independence, parameter modularity and likelihood equivalence. Bayes factors to be used in model search are introduced. Finally the methods derived are illustrated by a simple example....

  6. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Holoien, Thomas W.-S.; /Ohio State U., Dept. Astron. /Ohio State U., CCAPP /KIPAC, Menlo Park /SLAC; Marshall, Philip J.; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC

    2017-05-11

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  7. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
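
The conditioning step described above is, per mixture component, ordinary Gaussian conditioning. A minimal numpy sketch of that operation on a single bivariate Gaussian (the function name and toy numbers are illustrative, not the XDGMM API):

```python
import numpy as np

# Condition a multivariate Gaussian on known values of a subset of its
# dimensions.  Partition x = (x_a, x_b) with x_b observed:
#   mean_{a|b} = mu_a + S_ab S_bb^{-1} (x_b - mu_b)
#   cov_{a|b}  = S_aa - S_ab S_bb^{-1} S_ba

def condition_gaussian(mu, cov, idx_b, x_b):
    """Return mean and covariance of x_a given x_b = x_b."""
    idx_a = [i for i in range(len(mu)) if i not in idx_b]
    mu_a, mu_b = mu[idx_a], mu[idx_b]
    S_aa = cov[np.ix_(idx_a, idx_a)]
    S_ab = cov[np.ix_(idx_a, idx_b)]
    S_bb = cov[np.ix_(idx_b, idx_b)]
    K = S_ab @ np.linalg.inv(S_bb)          # regression coefficients
    mean_cond = mu_a + K @ (x_b - mu_b)
    cov_cond = S_aa - K @ S_ab.T
    return mean_cond, cov_cond

mu = np.array([0.0, 1.0])
cov = np.array([[2.0, 1.0], [1.0, 1.0]])
m, c = condition_gaussian(mu, cov, [1], np.array([2.0]))
print(m, c)  # conditional mean 1.0, conditional variance 1.0
```

Conditioning a full mixture additionally reweights the components by how well each one explains the observed values; the sketch shows only the per-component step.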

  8. Data and Model-Driven Decision Support for Environmental Management of a Chromium Plume at Los Alamos National Laboratory - 13264

    Energy Technology Data Exchange (ETDEWEB)

    Vesselinov, Velimir V.; Broxton, David; Birdsell, Kay; Reneau, Steven; Harp, Dylan; Mishra, Phoolendra [Computational Earth Science - EES-16, Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos NM 87545 (United States); Katzman, Danny; Goering, Tim [Environmental Programs (ADEP), Los Alamos National Laboratory, Los Alamos NM 87545 (United States); Vaniman, David; Longmire, Pat; Fabryka-Martin, June; Heikoop, Jeff; Ding, Mei; Hickmott, Don; Jacobs, Elaine [Earth Systems Observations - EES-14, Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos NM 87545 (United States)

    2013-07-01

    A series of site investigations and decision-support analyses have been performed related to a chromium plume in the regional aquifer beneath the Los Alamos National Laboratory (LANL). Based on the collected data and site information, alternative conceptual and numerical models representing governing subsurface processes with different complexity and resolution have been developed. The current conceptual model is supported by multiple lines of evidence based on comprehensive analyses of the available data and modeling results. The model is applied for decision-support analyses related to estimation of contaminant-arrival locations and chromium mass flux reaching the regional aquifer, and to optimization of a site monitoring-well network. Plume characterization is a challenging and non-unique problem because multiple models and contamination scenarios are consistent with the site data and conceptual knowledge. To solve this complex problem, an advanced methodology based on model calibration and uncertainty quantification has been developed within the computational framework MADS (http://mads.lanl.gov). This work implements high-performance computing and novel, efficient and robust model analysis techniques for optimization and uncertainty quantification (ABAGUS, Squads, multi-try (multi-start) techniques), which allow for solving problems with large numbers of degrees of freedom. (authors)

  9. Numerically Modeling the Erosion of Lunar Soil by Rocket Exhaust Plumes

    Science.gov (United States)

    2008-01-01

    In preparation for the Apollo program, Leonard Roberts of the NASA Langley Research Center developed a remarkable analytical theory that predicts the blowing of lunar soil and dust beneath a rocket exhaust plume. Roberts assumed that the erosion rate was determined by the excess shear stress in the gas (the amount of shear stress greater than what causes grains to roll). The acceleration of particles to their final velocity in the gas consumes a portion of the shear stress. The erosion rate continues to increase until the excess shear stress is exactly consumed, thus determining the erosion rate. Roberts calculated the largest and smallest particles that could be eroded based on forces at the particle scale, but the erosion rate equation assumed that only one particle size existed in the soil. He assumed that particle ejection angles were determined entirely by the shape of the terrain, which acts like a ballistic ramp, with the particle aerodynamics being negligible. The predicted erosion rate and the upper limit of particle size appeared to be within an order of magnitude of small-scale terrestrial experiments but could not be tested more quantitatively at the time. The lower limit of particle size and the predictions of ejection angle were not tested. We observed in the Apollo landing videos that the ejection angles of particles streaming out from individual craters were time-varying and correlated to the Lunar Module thrust, thus implying that particle aerodynamics dominate. We modified Roberts' theory in two ways. First, we used ad hoc the ejection angles measured in the Apollo landing videos, in lieu of developing a more sophisticated method. Second, we integrated Roberts' equations over the lunar-particle size distribution and obtained a compact expression that could be implemented in a numerical code. We also added a material damage model that predicts the number and size of divots which the impinging particles will cause in hardware surrounding the landing

  10. Export of reactive nitrogen from coal-fired power plants in the U.S.: Estimates from a plume-in-grid modeling study - article no. D04308

    Energy Technology Data Exchange (ETDEWEB)

    Vijayaraghavan, K.; Zhang, Y.; Seigneur, C.; Karamchandani, P.; Snell, H.E.

    2009-02-15

    The export of reactive nitrogen (nitrogen oxides and their oxidation products, collectively referred to as NOy) from coal-fired power plants in the U.S. to the rest of the world could have a significant global contribution to ozone. Traditional Eulerian gridded air quality models cannot characterize accurately the chemistry and transport of plumes from elevated point sources such as power plant stacks. A state-of-the-science plume-in-grid (PinG) air quality model, a reactive plume model embedded in an Eulerian gridded model, is used to estimate the export of NOy from 25 large coal-fired power plants in the U.S. (in terms of NOx and SO₂ emissions) in July 2001 to the global atmosphere. The PinG model used is the Community Multiscale Air Quality Model with Advanced Plume Treatment (CMAQ-APT). A benchmark simulation with only the gridded model, CMAQ, is also conducted for comparison purposes. The simulations with and without advanced plume treatment show differences in the calculated export of NOy from the 25 plants considered, reflecting the effect of using a detailed and explicit treatment of plume transport and chemistry. The advanced plume treatment results in 31% greater simulated export of NOy compared to the purely grid-based modeling approach. The export efficiency of NOy (the fraction of NOy emitted that is exported) is predicted to be 21% without APT and 27% with APT. When considering only export through the eastern boundary across the Atlantic, CMAQ-APT predicts that the export efficiency is 24% and that 2% of NOy is exported as NOx, 49% as inorganic nitrate, and 25% as PAN. These results are in reasonably good agreement with an analysis reported in the literature of aircraft measurements over the North Atlantic.

  11. Buoyant plume calculations

    International Nuclear Information System (INIS)

    Penner, J.E.; Haselman, L.C.; Edwards, L.L.

    1985-01-01

    Smoke from raging fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in surface temperatures. However, the extent of the decrease, and even the sign of the temperature change, depend on how the smoke is distributed with altitude. We present a model capable of evaluating the initial distribution of lofted smoke above a massive fire. Calculations are shown for a two-dimensional slab version of the model and a full three-dimensional version. The model has been evaluated by simulating smoke heights for the Hamburg firestorm of 1943 and a smaller scale oil fire which occurred in Long Beach in 1958. Our plume heights for these fires are compared to those predicted by the classical Morton-Taylor-Turner theory for weakly buoyant plumes. We consider the effect of the added buoyancy caused by condensation of water-laden ground level air being carried to high altitude with the convection column, as well as the effects of background wind on the calculated smoke plume heights for several fire intensities. We find that the rise height of the plume depends on the assumed background atmospheric conditions as well as the fire intensity. Little smoke is injected into the stratosphere unless the fire is unusually intense, or atmospheric conditions are more unstable than we have assumed. For intense fires significant amounts of water vapor are condensed, raising the possibility of early scavenging of smoke particles by precipitation. 26 references, 11 figures

  12. Modelling of coastal current and thermal plume dispersion - A case study off Nagapattinam, east coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    Babu, M.T.; Vethamony, P.; Suryanarayana, A.; Gouveia, A.D.

    representing the monsoons and the transition periods are selected to study the seasonal variability of simulated currents and thermal plumes. The plume showed northward spreading during March and July and southward during December. During October the spreading...

  13. Modelling of transport and biogeochemical processes in pollution plumes: Literature review of model development

    DEFF Research Database (Denmark)

    Brun, A.; Engesgaard, Peter Knudegaard

    2002-01-01

    A literature survey shows how biogeochemical (coupled organic and inorganic reaction processes) transport models are based on considering the complete biodegradation process as either a single- or as a two-step process. It is demonstrated that some two-step process models rely on the Partial...... Equilibrium Approach (PEA). The PEA assumes the organic degradation step, and not the electron acceptor consumption step, is rate limiting. This distinction is not possible in one-step process models, where consumption of both the electron donor and acceptor are treated kinetically. A three-dimensional, two......-step PEA model is developed. The model allows for Monod kinetics and biomass growth, features usually included only in one-step process models. The biogeochemical part of the model is tested for a batch system with degradation of organic matter under the consumption of a sequence of electron acceptors...

  14. Modelling present-day basal melt rates for Antarctic ice shelves using a parametrization of buoyant meltwater plumes

    Science.gov (United States)

    Lazeroms, Werner M. J.; Jenkins, Adrian; Hilmar Gudmundsson, G.; van de Wal, Roderik S. W.

    2018-01-01

    Basal melting below ice shelves is a major factor in mass loss from the Antarctic Ice Sheet, which can contribute significantly to possible future sea-level rise. Therefore, it is important to have an adequate description of the basal melt rates for use in ice-dynamical models. Most current ice models use rather simple parametrizations based on the local balance of heat between ice and ocean. In this work, however, we use a recently derived parametrization of the melt rates based on a buoyant meltwater plume travelling upward beneath an ice shelf. This plume parametrization combines a non-linear ocean temperature sensitivity with an inherent geometry dependence, which is mainly described by the grounding-line depth and the local slope of the ice-shelf base. For the first time, this type of parametrization is evaluated on a two-dimensional grid covering the entire Antarctic continent. In order to apply the essentially one-dimensional parametrization to realistic ice-shelf geometries, we present an algorithm that determines effective values for the grounding-line depth and basal slope in any point beneath an ice shelf. Furthermore, since detailed knowledge of temperatures and circulation patterns in the ice-shelf cavities is sparse or absent, we construct an effective ocean temperature field from observational data with the purpose of matching (area-averaged) melt rates from the model with observed present-day melt rates. Our results qualitatively replicate large-scale observed features in basal melt rates around Antarctica, not only in terms of average values, but also in terms of the spatial pattern, with high melt rates typically occurring near the grounding line. The plume parametrization and the effective temperature field presented here are therefore promising tools for future simulations of the Antarctic Ice Sheet requiring a more realistic oceanic forcing.

  15. Gaussian mixture models for detection of autism spectrum disorders (ASD) in magnetic resonance imaging

    Science.gov (United States)

    Almeida, Javier; Velasco, Nelson; Alvarez, Charlens; Romero, Eduardo

    2017-11-01

    Autism Spectrum Disorder (ASD) is a complex neurological condition characterized by a triad of signs including stereotyped behaviors and verbal and non-verbal communication problems. The scientific community has been interested in quantifying anatomical brain alterations in this disorder. Several studies have focused on measuring brain cortical and sub-cortical volumes. This article presents a fully automatic method which finds differences between patients diagnosed with autism and control patients. After the usual pre-processing, a template (MNI152) is registered to an evaluated brain, which then becomes a set of regions. Each of these regions is then represented by the normalized histogram of intensities, which is approximated by a Gaussian mixture model (GMM). The gray and white matter are separated to calculate the mean and standard deviation of each Gaussian. These features are then used to train, region per region, a binary SVM classifier. The method was evaluated in an adult population aged from 18 to 35 years, from the public database Autism Brain Imaging Data Exchange (ABIDE). The highest discrimination values were found for the Right Middle Temporal Gyrus, with an Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve of 0.72.

  16. Methane Emission Estimates from Landfills Obtained with Dynamic Plume Measurements

    International Nuclear Information System (INIS)

    Hensen, A.; Scharff, H.

    2001-01-01

    Methane emissions from 3 different landfills in the Netherlands were estimated using a mobile Tuneable Diode Laser (TDL) system. The methane concentration in the cross section of the plume is measured downwind of the source on a transect perpendicular to the wind direction. A Gaussian plume model was used to simulate the concentration levels at the transect. The emission from the source is calculated from the measured and modelled concentration levels. Calibration of the plume dispersion model is done using a tracer (N₂O) that is released from the landfill and measured simultaneously with the TDL system. The emission estimates ranged from 3.6 to 16 m³ ha⁻¹ hr⁻¹ for the different sites. The emission levels were compared to emission estimates based on landfill gas production models. This comparison suggests oxidation rates that are up to 50% in spring and negligible in November. At one of the three sites measurements were performed in campaigns in 3 consecutive years. Comparison of the emission levels in the first and second year showed a reduction of the methane emission of about 50% due to implementation of a gas extraction system. From the second to the third year emissions increased by a factor of 4 due to new land filling. Furthermore, measurements were performed in winter when oxidation efficiency was reduced. This paper describes the measurement technique used, and discusses the results of the experimental sessions that were performed
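
The inversion described above rests on the standard ground-reflected Gaussian plume formula. A minimal numpy sketch, with a crude linear sigma-growth assumption standing in for the Pasquill-Gifford stability curves used in practice (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration [g/m^3].
    Q: emission rate [g/s], u: wind speed [m/s], x, y, z: downwind,
    crosswind and vertical coordinates [m], H: effective source height [m].
    sigma_y = a*x, sigma_z = b*x is a crude linear-growth assumption."""
    sy, sz = a * x, b * x
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                + np.exp(-(z + H)**2 / (2 * sz**2)))  # image source: ground reflection
    return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Centreline ground-level concentration 500 m downwind of a 10 m source:
c = plume_concentration(Q=1.0, u=4.0, x=500.0, y=0.0, z=0.0, H=10.0)
print(f"{c:.2e} g/m^3")
```

In a transect measurement like the one above, this formula is evaluated along the crosswind line and Q is scaled until the modelled concentrations match the measured ones, with the tracer release fixing the dispersion parameters.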

  17. Analytical modeling of subthreshold current and subthreshold swing of Gaussian-doped strained-Si-on-insulator MOSFETs

    International Nuclear Information System (INIS)

    Rawat, Gopal; Kumar, Sanjay; Goel, Ekta; Kumar, Mirgender; Jit, S.; Dubey, Sarvesh

    2014-01-01

    This paper presents the analytical modeling of the subthreshold current and subthreshold swing of short-channel fully-depleted (FD) strained-Si-on-insulator (SSOI) MOSFETs having a vertical Gaussian-like doping profile in the channel. The subthreshold current and subthreshold swing have been derived using the parabolic approximation method. In addition to the effect of strain on the silicon layer, various other device parameters such as channel length (L), gate-oxide thickness (t_ox), strained-Si channel thickness (t_s-Si), peak doping concentration (N_P), projected range (R_p) and straggle (σ_p) of the Gaussian profile have been considered while predicting the device characteristics. The present work may help to overcome the degradation in subthreshold characteristics with strain engineering. These subthreshold current and swing models provide valuable information for strained-Si MOSFET design. The accuracy of the proposed models is verified using the commercially available ATLAS™, a two-dimensional (2D) device simulator from SILVACO. (semiconductor devices)
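
The vertical Gaussian-like doping profile referred to above is conventionally written N(y) = N_P exp(−(y − R_p)² / (2σ_p²)), peaking at depth R_p with spread σ_p. A small sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def doping_profile(y, N_P=1e18, R_p=10e-9, sigma_p=5e-9):
    """Gaussian doping profile [cm^-3] at depth y [m] into the channel.
    N_P: peak concentration, R_p: projected range, sigma_p: straggle.
    All default values are illustrative, not from the paper."""
    return N_P * np.exp(-(y - R_p) ** 2 / (2 * sigma_p ** 2))

print(doping_profile(10e-9))  # peak value N_P = 1e+18 at y = R_p
print(doping_profile(15e-9))  # one straggle deeper: N_P * exp(-1/2)
```

In the analytical model, this profile enters the 2D Poisson equation for the channel potential; the parabolic approximation then yields closed-form subthreshold current and swing expressions in terms of N_P, R_p and σ_p.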

  18. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models.

    Science.gov (United States)

    Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L

    2016-02-07

    A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array readout by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5-82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial

  19. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models

    International Nuclear Information System (INIS)

    Schellenberg, Graham; Goertzen, Andrew L; Stortz, Greg

    2016-01-01

    A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x–y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array readout by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5–82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial

  20. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models

    Science.gov (United States)

    Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.

    2016-02-01

    A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array readout by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5-82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial
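
The per-crystal fitting step in the records above reduces, in one dimension, to expectation-maximization for a Gaussian mixture. A toy numpy EM sketch for two well-separated peaks (the thin-plate-spline prediction and region-growing logic of the paper are not reproduced):

```python
import numpy as np

# Synthetic "flood histogram" data: two crystal peaks at -2 and +2.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

# EM for a two-component 1-D Gaussian mixture.
mu = np.array([-1.0, 1.0])          # initial peak locations
sig = np.array([1.0, 1.0])          # initial widths
w = np.array([0.5, 0.5])            # initial mixing weights
for _ in range(50):
    # E-step: responsibility of each component for each event
    pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means and widths from the responsibilities
    Nk = r.sum(axis=0)
    w = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)

print(np.sort(mu))  # both peak locations recovered (near -2 and 2)
```

The paper's algorithm does the same kind of fit in two dimensions, but grows the fitted region outward and uses the spline-predicted positions as initial guesses, which keeps the EM well initialized for hundreds of closely spaced peaks.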

  1. Scale dependence of the halo bias in general local-type non-Gaussian models I: analytical predictions and consistency relations

    International Nuclear Information System (INIS)

    Nishimichi, Takahiro

    2012-01-01

    The large-scale clustering pattern of biased tracers is known to be a powerful probe of the non-Gaussianities in the primordial fluctuations. The so-called scale-dependent bias has been reported in various types of models of primordial non-Gaussianities. We focus on local-type non-Gaussianities, and unify the derivations in the literature of the scale-dependent bias in the presence of multiple Gaussian source fields as well as higher-order coupling to cover the models described by the frequently-discussed f_NL, g_NL and t_NL parameterization. We find that the resultant power spectrum is characterized by two parameters responsible for the shape and the amplitude of the scale-dependent bias, in addition to the Gaussian bias factor. We show how (a generalized version of) the Suyama-Yamaguchi inequality between f_NL and t_NL can directly be accessed from the observed power spectrum through the dependence on our new parameter which controls the shape of the scale-dependent bias. The other parameter, for the amplitude of the scale-dependent bias, is shown to be useful for distinguishing the simplest quadratic non-Gaussianities (i.e., f_NL-type) from higher-order ones (g_NL and higher), if one measures it from multiple species of galaxies or clusters of galaxies. We discuss the validity and limitations of our analytic results by comparison with numerical simulations in an accompanying paper

  2. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    Science.gov (United States)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10⁴-10⁵ Mpc³, corresponding to ~10²-10³ potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net⁻⁶. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
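
The Dirichlet-process prior behind such a mixture model is often realized by stick-breaking: component weights w_k = v_k ∏_{j<k}(1 − v_j) with v_k ~ Beta(1, α), so the mixture does not need a fixed number of components. A brief numpy sketch (the α value and truncation level are illustrative choices, not values from the paper):

```python
import numpy as np

def stick_breaking(alpha, K, rng):
    """Draw the first K stick-breaking weights of a Dirichlet process.
    Each v_k ~ Beta(1, alpha) breaks off a fraction of the remaining stick."""
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.cumprod(np.concatenate([[1.0], 1.0 - v[:-1]]))
    return v * remaining

rng = np.random.default_rng(42)
w = stick_breaking(alpha=2.0, K=20, rng=rng)
print(w.sum())  # close to 1; the deficit is mass beyond the truncation
```

Larger α spreads mass over more components; for posterior density estimation each weight is paired with a Gaussian whose parameters are also inferred from the posterior samples.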

  3. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    Science.gov (United States)

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence: that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth, even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Experimental Melting Study of Basalt-Peridotite Hybrid Source: Melting model of Hawaiian plume

    Science.gov (United States)

    Takahashi, E.; Gao, S.

    2015-12-01

    Eclogite component entrained in an ascending plume is considered to be essentially important in producing flood basalts (e.g., Columbia River basalt, Takahashi et al., 1998 EPSL), alkalic OIBs (e.g., Kogiso et al., 2003), ferro-picrites (Tuff et al., 2005) and Hawaiian shield lavas (e.g., Hauri, 1996; Takahashi & Nakajima, 2002; Sobolev et al., 2005). The size of the entrained eclogite, which controls the reaction rates with ambient peridotite, however, is very difficult to constrain using geophysical observation. Among Hawaiian shield volcanoes, Koolau is the most enriched end-member in eclogite component (Frey et al., 1994). Reconstruction of Koolau volcano based on submarine study of the Nuuanu landslide (AGU Monograph vol. 128, 2002, Takahashi, Garcia, Lipman eds.) revealed that silica-rich tholeiite appeared only at the last stage (Makapuu stage) of Koolau volcano. Chemical compositions of lavas as well as isotopes change abruptly and coherently across a horizon (Shinozaki et al. and Tanaka et al., ibid.). Based on these observations, Takahashi & Nakajima (2002, ibid.) proposed that the Makapuu stage lava in Koolau volcano was supplied from a single large eclogite block. In order to study the melting process in the Hawaiian plume, high-pressure melting experiments were carried out under dry and hydrous conditions with layered eclogite/peridotite starting materials. Details of our experiments will be given by Gao et al. (2015 AGU). Combining previous field observations with a new set of experiments, we propose that variation in SiO2 among Hawaiian tholeiites represents varying degrees of wall-rock interaction between eclogite and ambient peridotite. Makapuu stage lavas in Koolau volcano represent eclogite partial melts formed at ~3 GPa with various amounts of xenocrystic olivines derived from the Pacific plate. In other words, we propose that "primary magma" in the melting column of the Hawaiian plume ranges from basaltic andesite to ferro-picrite depending on the lithology of the source. Solidus of

  5. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables. Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
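
    The core computation behind Gaussian process regression can be illustrated with a minimal sketch (not code from the book): a zero-mean GP prior with a squared-exponential kernel is conditioned on noisy observations to give a posterior mean at new inputs.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    # Posterior mean of a zero-mean GP conditioned on noisy observations.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 2.0 * np.pi, 30)
y = np.sin(x)
print(gp_posterior_mean(x, y, np.array([np.pi / 2])))  # close to sin(pi/2) = 1
```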

  6. Generalization of the Gaussian electrostatic model: Extension to arbitrary angular momentum, distributed multipoles, and speedup with reciprocal space methods

    Science.gov (United States)

    Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.

    2006-11-01

    The simulation of biological systems by means of current empirical force fields presents shortcomings due to their lack of accuracy, especially in the description of the nonbonded terms. We have previously introduced a force field based on density fitting termed the Gaussian electrostatic model-0 (GEM-0) [J.-P. Piquemal et al., J. Chem. Phys. 124, 104101 (2006)] that improves the description of the nonbonded interactions. GEM-0 relies on density fitting methodology to reproduce each contribution of the constrained space orbital variation (CSOV) energy decomposition scheme, by expanding the electronic density of the molecule in s-type Gaussian functions centered at specific sites. In the present contribution we extend the Coulomb and exchange components of the force field to auxiliary basis sets of arbitrary angular momentum. Since the basis functions with higher angular momentum have directionality, a reference molecular frame (local frame) formalism is employed for the rotation of the fitted expansion coefficients. In all cases the intermolecular interaction energies are calculated by means of Hermite Gaussian functions using the McMurchie-Davidson [J. Comput. Phys. 26, 218 (1978)] recursion to calculate all the required integrals. Furthermore, the use of Hermite Gaussian functions allows a point multipole decomposition determination at each expansion site. Additionally, the issue of computational speed is investigated by reciprocal space based formalisms which include the particle mesh Ewald (PME) and fast Fourier-Poisson (FFP) methods. Frozen-core (Coulomb and exchange-repulsion) intermolecular interaction results for ten stationary points on the water dimer potential-energy surface, as well as a one-dimensional surface scan for the canonical water dimer, formamide, stacked benzene, and benzene-water dimers, are presented. All results show reasonable agreement with the corresponding CSOV calculated reference contributions, around 0.1 and 0.15 kcal/mol error for

  7. An Improved Mixture-of-Gaussians Background Model with Frame Difference and Blob Tracking in Video Stream

    Directory of Open Access Journals (Sweden)

    Li Yao

    2014-01-01

    Full Text Available Modeling the background and segmenting moving objects are significant techniques for computer vision applications. The Mixture-of-Gaussians (MoG) background model is commonly used for foreground extraction in video streams. However, when objects enter the scene and stay for a while, foreground extraction fails because the stationary objects gradually merge into the background. In this paper, we adopt a blob tracking method to cope with this situation. To construct the MoG model more quickly, we add a frame-difference method to the foreground extracted from MoG for very crowded situations. Moreover, a new shadow removal method based on the RGB color space is proposed.
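
    The per-pixel MoG background update at the heart of such methods can be sketched as follows (a minimal version in the spirit of the classic Stauffer-Grimson scheme, not the paper's exact formulation; all parameter values are illustrative):

```python
import numpy as np

class PixelMoG:
    """Minimal per-pixel Mixture-of-Gaussians background model (greyscale)."""
    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5, w_bg=0.2):
        self.w = np.full(k, 1.0 / k)          # component weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means
        self.var = np.full(k, 225.0)          # component variances
        self.alpha, self.match_sigmas, self.w_bg = alpha, match_sigmas, w_bg

    def update(self, x):
        """Update with pixel value x; return True if x looks like background."""
        d = np.abs(x - self.mu)
        matched = d < self.match_sigmas * np.sqrt(self.var)
        if not matched.any():
            i = np.argmin(self.w)             # replace the weakest component
            self.mu[i], self.var[i], self.w[i] = x, 225.0, 0.05
            self.w /= self.w.sum()
            return False                      # unmatched pixel -> foreground
        i = np.argmin(np.where(matched, d, np.inf))
        self.w = (1.0 - self.alpha) * self.w
        self.w[i] += self.alpha
        self.mu[i] += self.alpha * (x - self.mu[i])
        self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
        self.w /= self.w.sum()
        # A match counts as background only if its component carries weight.
        return bool(self.w[i] > self.w_bg)

pix = PixelMoG()
for _ in range(200):
    pix.update(100.0)                          # static background pixel
print(pix.update(100.0), pix.update(200.0))    # True False
```

    The failure mode the paper targets follows directly from the update rule: a stopped object that keeps feeding the same value accumulates weight and eventually passes the background test, which is why blob tracking is added on top.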

  8. Radiative modeling and characterization of aerosol plumes hyper-spectral imagery; Modelisation radiative et caracterisation des panaches d'aerosols en imagerie hyperspectrale

    Energy Technology Data Exchange (ETDEWEB)

    Alakian, A

    2008-03-15

    This thesis aims at characterizing aerosols from plumes (biomass burning, industrial discharges, etc.) with hyper-spectral imagery. We want to estimate the optical properties of the emitted particles as well as their micro-physical properties such as number, size distribution and composition. To reach this goal, we have built a forward semi-analytical model, named APOM (Aerosol Plume Optical Model), which simulates the radiative effects of aerosol plumes in the spectral range 0.4-2.5 µm for nadir-viewing sensors. The mathematical formulation and model coefficients are obtained from simulations performed with the radiative transfer code COMANCHE. APOM is assessed on simulated data and proves to be accurate, with modeling errors between 1% and 3%. Three retrieval methods using APOM have been developed: L-APOM, M-APOM and A-APOM. These methods take advantage of the spectral and spatial dimensions in hyper-spectral images. L-APOM and M-APOM assume a priori knowledge of the particles but can estimate their optical and micro-physical properties; their performance on simulated data is quite promising. The A-APOM method does not require any a priori knowledge of the particles but only estimates their optical properties; however, it still needs improvement before being usable. On real images, inversion provides satisfactory results for plumes above water but encounters difficulties for plumes above vegetation, which points to possible improvements of the retrieval algorithm. (author)

  9. Plume and Dose Modeling Performed to Assess Waste Management Enhancements Associated with Envirocare's Decision to Purchase an Engineered Rail Rollover Facility Enclosure

    International Nuclear Information System (INIS)

    Rogers, T.; Clayman, B.

    2003-01-01

    This paper describes the modeling performed on a proposed enclosure for the existing railcar rollover facility located in Clive, Utah at a radioactive waste disposal site owned and operated by Envirocare of Utah, Inc. (Envirocare). The dose and plume modeling information was used as a tool to justify the decision to make the capital purchase and realize the modeled performance enhancements.

  10. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    Science.gov (United States)

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2014-12-06

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
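
    The decomposition underlying ex-Gaussian modeling can be sketched with synthetic data (illustrative parameter values, not the study's): reaction times are modeled as a Gaussian component plus an exponential tail, and the parameters can be recovered from the first three moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated reaction times: Gaussian stage plus exponential tail
# (values in ms are illustrative, not taken from the study).
mu, sigma, tau = 400.0, 40.0, 120.0
rt = rng.normal(mu, sigma, 1_000_000) + rng.exponential(tau, 1_000_000)

# Method-of-moments recovery: for the ex-Gaussian, mean = mu + tau,
# variance = sigma**2 + tau**2, and third central moment = 2 * tau**3.
m3 = np.mean((rt - rt.mean()) ** 3)
tau_hat = np.cbrt(m3 / 2.0)
sigma_hat = np.sqrt(rt.var() - tau_hat ** 2)
mu_hat = rt.mean() - tau_hat
print(mu_hat, sigma_hat, tau_hat)  # close to 400, 40, 120
```

    In this parameterization, sigma captures the "Gaussian variability" that the SRD account ties to activation state, while tau captures the heavy non-Gaussian tail; fitting in practice is usually done by maximum likelihood (e.g., SciPy's exponnorm) rather than moments.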

  11. Gaussian curvature elasticity determined from global shape transformations and local stress distributions: a comparative study using the MARTINI model.

    Science.gov (United States)

    Hu, Mingyang; de Jong, Djurre H; Marrink, Siewert J; Deserno, Markus

    2013-01-01

    We calculate the Gaussian curvature modulus of a systematically coarse-grained (CG) one-component lipid membrane by applying the method recently proposed by Hu et al. [Biophys. J., 2012, 102, 1403] to the MARTINI representation of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC). We find an elastic ratio between the Gaussian and the mean curvature modulus of -1.04 ± 0.03 for the bilayer and deduce -0.98 ± 0.09 for the monolayer elastic ratio, where the latter is based on plausible assumptions for the distance z0 of the monolayer neutral surface from the bilayer midplane and the spontaneous lipid curvature K0m. By also analyzing the lateral stress profile σ0(z) of our system, two other lipid types, and pertinent data from the literature, we show that determining K0m and the Gaussian modulus through the first and second moments of σ0(z) gives rise to physically implausible values for these observables. This discrepancy, which we previously observed for a much simpler CG model, suggests that the moment conditions derived from simple continuum assumptions miss the effect of physically important correlations in the lipid bilayer.

  12. Small rocket exhaust plume data

    Science.gov (United States)

    Chirivella, J. E.; Moynihan, P. I.; Simon, W.

    1972-01-01

    During recent cryodeposit tests with a 0.18-N thruster, the mass flux in the plume back field was measured for the first time for nitrogen, carbon dioxide, and a mixture of nitrogen, hydrogen, and ammonia at various inlet pressures. This mixture simulated the gases that would be generated by a hydrazine plenum attitude propulsion system. The measurements furnish a base upon which to build a mathematical model of plume back flow that will be used in predicting the mass distribution in the boundary region of other plumes. The results are analyzed and compared with existing analytical predictions.

  13. On the significance of contaminant plume-scale and dose-response models in defining hydrogeological characterization needs

    Science.gov (United States)

    de Barros, F.; Rubin, Y.; Maxwell, R.; Bai, H.

    2007-12-01

    Defining rational and effective hydrogeological data acquisition strategies is of crucial importance, since the financial resources available for such efforts are always limited. Usually such strategies are developed with the goal of reducing uncertainty, but less often are they developed in the context of the impacts of that uncertainty. This paper presents an approach for determining site characterization needs based on human health risk factors. The main challenge is in striking a balance between improved definition of hydrogeological, behavioral and physiological parameters. Striking this balance can provide clear guidance on setting priorities for data acquisition and for better estimating adverse health effects in humans. This paper addresses this challenge through theoretical developments and numerical testing. We report on a wide range of factors that affect site characterization needs, including the contaminant plume's dimensions, travel distances and other length scales that characterize the transport problem, as well as health risk models. We introduce a new graphical tool that allows one to investigate the relative impact of hydrogeological and physiological parameters on risk. Results show that the impact of uncertainty reduction in the risk-related parameters decreases with increasing distance from the contaminant source. Results also indicate that human health risk becomes less sensitive to hydrogeological measurements when dealing with ergodic plumes, suggesting that under ergodic conditions uncertainty reduction in human health risk may benefit more from a better understanding of the physiological component than from a detailed hydrogeological characterization.

  14. Frequency Characteristics of Surface Wave Generated by Single-Line Pulsed Laser Beam with Two Kinds of Spatial Energy Profile Models: Gaussian and Square-Like

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Ho Geon; Kim, Myung Hwan; Choi, Sung Ho; Kim, Chung Seok; Jhang, Kyung Young [Hanyang University, Seoul (Korea, Republic of)

    2012-08-15

    Using a single-line pulsed laser beam is a well-known noncontact method for generating a directional surface acoustic wave. In this method, different laser-beam energy profiles produce different waveforms and frequency characteristics. In this paper, we considered two typical laser-beam energy profiles, Gaussian and square-like, to identify differences in their frequency characteristics. To achieve this, mathematical models were first proposed for the Gaussian and square-like profiles respectively, both of which depend on the laser-beam width. To verify the theoretical models, experimental setups with a cylindrical lens and a line-slit mask were designed to produce line laser beams with Gaussian and square-like spatial energy profiles, respectively. The frequency responses of the theoretical models showed good agreement with the experimental results in terms of the existence of harmonic frequency components and the shift of the first peak frequencies toward lower values.

  15. Effects of wildland fire smoke on a tree-roosting bat: integrating a plume model, field measurements, and mammalian dose-response relationships

    Science.gov (United States)

    M.B. Dickinson; J.C. Norris; A.S. Bova; R.L. Kremens; V. Young; M.J. Lacki

    2010-01-01

    Faunal injury and mortality in wildland fires is a concern for wildlife and fire management although little work has been done on the mechanisms by which exposures cause their effects. In this paper, we use an integral plume model, field measurements, and models of carbon monoxide and heat effects to explore risk to tree-roosting bats during prescribed fires in mixed-...

  16. Modeling Multiple-Core Updraft Plume Rise for an Aerial Ignition Prescribed Burn by Coupling Daysmoke with a Cellular Automata Fire Model

    Science.gov (United States)

    G. L Achtemeier; S. L. Goodrick; Y. Liu

    2012-01-01

    Smoke plume rise is critically dependent on plume updraft structure. Smoke plumes from landscape burns (forest and agricultural burns) are typically structured into “sub-plumes” or multiple-core updrafts with the number of updraft cores depending on characteristics of the landscape, fire, fuels, and weather. The number of updraft cores determines the efficiency of...

  17. Gaussian-Based Smooth Dielectric Function: A Surface-Free Approach for Modeling Macromolecular Binding in Solvents

    Directory of Open Access Journals (Sweden)

    Arghya Chakravorty

    2018-03-01

    Full Text Available Conventional techniques for modeling macromolecular solvation and its effect on binding, in the framework of Poisson-Boltzmann based implicit solvent models, make use of a geometrically defined surface to depict the separation of the macromolecular interior (low dielectric constant) from the solvent phase (high dielectric constant). Though this simplification saves time and computational resources without significantly compromising the accuracy of free energy calculations, it bypasses some of the key physico-chemical properties of the solute-solvent interface, e.g., the altered flexibility of water molecules and of side chains at the interface, which results in dielectric properties different from both bulk water and the macromolecular interior, respectively. Here we present a Gaussian-based smooth dielectric model, an inhomogeneous dielectric distribution model that mimics the effect of macromolecular flexibility and captures the altered properties of surface-bound water molecules. Thus, the model delivers a smooth transition of dielectric properties from the macromolecular interior to the solvent phase, eliminating any unphysical surface separating the two phases. Using various examples of macromolecular binding, we demonstrate its utility and illustrate the comparison with the conventional two-dielectric model. We also showcase some additional abilities of this model, viz. to account for the effect of electrolytes in the solution and to render the distribution profile of water across a lipid membrane.

  18. An Improved Gaussian Mixture Model for Damage Propagation Monitoring of an Aircraft Wing Spar under Changing Structural Boundary Conditions

    Science.gov (United States)

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Fang, Fang

    2016-01-01

    Structural Health Monitoring (SHM) technology is considered a key technology to reduce maintenance costs while ensuring the operational safety of aircraft structures. It has gradually developed from theoretical and fundamental research to real-world engineering applications in recent decades. Reliable damage monitoring under time-varying conditions is a main issue for the aerospace engineering applications of SHM technology. Among the existing SHM methods, the Guided Wave (GW) and piezoelectric sensor-based SHM technique is promising due to its high damage sensitivity and long monitoring range; nevertheless, its reliability problem should be addressed. Several methods, including environmental parameter compensation, baseline signal dependency reduction and data normalization, have been well studied, but limitations remain. This paper proposes a damage propagation monitoring method based on an improved Gaussian Mixture Model (GMM). It can be used on-line without any structural mechanical model or a priori knowledge of damage and time-varying conditions. With this method, a baseline GMM is first constructed from the GW features obtained under time-varying conditions while the monitored structure is in the healthy state. When a new GW feature is obtained during the on-line damage monitoring process, the GMM is updated by an adaptive migration mechanism including dynamic learning and Gaussian component split-merge. The mixture probability distribution structure of the GMM and the number of Gaussian components can be optimized adaptively, yielding an on-line GMM. Finally, a best-match-based Kullback-Leibler (KL) divergence is studied to measure the migration degree between the baseline GMM and the on-line GMM, revealing the weak cumulative changes of damage propagation mixed into the time-varying influence. A wing spar of an aircraft is used to validate the proposed method. The results indicate that the crack
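
    The monitoring idea of comparing a baseline GMM against an on-line GMM can be sketched with a plain Monte Carlo KL-divergence estimate (the paper uses a best-match variant; the 1-D mixtures below are illustrative, since GMM KL divergence has no closed form):

```python
import numpy as np

rng = np.random.default_rng(1)

def gmm_pdf(x, weights, means, stds):
    # Density of a 1-D Gaussian mixture evaluated at points x.
    x = np.asarray(x)[:, None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return comp @ weights

def gmm_sample(n, weights, means, stds):
    # Draw n samples from the mixture.
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[idx], stds[idx])

def kl_mc(p, q, n=100_000):
    # Monte Carlo estimate of KL(p || q) using samples from p.
    xs = gmm_sample(n, *p)
    return np.mean(np.log(gmm_pdf(xs, *p) / gmm_pdf(xs, *q)))

baseline = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
drifted  = (np.array([0.5, 0.5]), np.array([-1.0, 1.8]), np.array([0.5, 0.6]))

print(kl_mc(baseline, baseline))  # 0: no migration
print(kl_mc(baseline, drifted))   # > 0: the feature distribution has migrated
```

    A growing KL value over successive on-line updates would flag cumulative migration of the GW features, i.e., possible damage propagation, even when individual features fluctuate with the time-varying conditions.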

  19. Integrating a street-canyon model with a regional Gaussian dispersion model for improved characterisation of near-road air pollution

    Science.gov (United States)

    Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne

    2017-03-01

    The development and use of dispersion models that simulate traffic-related air pollution in urban areas has risen significantly in support of air pollution exposure research. In order to accurately estimate population exposure, it is important to generate concentration surfaces that take into account near-road concentrations as well as the transport of pollutants throughout an urban region. In this paper, an integrated modelling chain was developed to simulate ambient nitrogen dioxide (NO2) in a dense urban neighbourhood while taking into account traffic emissions, the regional background, and the transport of pollutants within the urban canopy. For this purpose, we developed a hybrid configuration including 1) a street-canyon model, which simulates pollutant transfer along streets and intersections, taking into account the geometry of buildings and other obstacles, and 2) a Gaussian puff model, which resolves the transport of contaminants at the top of the urban canopy and accounts for regional meteorology. Each dispersion model was validated against measured concentrations and compared against the hybrid configuration. Our results demonstrate that the hybrid approach significantly improves the output of each model on its own: both the Gaussian model and the street-canyon model clearly underestimate the observed data, because the Gaussian model ignores building effects and the canyon model neglects the contribution of roads outside the canyon. The hybrid approach reduced the RMSE (of observed vs. predicted concentrations) by 16%-25% compared to each model on its own, and increased FAC2 (the fraction of predictions within a factor of two of the observations) by 10%-34%.
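
    The two evaluation metrics quoted above are straightforward to compute; a minimal sketch with hypothetical concentration values (not data from the study):

```python
import numpy as np

def rmse(obs, pred):
    # Root-mean-square error of predictions vs observations.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def fac2(obs, pred):
    # Fraction of predictions within a factor of two of the observations.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs  = [20.0, 35.0, 50.0, 12.0]   # hypothetical observed NO2 (ppb)
pred = [18.0, 80.0, 45.0, 5.0]    # hypothetical modelled NO2 (ppb)
print(rmse(obs, pred))  # ~22.9
print(fac2(obs, pred))  # 0.5: two of four predictions within a factor of two
```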

  20. Using ASCEM Modeling and Visualization to Inform Stakeholders of Contaminant Plume Evolution and Remediation Efficacy at F-Basin Savannah River, SC – 15156

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wainwright, H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Molins, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Davis, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Arora, B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Faybishenko, B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Krishnan, H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hubbard, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Denham, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Eddy-Dilek, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Moulton, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lipnikov, K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gable, C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Freshley, M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-28

    Communication with stakeholders, regulatory agencies, and the public is an essential part of implementing remediation and monitoring activities and developing site closure strategies at contaminated sites. Modeling of contaminant plume evolution plays a critical role in estimating the benefit, cost, and risk of particular options, while effective visualization of monitoring data and modeling results is particularly important for conveying the significance of the results and observations. In this paper, we present the results of the Advanced Simulation Capability for Environmental Management (ASCEM) project, including a discussion of the capabilities of the newly developed ASCEM software package, along with its application to the F-Area Seepage Basins located at the U.S. Department of Energy Savannah River Site (SRS). The ASCEM software includes state-of-the-art numerical methods for simulating complex flow and reactive transport, as well as various toolsets such as a graphical user interface (GUI), visualization, data management, uncertainty quantification, and parameter estimation. Using this software, we have developed an advanced visualization of tritium plume migration coupled with a data management system, and simulated a three-dimensional model of flow and plume evolution on a high-performance computing platform. We evaluated the effect of engineered flow barriers on a nonreactive tritium plume through advanced plume visualization and modeling of tritium plume migration. In addition, we developed a geochemical reaction network to describe complex geochemical processes at the site, and evaluated the impact of coupled hydrological and geochemical heterogeneity. These results are expected to support SRS's monitoring activities and operational decisions.

  1. Fiber-coupling efficiency of Gaussian-Schell model beams through an ocean to fiber optical communication link

    Science.gov (United States)

    Hu, Beibei; Shi, Haifeng; Zhang, Yixin

    2018-06-01

    We theoretically study the fiber-coupling efficiency of Gaussian-Schell model beams propagating through oceanic turbulence. The expression for the fiber-coupling efficiency is derived based on the spatial power spectrum of oceanic turbulence and the cross-spectral density function. Our work shows that salinity fluctuations have a greater impact on the fiber-coupling efficiency than temperature fluctuations do. We can select a longer wavelength λ in the "ocean window" and a higher spatial coherence of the light source to improve the fiber-coupling efficiency of the communication link, and we can achieve the maximum fiber-coupling efficiency by choosing design parameters according to the specific oceanic turbulence conditions. Our results can help in the design of ocean-to-fiber optical communication links.

  2. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Full Text Available Circuits and systems able to process high-quality video in real time are fundamental in today's imaging systems. The circuit proposed in this paper, aimed at robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed with commercial FPGA devices as the target and provides speed and logic-resource occupation that overcome previously proposed implementations. The circuit, when implemented on a Virtex6 or StratixIV, processes more than 45 frames per second in 1080p format and uses a few percent of the FPGA logic resources.

  3. A New Multi-Gaussian Auto-Correlation Function for the Modeling of Realistic Shot Peened Random Rough Surfaces

    International Nuclear Information System (INIS)

    Hassan, W.; Blodgett, M.

    2006-01-01

    Shot peening is the primary surface treatment used to create a uniform, consistent, and reliable sub-surface compressive residual stress layer in aero engine components. A by-product of the shot peening process is random surface roughness that can affect the measurements of the resulting residual stresses and therefore impede their NDE assessment. High-frequency eddy current conductivity measurements have the potential to assess these residual stresses in Ni-base superalloys. However, the effect of random surface roughness is expected to become significant in the desired measurement frequency range of 10 to 100 MHz. In this paper, a new Multi-Gaussian (MG) auto-correlation function is proposed for modeling the resulting pseudo-random rough profiles. Its use in the calculation of the Apparent Eddy Current Conductivity (AECC) loss due to surface roughness is demonstrated. The numerical results presented need to be validated with experimental measurements.
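
    One common way to generate such pseudo-random rough profiles is spectral filtering of white noise. The sketch below produces a profile with a single-Gaussian auto-correlation function; a multi-Gaussian ACF, as proposed in the paper, would superpose several such filters with different correlation lengths (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Target ACF: R(r) = exp(-(r/lc)**2). Filtering white noise with a filter
# whose squared magnitude equals the Gaussian power spectrum of this ACF
# yields a profile with exactly that correlation structure.
n, dx, lc = 16384, 1.0, 20.0        # samples, spacing, correlation length
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
H = np.exp(-(k * lc) ** 2 / 8.0)    # |H|^2 = Gaussian power spectrum
profile = np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * H).real
profile /= profile.std()            # normalise to unit RMS roughness

lag = int(lc / dx)
acf_at_lc = np.mean(profile[:-lag] * profile[lag:])
print(acf_at_lc)  # close to exp(-1), i.e. the ACF value at one correlation length
```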

  4. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    Science.gov (United States)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    Active contour models (ACMs) have been among the most widely used methods in magnetic resonance (MR) brain image segmentation because of their ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image. This cannot take full advantage of the information in adjacent slices and cannot support local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multi-variate local Gaussian distribution and combines information from adjacent slice images in the MR brain image data to drive the segmentation. The segmentation is finally achieved by maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over a single-slice ACM in local segmentation of MR brain image series.

  5. Particle Simulation of Pulsed Plasma Thruster Plumes

    National Research Council Canada - National Science Library

    Boyd, Ian

    2002-01-01

    .... Our modeling has made progress in all aspects of simulating these complex devices, including Teflon ablation, plasma formation, electromagnetic acceleration, plume expansion, and particulate transport...

  6. Additivity of statistical moments in the exponentially modified Gaussian model of chromatography

    International Nuclear Information System (INIS)

    Howerton, Samuel B.; Lee Chomin; McGuffin, Victoria L.

    2002-01-01

    A homologous series of saturated fatty acids ranging from C10 to C22 was separated by reversed-phase capillary liquid chromatography. The resultant zone profiles were found to be best fit by an exponentially modified Gaussian (EMG) function. To compare the EMG function and statistical moments for the analysis of the experimental zone profiles, a series of simulated profiles was generated by using fixed values for the retention time and different values for the symmetrical (σ) and asymmetrical (τ) contributions to the variance. The simulated profiles were modified with respect to the integration limits, the number of points, and the signal-to-noise ratio. After modification, each profile was analyzed by using statistical moments and an iteratively fitted EMG equation. These data indicate that the statistical moment method is much more susceptible to error when the degree of asymmetry is large, when the integration limits are inappropriately chosen, when the number of points is small, and when the signal-to-noise ratio is small. The experimental zone profiles were then analyzed by using the statistical moment and EMG methods. Although care was taken to minimize the sources of error discussed above, significant differences were found between the two methods. The differences in the second moment suggest that the symmetrical and asymmetrical contributions to broadening in the experimental zone profiles are not independent. As a consequence, the second moment is not equal to the sum of σ² and τ², as is commonly assumed. This observation has important implications for the elucidation of thermodynamic and kinetic information from chromatographic zone profiles.
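
    For the ideal EMG model itself, the moment additivity that the paper tests does hold exactly: the first moment is tG + τ and the second central moment is σ² + τ². This can be checked numerically from the standard EMG density (illustrative parameter values, not the paper's data):

```python
import numpy as np
from math import erfc

# EMG density with Gaussian retention time tG, Gaussian width sigma and
# exponential time constant tau (standard exponentially modified Gaussian).
tG, sigma, tau = 10.0, 0.5, 1.5

t = np.linspace(0.0, 60.0, 300_001)
dt = t[1] - t[0]
z = (tG + sigma**2 / tau - t) / (np.sqrt(2.0) * sigma)
f = (np.exp((2.0 * tG + sigma**2 / tau - 2.0 * t) / (2.0 * tau)) / (2.0 * tau)
     * np.array([erfc(v) for v in z]))

m0 = f.sum() * dt                          # peak area, should be ~1
m1 = (t * f).sum() * dt / m0               # first moment
m2 = ((t - m1) ** 2 * f).sum() * dt / m0   # second central moment
print(m1, m2)  # close to tG + tau = 11.5 and sigma**2 + tau**2 = 2.5
```

    The paper's point is that this additivity fails for the measured profiles, implying that σ and τ are not independent in real chromatographic broadening.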

  7. Gaussian mixture modeling of hemispheric lateralization for language in a large sample of healthy individuals balanced for handedness.

    Science.gov (United States)

    Mazoyer, Bernard; Zago, Laure; Jobard, Gaël; Crivello, Fabrice; Joliot, Marc; Perchey, Guy; Mellet, Emmanuel; Petit, Laurent; Tzourio-Mazoyer, Nathalie

    2014-01-01

    Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely "Typical" (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), "Ambilateral" (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and "Strongly-atypical" (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.
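
    The mixture-of-Gaussians modeling of a lateralization-index distribution can be sketched with a small EM fit on synthetic bimodal data (two components for brevity; the data below are illustrative, not the study's HFLI values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bimodal "lateralization index": a dominant positive mode and a
# minor negative mode, loosely mimicking typical vs atypical dominance.
x = np.concatenate([rng.normal(1.5, 0.5, 800),
                    rng.normal(-1.0, 0.4, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates of weights, means, standard deviations.
    nk = r.sum(axis=0)
    w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.round(mu, 2), np.round(w, 2))  # means near -1.0 and 1.5, weights near 0.2 and 0.8
```

    In the study the number of components (3 or 4) was itself selected as part of the model fit; the fitted component parameters were then used to delimit the "Typical", "Ambilateral" and "Strongly-atypical" groups.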

  8. Gaussian mixture modeling of hemispheric lateralization for language in a large sample of healthy individuals balanced for handedness.

    Directory of Open Access Journals (Sweden)

    Bernard Mazoyer

    Full Text Available Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely "Typical" (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), "Ambilateral" (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and "Strongly-atypical" (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.

  9. Wintertime Overnight NOx Removal in a Southeastern United States Coal-fired Power Plant Plume: A Model for Understanding Winter NOx Processing and its Implications

    Science.gov (United States)

    Fibiger, Dorothy L.; McDuffie, Erin E.; Dubé, William P.; Aikin, Kenneth C.; Lopez-Hilfiker, Felipe D.; Lee, Ben H.; Green, Jaime R.; Fiddler, Marc N.; Holloway, John S.; Ebben, Carlena; Sparks, Tamara L.; Wooldridge, Paul; Weinheimer, Andrew J.; Montzka, Denise D.; Apel, Eric C.; Hornbrook, Rebecca S.; Hills, Alan J.; Blake, Nicola J.; DiGangi, Josh P.; Wolfe, Glenn M.; Bililign, Solomon; Cohen, Ronald C.; Thornton, Joel A.; Brown, Steven S.

    2018-01-01

    Nitric oxide (NO) is emitted in large quantities from coal-burning power plants. During the day, the plumes from these sources are efficiently mixed into the boundary layer, while at night, they may remain concentrated due to limited vertical mixing during which they undergo horizontal fanning. At night, the degree to which NO is converted to HNO3 and therefore unable to participate in next-day ozone (O3) formation depends on the mixing rate of the plume, the composition of power plant emissions, and the composition of the background atmosphere. In this study, we use observed plume intercepts from the Wintertime INvestigation of Transport, Emissions and Reactivity (WINTER) campaign to test the sensitivity of overnight NOx removal to the N2O5 loss rate constant, plume mixing rate, background O3, and background levels of volatile organic compounds using a 2-D box model of power plant plume transport and chemistry. The factor that exerted the greatest control over NOx removal was the loss rate constant of N2O5. At the lowest observed N2O5 loss rate constant, no other combination of conditions converts more than 10% of the initial NOx to HNO3. The other factors did not influence NOx removal to the same degree.
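
    A much-reduced, zero-dimensional caricature of this sensitivity experiment (no plume mixing or VOC chemistry; all rate constants and mixing ratios are invented for illustration, not WINTER values) still shows the headline behaviour: the N2O5 heterogeneous loss rate constant controls how much NOx is removed by sunrise.

```python
def overnight_nox_removal(k_loss, no2_0=50.0, o3_0=30.0,
                          k_no2_o3=7.9e-7, hours=12.0, dt=60.0):
    """Toy single-box night-time plume chemistry (mixing ratios in ppb).

    NO2 + O3 -> NO3 (rate k_no2_o3, ppb^-1 s^-1); NO3 is assumed to
    combine immediately with a second NO2 to form N2O5, which is lost
    heterogeneously (rate constant k_loss, s^-1) to two HNO3.  N2O5
    still airborne at sunrise photolyzes back to NOx, so only HNO3
    counts as removed NOx.
    """
    no2, o3, n2o5, hno3 = no2_0, o3_0, 0.0, 0.0
    for _ in range(int(hours * 3600 / dt)):
        r1 = k_no2_o3 * max(no2, 0.0) * max(o3, 0.0)   # NO3 production
        loss = k_loss * n2o5                           # N2O5 -> 2 HNO3
        no2 -= 2.0 * r1 * dt
        o3 -= r1 * dt
        n2o5 += (r1 - loss) * dt
        hno3 += 2.0 * loss * dt
    return hno3 / no2_0          # fraction of initial NOx removed

rem_slow = overnight_nox_removal(k_loss=1e-5)   # slow heterogeneous loss
rem_fast = overnight_nox_removal(k_loss=1e-3)   # fast heterogeneous loss
print(rem_slow, rem_fast)
```

    With slow N2O5 loss most of the nitrogen survives to sunrise as N2O5 and is returned to NOx, so little is removed, regardless of the other parameters held fixed here.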

  10. Wintertime Overnight NOx Removal in a Southeastern United States Coal-Fired Power Plant Plume: A Model for Understanding Winter NOx Processing and Its Implications

    Science.gov (United States)

    Fibiger, Dorothy L.; McDuffie, Erin E.; Dube, William P.; Aikin, Kenneth C.; Lopez-Hilfiker, Felipe D.; Lee, Ben H.; Green, Jaime R.; Fiddler, Marc N.; Holloway, John S.; Ebben, Carlena

    2018-01-01

    Nitric oxide (NO) is emitted in large quantities from coal-burning power plants. During the day, the plumes from these sources are efficiently mixed into the boundary layer, while at night, they may remain concentrated due to limited vertical mixing during which they undergo horizontal fanning. At night, the degree to which NO is converted to HNO3 and therefore unable to participate in next-day ozone (O3) formation depends on the mixing rate of the plume, the composition of power plant emissions, and the composition of the background atmosphere. In this study, we use observed plume intercepts from the Wintertime INvestigation of Transport, Emissions and Reactivity (WINTER) campaign to test sensitivity of overnight NOx removal to the N2O5 loss rate constant, plume mixing rate, background O3, and background levels of volatile organic compounds using a 2-D box model of power plant plume transport and chemistry. The factor that exerted the greatest control over NOx removal was the loss rate constant of N2O5. At the lowest observed N2O5 loss rate constant, no other combination of conditions converts more than 10 percent of the initial NOx to HNO3. The other factors did not influence NOx removal to the same degree.

  11. A simple modeling approach to study the regional impact of a Mediterranean forest isoprene emission on anthropogenic plumes

    Directory of Open Access Journals (Sweden)

    J. Cortinovis

    2005-01-01

    Full Text Available Research during the past decades has outlined the importance of biogenic isoprene emission in tropospheric chemistry and regional ozone photo-oxidant pollution. The first part of this article focuses on the development and validation of a simple biogenic emission scheme designed for regional studies. Experimental data sets relative to Boreal, Tropical, Temperate and Mediterranean ecosystems are used to estimate the robustness of the scheme at the canopy scale, and over contrasting climatic and ecological conditions. A good agreement is generally found when comparing field measurements and simulated emission fluxes, encouraging us to consider the model suitable for regional application. Limitations of the scheme are nevertheless outlined as well as further on-going improvements. In the second part of the article, the emission scheme is used on line in the broader context of a meso-scale atmospheric chemistry model. Dynamically idealized simulations are carried out to study the chemical interactions of pollutant plumes with realistic isoprene emissions coming from a Mediterranean oak forest. Two types of anthropogenic sources, respectively representative of the Marseille (urban) and Martigues (industrial) French Mediterranean sites, and both characterized by different VOC/NOx ratios, are considered. For the Marseille scenario, the impact of biogenic emission on ozone production is larger when the forest is situated in a sub-urban configuration (i.e. a short TOWN-FOREST downwind distance). In this case the enhancement of ozone production due to isoprene can reach +37% in terms of maximum surface concentrations and +11% in terms of total ozone production. The impact of biogenic emission decreases quite rapidly when the TOWN-FOREST distance increases. For the Martigues scenario, the biogenic impact on the plume is significant up to a TOWN-FOREST distance of 90 km, where the ozone maximum surface concentration enhancement can still reach +30%. For both cases, the

  12. Plume rise measurements at Turbigo

    Energy Technology Data Exchange (ETDEWEB)

    Anfossi, D

    1982-01-01

    This paper presents analyses of plume measurements obtained during that campaign by the ENEL ground-based Lidar. The five stacks of Turbigo Power Plant have different heights and emission parameters and their plumes usually combine, so a model for multiple sources was used to predict the plume rises. These predictions are compared with the observations. Measurements of σv and σz over the first 1000 m are compared with the curves derived from other observations in the Po Valley, using the no-lift balloon technique over the same range of downwind distance. Skewness and kurtosis distributions are shown, both along the vertical and the horizontal directions. In order to show the plume structure in more detail, we present two examples of Lidar-derived cross sections and the corresponding vertically and horizontally integrated concentration profiles.
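
    For a single buoyant source, plume-rise predictions of the kind compared here are commonly made with the Briggs relations for neutral conditions. A minimal sketch (stack parameters invented for illustration, and without the multiple-source merging enhancement needed at a plant like Turbigo):

```python
import math

def buoyancy_flux(v_s, r_s, t_s, t_a, g=9.81):
    """Briggs buoyancy flux F (m^4 s^-3): exit velocity v_s (m/s),
    stack radius r_s (m), stack gas and ambient temperatures (K)."""
    return g * v_s * r_s ** 2 * (t_s - t_a) / t_s

def briggs_rise(F, u, x):
    """Transitional buoyant plume rise (m) at downwind distance x (m)
    in neutral conditions with wind speed u (m/s); the rise is capped
    at the final-rise distance x_f = 3.5 x*."""
    x_star = 14.0 * F ** 0.625 if F < 55.0 else 34.0 * F ** 0.4
    x_f = 3.5 * x_star
    return 1.6 * F ** (1.0 / 3.0) * min(x, x_f) ** (2.0 / 3.0) / u

F = buoyancy_flux(v_s=15.0, r_s=2.5, t_s=410.0, t_a=283.0)
print(round(F, 1), round(briggs_rise(F, u=5.0, x=2000.0), 1))
```

    The rise grows as x^(2/3) until final rise is reached; for merging plumes from several stacks, an enhanced effective buoyancy flux is typically substituted for F.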

  13. Reforging the Wedding Ring: Exploring a Semi-Artificial Model of Population for the United Kingdom with Gaussian process emulators

    Directory of Open Access Journals (Sweden)

    Viet Dung Cao

    2013-10-01

    Full Text Available Background: We extend the "Wedding Ring" agent-based model of marriage formation to include some empirical information on the natural population change for the United Kingdom together with behavioural explanations that drive the observed nuptiality trends. Objective: We propose a method to explore statistical properties of agent-based demographic models. By coupling rule-based explanations driving the agent-based model with observed data we wish to bring agent-based modelling and demographic analysis closer together. Methods: We present a Semi-Artificial Model of Population, which aims to bridge demographic micro-simulation and agent-based traditions. We then utilise a Gaussian process emulator - a statistical model of the base model - to analyse the impact of selected model parameters on two key model outputs: population size and share of married agents. A sensitivity analysis is attempted, aiming to assess the relative importance of different inputs. Results: The resulting multi-state model of population dynamics has enhanced predictive capacity as compared to the original specification of the Wedding Ring, but there are some trade-offs between the outputs considered. The sensitivity analysis allows identification of the most important parameters in the modelled marriage formation process. Conclusions: The proposed methods allow for generating coherent, multi-level agent-based scenarios aligned with some aspects of empirical demographic reality. Emulators permit a statistical analysis of their properties and help select plausible parameter values. Comments: Given non-linearities in agent-based models such as the Wedding Ring, and the presence of feedback loops, the uncertainty in the model may not be directly computable by using traditional statistical methods. The use of statistical emulators offers a way forward.

  14. East Asian SO2 pollution plume over Europe – Part 1: Airborne trace gas measurements and source identification by particle dispersion model simulations

    Directory of Open Access Journals (Sweden)

    A. Stohl

    2009-07-01

    Full Text Available A large SO2-rich pollution plume of East Asian origin was detected by aircraft-based CIMS (Chemical Ionization Mass Spectrometry) measurements at 3–7.5 km altitude over the North Atlantic. The measurements, which took place on 3 May 2006 aboard the German research aircraft Falcon, were part of the INTEX-B (Intercontinental Chemical Transport Experiment-B) campaign. Additional trace gases (NO, NOy, CO, H2O) were measured and used for comparison and source identification. The atmospheric SO2 mole fraction was markedly increased inside the plume and reached up to 900 pmol/mol. Accompanying Lagrangian FLEXPART particle dispersion model simulations indicate that the probed pollution plume originated at low altitudes from densely populated and industrialized regions of East Asia, primarily China, about 8–12 days prior to the measurements.

  15. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  16. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models which are applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs are discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box and species-conservation-of-mass models is given. Representative models are discussed with attention given to the assumptions, input data requirements, advantages, disadvantages and applicability of each.

  17. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    Science.gov (United States)

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference

  18. Modelling pollutants dispersion and plume rise from large hydrocarbon tank fires in neutrally stratified atmosphere

    Science.gov (United States)

    Argyropoulos, C. D.; Sideris, G. M.; Christolis, M. N.; Nivolianitou, Z.; Markatos, N. C.

    2010-02-01

    Petrochemical industries normally use storage tanks containing large amounts of flammable and hazardous substances. Therefore, the occurrence of a tank fire, such as the large industrial accident on 11th December 2005 at the Buncefield Oil Storage Depots, is possible and usually leads to fires and explosions. Experience has shown that the continuous production of black smoke and toxic combustion gases from these fires presents a potential environmental and health problem that is difficult to assess. The goals of the present effort are to estimate the height of the smoke plume, the ground-level concentrations of the toxic pollutants (smoke, SO2, CO, PAHs, VOCs) and to characterize risk zones by comparing the ground-level concentrations with existing safety limits. For the application of the numerical procedure developed, an external floating-roof tank has been selected with dimensions of 85 m diameter and 20 m height. Results are presented and discussed. It is concluded that for all scenarios considered, the ground-level concentrations of smoke, SO2, CO, PAHs and VOCs do not exceed the safety limit of IDLH and there are no "death zones" due to the pollutant concentrations.

  19. On the Pollutant Plume Dispersion in the Urban Canopy Layer over 2D Idealized Street Canyons: A Large-Eddy Simulation Approach

    Science.gov (United States)

    Wong, Colman C. C.; Liu, Chun-Ho

    2010-05-01

    Anthropogenic emissions are the major sources of air pollutants in urban areas. To improve the air quality in dense cities and megacities, a simple but reliable prediction method is necessary. In the last five decades, the Gaussian pollutant plume model has been widely used for the estimation of air pollutant distribution in the atmospheric boundary layer (ABL) in an operational manner. However, it was originally designed for rural areas with rather open and flat terrain. The recirculating flows below the urban canopy layer substantially modify the near-ground urban wind environment and hence the pollutant distribution. Though the plume height and dispersion are often adjusted empirically, the accuracy of applying the Gaussian pollutant plume model in urban areas, in which the bottom of the flow domain consists of numerous inhomogeneous buildings, is unclear. To elucidate the flow and pollutant transport, as well as to demystify the uncertainty of employing the Gaussian pollutant plume model over urban roughness, this study was performed to examine how the Gaussian-shape pollutant plume in the urban canopy layer is modified by idealized two-dimensional (2D) street canyons at the bottom of the ABL. The specific objective is to develop a parameterization so that the geometric effects of urban morphology on operational pollutant plume dispersion models could be taken into account. Because atmospheric turbulence is the major means of pollutant removal from street canyons to the ABL, large-eddy simulation (LES) was adopted to calculate explicitly the flows and pollutant transport in the urban canopy layer. The subgrid-scale (SGS) turbulent kinetic energy (TKE) conservation was used to model the SGS processes in the incompressible, isothermal conditions. The computational domain consists of 12 identical idealized street canyons of unity aspect ratio which were placed evenly in the streamwise direction. Periodic boundary conditions (BCs) for the flow were applied

  20. Implementation and use of Gaussian process meta model for sensitivity analysis of numerical models: application to a hydrogeological transport computer code

    International Nuclear Information System (INIS)

    Marrel, A.

    2008-01-01

    In the studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a high number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and are often too computationally expensive. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a meta model, built from an acceptable number of simulations of the code, that requires negligible calculation time. We focused our research work on the use of a Gaussian process meta model to perform the sensitivity analysis of the code. We proposed a methodology with estimation and input selection procedures in order to build the meta model in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to compute the sensitivity indices with the meta model and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process meta model. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with the Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code, simulating radionuclide transport in groundwater. (author) [fr
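
    The workflow this record describes can be sketched in a few lines: fit a Gaussian process surrogate to a handful of code runs, then estimate first-order sensitivity indices on the surrogate instead of the expensive code. The two-input "simulator" below is an invented stand-in, not the hydrogeological code, and the indices are computed with a crude conditional-mean estimator rather than the thesis's methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    """Stand-in for an expensive transport code; input 0 dominates."""
    return x[:, 0] ** 2 + 0.1 * np.sin(2 * np.pi * x[:, 1])

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# 1. train the Gaussian process surrogate on a few "code runs"
x_tr = rng.random((60, 2))
y_tr = simulator(x_tr)
K = rbf(x_tr, x_tr) + 1e-6 * np.eye(len(x_tr))   # jitter for stability
alpha = np.linalg.solve(K, y_tr)

def surrogate(x_new):
    return rbf(x_new, x_tr) @ alpha

# 2. first-order sensitivity indices evaluated on the surrogate alone:
#    Var_i( E[f | x_i] ) / Var(f), by freezing input i on a grid and
#    averaging the surrogate over samples of the other inputs
base = rng.random((200, 2))
v_tot = surrogate(base).var()
indices = []
for i in range(2):
    cond_mean = []
    for g in np.linspace(0, 1, 30):
        pts = base.copy()
        pts[:, i] = g                 # freeze input i, average the rest
        cond_mean.append(surrogate(pts).mean())
    indices.append(np.var(cond_mean) / v_tot)
print(indices)   # input 0 should dominate for this toy simulator
```

    Every sensitivity evaluation here costs only a kernel product, so thousands of surrogate calls replace thousands of code runs.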

  1. Tachyon mediated non-Gaussianity

    International Nuclear Information System (INIS)

    Dutta, Bhaskar; Leblond, Louis; Kumar, Jason

    2008-01-01

    We describe a general scenario where primordial non-Gaussian curvature perturbations are generated in models with extra scalar fields. The extra scalars communicate to the inflaton sector mainly through the tachyonic (waterfall) field condensing at the end of hybrid inflation. These models can yield significant non-Gaussianity of the local shape, and both signs of the bispectrum can be obtained. These models have cosmic strings and a nearly flat power spectrum, which together have been recently shown to be a good fit to WMAP data. We illustrate with a model of inflation inspired from intersecting brane models.

  2. A non-Gaussian generalisation of the Airline model for robust Seasonal Adjustment

    NARCIS (Netherlands)

    Aston, J.; Koopman, S.J.

    2006-01-01

    In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a

  3. Predictive geochemical modeling of contaminant concentrations in laboratory columns and in plumes migrating from uranium mill tailings waste impoundments

    International Nuclear Information System (INIS)

    Peterson, S.R.; Martin, W.J.; Serne, R.J.

    1986-04-01

    A computer-based conceptual chemical model was applied to predict contaminant concentrations in plumes migrating from a uranium mill tailings waste impoundment. The solids chosen for inclusion in the conceptual model were selected based on reviews of the literature, on ion speciation/solubility calculations performed on the column effluent solutions and on mineralogical characterization of the contacted and uncontacted sediments. The mechanism of adsorption included in the conceptual chemical model was chosen based on results from semiselective extraction experiments and from mineralogical characterization procedures performed on the sediments. This conceptual chemical model was further developed and partially validated in laboratory experiments where assorted acidic uranium mill tailings solutions percolated through various sediments. This document contains the results of a partial field and laboratory validation (i.e., test of coherence) of this chemical model. Macro constituents (e.g., Ca, SO 4 , Al, Fe, and Mn) of the tailings solution were predicted closely by considering their concentrations to be controlled by the precipitation/dissolution of solid phases. Trace elements, however, were generally predicted to be undersaturated with respect to plausible solid phase controls. The concentration of several of the trace elements were closely predicted by considering their concentrations to be controlled by adsorption onto the amorphous iron oxyhydroxides that precipitated

  4. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    Tabuchi, Motohiro; Yamane, Nobumoto; Morikawa, Yoshitaka

    2008-01-01

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in the techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove the streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise covariances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using the images for training. The results of simulation showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)
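
    The classical Wiener filter at the core of this method can be sketched in the frequency domain as H = S/(S + N). In this toy 1D sketch the signal spectrum is taken from the known clean signal for illustration, whereas the paper estimates the required statistics from phantom training images via the UNI-GMM.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "image row": smooth signal plus additive white noise
n = 512
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 128) + 0.5 * np.sin(2 * np.pi * t / 32)
noisy = clean + rng.normal(0.0, 0.4, n)

# Wiener filter in the Fourier domain: H = S / (S + N), with S the
# signal power spectrum (assumed known here) and N the noise power
S = np.abs(np.fft.fft(clean)) ** 2 / n
N = np.full(n, 0.4 ** 2)          # white-noise power per frequency bin
H = S / (S + N)
denoised = np.real(np.fft.ifft(H * np.fft.fft(noisy)))

print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

    The filter passes frequency bins where signal power dominates and suppresses the rest, which is why a good statistical model of the original image (the role of the UNI-GMM in the paper) is the critical ingredient.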

  5. Implicit Treatment of Technical Specification and Thermal Hydraulic Parameter Uncertainties in Gaussian Process Model to Estimate Safety Margin

    Directory of Open Access Journals (Sweden)

    Douglas A. Fynan

    2016-06-01

    Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC-powered pumps to start, representing sequence timing uncertainty, providing a predictive model for the peak clad temperature during a reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
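
    The modelling idea, regressing only on a dominant input while folding all remaining uncertainties into the GP noise variance (which then sets the local uncertainty bounds), can be sketched on a toy regression. All numbers below are invented for illustration; this is not the plant model or its data.

```python
import numpy as np

rng = np.random.default_rng(2)

def k_rbf(a, b, var, ls):
    """1D squared-exponential kernel with signal variance and length scale."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# "code runs": an output (think peak clad temperature, K) versus one
# dominant input (think pump start delay, s); every other uncertainty
# is lumped into the observation noise
x = rng.uniform(0, 10, 40)
noise_sd = 15.0
y = 900.0 + 20.0 * x + noise_sd * rng.normal(size=40)

# GP regression: the noise variance is added on the kernel diagonal
sig_var, ls = 200.0 ** 2, 2.0
K = k_rbf(x, x, sig_var, ls) + noise_sd ** 2 * np.eye(len(x))
x_new = np.linspace(0, 10, 5)
k_star = k_rbf(x_new, x, sig_var, ls)
mean = k_star @ np.linalg.solve(K, y)
cov = k_rbf(x_new, x_new, sig_var, ls) - k_star @ np.linalg.solve(K, k_star.T)
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))

for xn, m, s in zip(x_new, mean, sd):
    print(f"x={xn:4.1f}  mean={m:7.1f}  95% band=+/-{1.96 * s:5.1f}")
```

    A larger lumped noise variance widens the predictive band, which is exactly how the non-dominant uncertainties show up in the estimated safety margin.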

  6. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    Science.gov (United States)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation, reaching 99% accuracy where the alternatives performed substantially worse. These results suggest GMMAC may be useful for future fNIRS brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.

  7. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    Science.gov (United States)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies, in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. On the other hand, some adjustments are made in the proposed GPR model. The covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging data sets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.

  8. A direct derivation of the exact Fisher information matrix of Gaussian vector state space models

    NARCIS (Netherlands)

    Klein, A.A.B.; Neudecker, H.

    2000-01-01

    This paper deals with a direct derivation of Fisher's information matrix of vector state space models for the general case, by which is meant the establishment of the matrix as a whole and not element by element. The method to be used is matrix differentiation, see [4]. We assume the model to be ...

  9. Analysis of the anomalous mean-field like properties of Gaussian core model in terms of entropy

    Science.gov (United States)

    Nandi, Manoj Kumar; Maitra Bhattacharyya, Sarika

    2018-01-01

    Studies of the Gaussian core model (GCM) have shown that it behaves like a mean-field model and that its properties are quite different from those of standard glass formers. In this work, we investigate the entropies, namely, the excess entropy (Sex) and the configurational entropy (Sc) and their different components to address these anomalies. Our study corroborates most of the earlier observations and also sheds new light on the high and low temperature dynamics. We find that unlike in standard glass formers, where high-temperature dynamics is dominated by two-body correlations and low-temperature dynamics by many-body correlations, in the GCM both high- and low-temperature dynamics are dominated by many-body correlations. We also find that the many-body entropy, which is usually positive at low temperatures and is associated with activated dynamics, is negative in the GCM, suggesting suppression of activation. Interestingly, despite the suppression of activation, the Adam-Gibbs (AG) relation that describes activated dynamics holds in the GCM, thus suggesting a non-activated contribution in the AG relation. We also find an overlap between the AG relation and the mode coupling power law regime, leading to a power law behavior of Sc. From our analysis of this power law behavior, we predict that in the GCM the high temperature dynamics will disappear at the dynamical transition temperature and that below it there will be a transition to the activated regime. Our study further reveals that the activated regime in the GCM is quite narrow.
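
    For reference, the Adam-Gibbs relation invoked above can be stated explicitly in its standard textbook form (not a formula specific to this paper), linking the structural relaxation time to the configurational entropy:

```latex
% Adam-Gibbs relation: activated dynamics controlled by configurational entropy
\tau(T) = \tau_0 \exp\!\left( \frac{A}{T\,S_c(T)} \right)
```

    Here \tau_0 and A are material constants. The surprise reported in the abstract is that this relation, normally read as a signature of activated dynamics, still holds in the GCM even though activation is suppressed.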

  10. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    DEFF Research Database (Denmark)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer Moesgaard

    2016-01-01

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible ... problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter ...

  11. Quantifying uncertainty for predictions with model error in non-Gaussian systems with intermittency

    International Nuclear Information System (INIS)

    Branicki, Michal; Majda, Andrew J

    2012-01-01

    This paper discusses a range of important mathematical issues arising in applications of a newly emerging stochastic-statistical framework for quantifying and mitigating uncertainties associated with prediction of partially observed and imperfectly modelled complex turbulent dynamical systems. The need for such a framework is particularly severe in climate science where the true climate system is vastly more complicated than any conceivable model; however, applications in other areas, such as neural networks and materials science, are just as important. The mathematical tools employed here rely on empirical information theory and fluctuation–dissipation theorems (FDTs) and it is shown that they seamlessly combine into a concise systematic framework for measuring and optimizing consistency and sensitivity of imperfect models. Here, we utilize a simple statistically exactly solvable ‘perfect’ system with intermittent hidden instabilities and with time-periodic features to address a number of important issues encountered in prediction of much more complex dynamical systems. These problems include the role and mitigation of model error due to coarse-graining, moment closure approximations, and the memory of initial conditions in producing short, medium and long-range predictions. Importantly, based on a suite of increasingly complex imperfect models of the perfect test system, we show that the predictive skill of the imperfect models and their sensitivity to external perturbations is improved by ensuring their consistency on the statistical attractor (i.e. the climate) with the perfect system. Furthermore, the discussed link between climate fidelity and sensitivity via the FDT opens up an enticing prospect of developing techniques for improving imperfect model sensitivity based on specific tests carried out in the training phase of the unperturbed statistical equilibrium/climate. (paper)

  12. PHREEQC integrated modelling of plutonium migration from alpha ILL radwaste: organic complexes, concrete degradation and alkaline plume

    International Nuclear Information System (INIS)

    Cochepin, B.; Munier, I.; Giffaut, E.; Grive, M.

    2010-01-01

    Document available in extended abstract form only. The effects of organic compounds derived from Long-Lived Intermediate-Level Waste (LLILW) degradation need to be precisely described with regard to radionuclide migration in storage cells and argillites. These evaluations play a major role in the final decision on accepting these waste products in the future storage facility. The evaluation process engaged by Andra implies the use of coupled chemical transport tools able to take into account the linking of processes occurring under storage conditions as well as the different cell components (containers, packages, lining...). The relevance of this approach must fundamentally be based on a consistent characterization of (i) the waste packages, (ii) their degradation products (nature and kinetics), (iii) the chemical evolution of these products in the storage disposal (for cement material particularly) and in the argillites, and (iv) the correlation with radionuclide behavior with respect to these sequestering agents (chemistry and transport). Andra is now involved in an internal evaluation of technological organic waste packages contaminated by plutonium coming from the MOX fuel plant called MELOX located at Marcoule (France), considering the consistent work undertaken to characterize these organic agents. This evaluation is based on an integrated representation, in terms of chemical processes and transport, of the concrete chemical degradation (e.g. waste packages, backfill and lining), the alkaline plume in the near field of the argillites and the plutonium-organic complexes. The conceptual model is based on the following assessments: On the chemical aspect: (i) Both concrete and argillites undergo chemical perturbations with dissolution/precipitation, ion exchange, and surface and aqueous complexation. (ii) The degradation products of the MELOX waste organic compounds are limited to five major organic acids: iso-saccharinic (ISA), acetic, phthalic, adipic and ...

  13. Models of Hawaiian volcano growth and plume structure: Implications of results from the Hawaii Scientific Drilling Project

    OpenAIRE

    DePaolo, D. J.; Stolper, E. M.

    1996-01-01

    The shapes of typical Hawaiian volcanoes are simply parameterized, and a relationship is derived for the dependence of lava accumulation rates on volcano volume and volumetric growth rate. The dependence of lava accumulation rate on time is derived by estimating the eruption rate of a volcano as it traverses the Hawaiian plume, with the eruption rate determined from a specified radial dependence of magma generation in the plume and assuming that a volcano captures melt from a circular area ce...

  14. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture ...

  15. Reconstructing gene regulatory networks from knock-out data using Gaussian Noise Model and Pearson Correlation Coefficient.

    Science.gov (United States)

    Mohamed Salleh, Faridah Hani; Arif, Shereena Mohd; Zainudin, Suhaila; Firdaus-Raih, Mohd

    2015-12-01

    A gene regulatory network (GRN) is a large and complex network consisting of interacting elements that, over time, affect each other's state. The dynamics of complex gene regulatory processes are difficult to understand using intuitive approaches alone. To overcome this problem, we propose an algorithm for inferring regulatory interactions from knock-out data using a Gaussian noise model combined with the Pearson Correlation Coefficient (PCC). Several problems relating to GRN construction are outlined in this paper. We demonstrated the ability of our proposed method to (1) predict the presence of regulatory interactions between genes, (2) their directionality and (3) their states (activation or suppression). The algorithm was applied to networks of 10 and 50 genes from the DREAM3 datasets and networks of 10 genes from the DREAM4 datasets. The predicted networks were evaluated based on AUROC and AUPR. We discovered that high false positive rates were generated by our GRN prediction methods because indirect regulations were wrongly predicted as true relationships. We achieved satisfactory results as the majority of sub-networks achieved AUROC values above 0.5. Copyright © 2015 Elsevier Ltd. All rights reserved.
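
    The PCC scoring step the abstract describes can be sketched in a few lines. The knockout expression matrix below is synthetic, with one planted regulatory edge; the Gaussian-noise component of the authors' method is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_experiments = 10, 40
# Hypothetical expression matrix (genes x knock-out experiments).
expr = rng.normal(size=(n_genes, n_experiments))
# Plant a true regulatory interaction: gene 3 tracks gene 0 plus noise.
expr[3] = 0.9 * expr[0] + 0.1 * rng.normal(size=n_experiments)

# Pearson correlation coefficient between every pair of genes; the sign
# suggests the interaction state (activation if positive, suppression if
# negative), as in the abstract's step (3).
pcc = np.corrcoef(expr)
np.fill_diagonal(pcc, 0.0)

# The strongest absolute correlation should recover the planted edge 0-3.
i, j = np.unravel_index(np.argmax(np.abs(pcc)), pcc.shape)
print(sorted((int(i), int(j))), round(float(pcc[i, j]), 2))
```

    Note this recovers undirected association only; directionality (step 2) requires the knockout structure, e.g. comparing expression of each target with and without the candidate regulator knocked out.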

  16. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC-90nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
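
    To make the underlying idea concrete, here is a deliberately reduced sketch of per-pixel Gaussian background modelling in NumPy: a single Gaussian per pixel rather than the full OpenCV mixture, with invented parameters (learning rate, threshold) and synthetic frames. It is not the paper's hardware-optimized algorithm.

```python
import numpy as np

def update_background(frame, mean, var, lr=0.05, k=2.5):
    """One step of a per-pixel Gaussian background model (in place).

    A pixel is foreground when it deviates from its running Gaussian by
    more than k standard deviations; background statistics are updated
    with learning rate lr. This is a single-Gaussian reduction of the
    GMM idea, for illustration only.
    """
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    bg = ~foreground                      # update stats only where matched
    mean[bg] += lr * (frame[bg] - mean[bg])
    var[bg] += lr * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    return foreground

rng = np.random.default_rng(3)
h, w = 32, 32
mean = np.full((h, w), 100.0)             # initial background intensity
var = np.full((h, w), 4.0)

# Static scene around intensity 100 with sensor noise ...
for _ in range(50):
    update_background(100.0 + rng.normal(0, 1.0, size=(h, w)), mean, var)

# ... then a bright "object" enters the top-left corner.
frame = 100.0 + rng.normal(0, 1.0, size=(h, w))
frame[:8, :8] = 180.0
fg = update_background(frame, mean, var)
print(bool(fg[:8, :8].all()), round(float(fg[8:, 8:].mean()), 3))
```

    The full GMM keeps several such Gaussians per pixel with mixing weights, which is what handles multimodal backgrounds (e.g. waving foliage) and makes the hardware implementation challenging.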

  17. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model

    Science.gov (United States)

    Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode the must-link constraints but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model to address the diverse degrees of confidence that exist in network topology and pairwise constraints, and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG but also, most importantly, reveal the roles of pairwise constraints. That is, though the must-link is more important than the cannot-link when only one of them is available, both must-link and cannot-link are equally important when both are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection. PMID:28678864

  18. Thermomechanical Modeling of the Formation of a Multilevel, Crustal-Scale Magmatic System by the Yellowstone Plume

    Science.gov (United States)

    Colón, D. P.; Bindeman, I. N.; Gerya, T. V.

    2018-05-01

    Geophysical imaging of the Yellowstone supervolcano shows a broad zone of partial melt interrupted by an amagmatic gap at depths of 15-20 km. We reproduce this structure through a series of regional-scale magmatic-thermomechanical forward models which assume that magmatic dikes stall at rheologic discontinuities in the crust. We find that basaltic magmas accumulate at the Moho and at the brittle-ductile transition, which naturally forms at depths of 5-10 km. This leads to the development of a 10- to 15-km thick midcrustal sill complex with a top at a depth of approximately 10 km, consistent with geophysical observations of the pre-Yellowstone hot spot track. We show a linear relationship between melting rates in the mantle and rhyolite eruption rates along the hot spot track. Finally, melt production rates from our models suggest that the Yellowstone plume is 175°C hotter than the surrounding mantle and that the thickness of the overlying lithosphere is 80 km.

  19. Phased Array Noise Source Localization Measurements of an F404 Nozzle Plume at Both Full and Model Scale

    Science.gov (United States)

    Podboy, Gary G.; Bridges, James E.; Henderson, Brenda S.

    2010-01-01

    A 48-microphone planar phased array system was used to acquire jet noise source localization data on both a full-scale F404-GE-F400 engine and on a 1/4th scale model of a F400 series nozzle. The full-scale engine test data show the location of the dominant noise sources in the jet plume as a function of frequency for the engine in both baseline (no chevron) and chevron configurations. Data are presented for the engine operating both with and without afterburners. Based on lessons learned during this test, a set of recommendations is provided regarding how the phased array measurement system could be modified in order to obtain more useful acoustic source localization data on high-performance military engines in the future. The data obtained on the 1/4th scale F400 series nozzle provide useful insights regarding the full-scale engine jet noise source mechanisms, and document some of the differences associated with testing at model scale versus full scale.

  20. Pollutant Plume Dispersion over Hypothetical Urban Areas based on Wind Tunnel Measurements

    Science.gov (United States)

    Mo, Ziwei; Liu, Chun-Ho

    2017-04-01

    The Gaussian plume model is commonly adopted for pollutant concentration prediction in the atmospheric boundary layer (ABL). However, it has a number of limitations when applied to pollutant dispersion over complex land-surface morphology. In this study, the friction factor (f), a measure of aerodynamic resistance induced by rough surfaces in the engineering community, was proposed to parameterize the vertical dispersion coefficient (σz) in the Gaussian model. A series of wind tunnel experiments was carried out to verify the mathematical hypothesis and to characterize plume dispersion as a function of surface roughness as well. Hypothetical urban areas, assembled in the form of idealized street canyons of different aspect (building-height-to-street-width) ratios (AR = 1/2, 1/4, 1/8 and 1/12), were fabricated by aligning identical square aluminum bars at different separations in cross flows. Pollutant emitted from a ground-level line source into the turbulent boundary layer (TBL) was simulated using water vapour generated by an ultrasonic atomizer. The humidity and the velocity (mean and fluctuating components) were measured, respectively, by humidity sensors and hot-wire anemometry (HWA) with X-wire probes in the streamwise and vertical directions. Wind tunnel results showed that the pollutant concentration exhibits the conventional Gaussian distribution, suggesting the feasibility of using water vapour as a passive scalar in wind tunnel experiments. The friction factor increased with decreasing aspect ratio (widening building separation). It peaked at AR = 1/8 and decreased thereafter. Besides, a positive correlation between σz/x^n (x is the distance from the pollutant source) and f^(1/4) (correlation coefficient r^2 = 0.61) was observed, formulating the basic parameterization of plume dispersion over urban areas.
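
    The proposed parameterization can be illustrated with the textbook Gaussian solution for a ground-level, infinite crosswind line source, taking σz ∝ x^n · f^(1/4) as the abstract hypothesizes. The constants a and n below are invented for illustration, not the fitted wind-tunnel values.

```python
import numpy as np

def line_source_concentration(x, z, q=1.0, u=2.0, f=0.05, a=0.1, n=0.8):
    """Gaussian plume from a ground-level crosswind line source.

    q: emission rate per unit length, u: wind speed, f: surface friction
    factor. sigma_z follows the abstract's proposed scaling
    sigma_z ~ x**n * f**0.25; a and n are illustrative constants only.
    """
    sigma_z = a * x**n * f**0.25          # vertical dispersion coefficient
    # Ground-level source: the image-source term doubles the concentration.
    return (2.0 * q / (np.sqrt(2.0 * np.pi) * sigma_z * u)
            * np.exp(-z**2 / (2.0 * sigma_z**2)))

x = np.array([10.0, 50.0, 100.0])         # downwind distances
ground = line_source_concentration(x, z=0.0)
print(np.round(ground, 4))                # dilution with distance
```

    Because σz grows with x, the ground-level concentration falls monotonically downwind; a rougher surface (larger f) widens the plume vertically and dilutes it faster.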

  1. Gaussian limit of compact spin systems

    International Nuclear Information System (INIS)

    Bellissard, J.; Angelis, G.F. de

    1981-01-01

    It is shown that the Wilson and Wilson-Villain U(1) models reproduce, in the low coupling limit, the gaussian lattice approximation of the Euclidean electromagnetic field. By the same methods it is also possible to prove that the plane rotator and the Villain model share a common gaussian behaviour in the low temperature limit. (Auth.)

  2. Particle-in-cell vs straight-line airflow Gaussian calculations of concentration and deposition of airborne emissions out to 70 km for two sites of differing meteorological and topographical character

    International Nuclear Information System (INIS)

    Lange, R.; Dickerson, M.A.; Peterson, K.R.; Sherman, C.A.; Sullivan, T.J.

    1976-01-01

    Two numerical models for the calculation of air concentration and ground deposition of airborne effluent releases are compared. The Particle-in-Cell (PIC) model and the Straight-Line Airflow Gaussian model were used for the simulation. Two sites were selected for comparison: the Hudson River Valley, New York, and the area around the Savannah River Plant, South Carolina. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions with better agreement between the two methods during neutral to unstable conditions. In addition, because of its consideration of exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model

  3. Short communication: Alteration of priors for random effects in Gaussian linear mixed model

    DEFF Research Database (Denmark)

    Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas

    2014-01-01

    ... , multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations) need to alter both the mean and (co)variance of the prior distributions and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 different software packages available in animal breeding. The examples showed the possibility of the proposed method to alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix.

  4. ARCON96, Radioactive Plume Concentration in Reactor Control Rooms

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Simonen, C.A.

    2003-01-01

    1 - Description of program or function: ARCON96 was developed to calculate relative concentrations in plumes from nuclear power plants at control room air intakes in the vicinity of the release point. 2 - Methods: ARCON96 implements a straight-line Gaussian dispersion model with dispersion coefficients that are modified to account for low-wind meander and building wake effects. Hourly, normalized concentrations (X/Q) are calculated from hourly meteorological data. The hourly values are averaged to form X/Qs for periods ranging from 2 to 720 hours in duration. The calculated values for each period are used to form cumulative frequency distributions. 3 - Restrictions on the complexity of the problem: ARCON96 is a single-user program. If expanded output is selected by the user, the file includes the hourly input, the X/Qs and the intermediate computational results. The output file may exceed a megabyte in size.
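
    The averaging and cumulative-frequency bookkeeping described in the Methods step can be sketched independently of the dispersion model itself. The hourly X/Q values below are random placeholders, not ARCON96 output.

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder hourly relative concentrations X/Q (s/m^3) for one year.
hourly_xq = rng.lognormal(mean=-12.0, sigma=1.0, size=8760)

def period_averages(xq, window):
    """Average hourly X/Q over every contiguous `window`-hour period."""
    kernel = np.ones(window) / window
    return np.convolve(xq, kernel, mode="valid")

# ARCON96 uses averaging periods from 2 to 720 hours; take 8 h here.
xq_8h = period_averages(hourly_xq, 8)

# From the cumulative frequency distribution, extract e.g. the X/Q value
# exceeded by only 5% of all 8-hour periods.
xq95 = float(np.quantile(xq_8h, 0.95))
print(xq_8h.size)
```

    A regulatory assessment would then read off a percentile of this distribution for each averaging period rather than using a single worst-case hour.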

  5. Computerized prediction of intensive care unit discharge after cardiac surgery: development and validation of a Gaussian processes model

    Directory of Open Access Journals (Sweden)

    Meyfroidt Geert

    2011-10-01

    Abstract. Background: The intensive care unit (ICU) length of stay (LOS) of patients undergoing cardiac surgery may vary considerably, and is often difficult to predict within the first hours after admission. The early clinical evolution of a cardiac surgery patient might be predictive of LOS. The purpose of the present study was to develop a predictive model for ICU discharge after non-emergency cardiac surgery, by analyzing the first 4 hours of data in the computerized medical record of these patients with Gaussian processes (GP), a machine learning technique. Methods: Non-interventional study. Predictive modeling, separate development (n = 461) and validation (n = 499) cohorts. GP models were developed to predict the probability of ICU discharge the day after surgery (classification task), and to predict the day of ICU discharge as a discrete variable (regression task). GP predictions were compared with predictions by EuroSCORE, nurses and physicians. The classification task was evaluated using aROC for discrimination, and Brier Score, Brier Score Scaled, and the Hosmer-Lemeshow test for calibration. The regression task was evaluated by comparing median actual and predicted discharge, a loss penalty function (LPF) ((actual-predicted)/actual) and root mean squared relative errors (RMSRE). Results: Median (P25-P75) ICU length of stay was 3 (2-5) days. For classification, the GP model showed an aROC of 0.758, which was significantly higher than the predictions by nurses, but not better than EuroSCORE and physicians. The GP had the best calibration, with a Brier Score of 0.179 and a Hosmer-Lemeshow p-value of 0.382. For regression, GP had the highest proportion of patients with a correctly predicted day of discharge (40%), which was significantly better than the EuroSCORE (p ... Conclusions: A GP model that uses PDMS data of the first 4 hours after admission in the ICU of scheduled adult cardiac surgery patients was able to predict discharge from the ICU as a ...
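
    The classification task (discharge next day: yes/no, evaluated by aROC) maps directly onto a Gaussian process classifier. The sketch below uses scikit-learn and entirely synthetic "early ICU" features; it illustrates the evaluation pattern, not the study's model or data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# Hypothetical standardized features from the first hours after admission.
n = 300
X = rng.normal(size=(n, 3))
# Synthetic outcome: probability of next-day discharge rises with feature 0.
p = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))
y = (rng.uniform(size=n) < p).astype(int)

# Development cohort (first 200) / validation cohort (last 100), mirroring
# the study's split-cohort design.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X[:200], y[:200])
proba = gpc.predict_proba(X[200:])[:, 1]

# Discrimination on the held-out cohort, as in the abstract's aROC metric.
auc = float(roc_auc_score(y[200:], proba))
print(round(auc, 2))
```

    Because the GP outputs calibrated-looking probabilities rather than hard labels, the same `proba` vector also feeds calibration metrics such as the Brier score.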

  6. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part ...

  7. Modification of Gaussian mixture models for data classification in high energy physics

    Science.gov (United States)

    Štěpánek, Michal; Franc, Jiří; Kůs, Václav

    2015-01-01

    In high energy physics, we deal with the demanding task of signal separation from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role for the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the DØ Tevatron full Run II data (9.7 fb-1).
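
    The train-with-EM, classify-with-Bayes'-rule scheme can be sketched with scikit-learn's GaussianMixture on toy two-dimensional "event" features (the data and class shapes below are invented; the paper's weighting and genetic-initialization modifications are omitted).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Toy "signal" and "background" event features (2-D), stand-ins for the
# discriminating variables used in a top-quark analysis.
signal = rng.normal([1.5, 1.5], 0.8, size=(500, 2))
background = rng.normal([0.0, 0.0], 0.8, size=(500, 2))

# Training phase: fit one mixture per class with the EM algorithm.
gm_sig = GaussianMixture(n_components=2, random_state=0).fit(signal[:400])
gm_bkg = GaussianMixture(n_components=2, random_state=0).fit(background[:400])

# Testing phase: Bayes' rule with equal priors reduces to comparing the
# per-class log-likelihoods of each event.
test = np.vstack([signal[400:], background[400:]])
truth = np.array([1] * 100 + [0] * 100)
pred = (gm_sig.score_samples(test) > gm_bkg.score_samples(test)).astype(int)
accuracy = float((pred == truth).mean())
print(round(accuracy, 2))
```

    With unequal class priors (signal events are rare), the comparison would add log-prior terms before thresholding.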

  8. On moistening of ash particles in smoke plumes of industrial sources

    International Nuclear Information System (INIS)

    Geints, Yu.E.; Zemlyanov, A.A.

    1992-01-01

    Moistening of ash particles in the humid atmosphere is one of the main factors decreasing the accuracy of lidar measurements of the thickness of smoke emissions. A theoretical investigation of the growth of the water coating of smoke particles under different meteorological conditions within the emission zone has been carried out based on the Gaussian model of a smoke plume with a slanted axis. Numerical calculations have shown that, in the case of high initial moisture content of the emissions, a zone appears in the smoke plume near the source in which water vapor is supersaturated and the effect of particle moistening is significant. Seasonal trends and diurnal variations in temperature and humidity in the surface layer of the atmosphere also substantially affect moistening. The length of the zone of moistening of ash particles is maximal at night in winter under light-breeze conditions. The possibility of retrieving the initial mass concentration of the dry aerosol in the smoke plume has been shown based on lidar measurements of the scattering coefficient within the zone of maximum moistening of the smoke plume. 10 refs., 5 figs

  9. Plume dispersion and deposition processes of tracer gas and aerosols in short-distance experiments

    International Nuclear Information System (INIS)

    Taeschner, M.; Bunnenberg, C.

    1988-01-01

    Data used in this paper were extracted from field experiments carried out in France and Canada to study the pathway of elementary tritium after possible emissions from future fusion reactors and from short-range experiments with nutrient aerosols performed in a German forest in view of a therapy of damaged coniferous trees by foliar nutrition. Comparisons of dispersion parameters evaluated from the tritium field experiments show that in the case of the 30-min release the variations of the wind directions represent the dominant mechanism of lateral plume dispersion under unstable weather conditions. This corresponds with the observation that for the short 2-min emission the plume remains more concentrated during propagation, and the small lateral dispersion parameters typical for stable conditions have to be applied. The investigations on the dispersion of aerosol plumes into a forest boundary layer show that the Gaussian plume model can be modified by a windspeed factor to be valid for predictions on aerosol concentrations and depositions even in a structured topography like a forest

  10. Measuring Treasury Bond Portfolio Risk and Portfolio Optimization with a Non-Gaussian Multivariate Model

    Science.gov (United States)

    Dong, Yijun

    Research on measuring the risk of a bond portfolio and on bond portfolio optimization was previously relatively rare, because the risk factors of bond portfolios are not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even for highly rated U.S. Treasury bonds. Moreover, the risk factors of bond portfolios show fat-tailedness and asymmetry like the risk factors of equity portfolios. Therefore, advanced techniques are needed to measure and manage the risk of bond portfolios. In our paper, we first apply an autoregressive moving average generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distribution innovations to predict risk factors of U.S. Treasury bonds, and statistically demonstrate, based on goodness-of-fit tests, that the MNTS distribution has the ability to capture the properties of the risk factors. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming a normal tempered stable distribution are more realistic and reliable than those estimated by assuming a normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize the portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performance than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming a normal tempered stable distribution have improved performance in comparison to those obtained by assuming a normal distribution.
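
    The two risk measures compared in the abstract are easy to state concretely. A minimal historical estimator is sketched below; the fat-tailed Student-t returns stand in for MNTS innovations, which have no simple closed form and are not implemented here.

```python
import numpy as np

def var_avar(returns, alpha=0.99):
    """Historical VaR and AVaR (expected shortfall) at level alpha.

    VaR is the loss threshold exceeded with probability 1 - alpha;
    AVaR is the mean loss conditional on exceeding that threshold,
    so it always lies at or beyond VaR.
    """
    losses = -np.asarray(returns)              # losses are negated returns
    var = float(np.quantile(losses, alpha))
    avar = float(losses[losses >= var].mean())
    return var, avar

rng = np.random.default_rng(7)
# Fat-tailed toy daily returns (Student-t, 3 degrees of freedom) mimic the
# heavy tails that a Gaussian model understates.
returns = 0.01 * rng.standard_t(df=3, size=100_000)
var99, avar99 = var_avar(returns, 0.99)
print(var99 < avar99)
```

    Under a thin-tailed normal assumption both numbers shrink, which is precisely why the abstract finds Gaussian VaR/AVaR unrealistic in crisis periods.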

  11. AERMOD as a Gaussian dispersion model for planning tracer gas dispersion tests for landfill methane emission quantification

    DEFF Research Database (Denmark)

    Matacchiera, F.; Manes, C.; Beaven, R. P.

    2018-01-01

    ... that measurements are taken where the plumes of a released tracer-gas and landfill-gas are well-mixed. However, the distance at which full mixing of the gases occurs is generally unknown prior to any experimental campaign. To overcome this problem the present paper demonstrates that, for any specific TDM ...

  12. Puff-plume atmospheric deposition model for use at SRP in emergency-response situations

    International Nuclear Information System (INIS)

    Garrett, A.J.; Murphy, C.E. Jr.

    1981-05-01

    An atmospheric transport and diffusion model developed for real-time calculation of the location and concentration of toxic or radioactive materials during an accidental release was improved by including deposition calculations
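
For context, the classic steady-state Gaussian plume equation that puff-plume codes generalize can be sketched as follows; the source strength, wind speed, and dispersion parameters below are illustrative choices, not values from the SRP model.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with total
    ground reflection. Q: source strength (g/s), u: wind speed (m/s),
    H: effective release height (m), sigma_y/z: dispersion lengths (m)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: 10 g/s release, 4 m/s wind, 50 m stack, and
# dispersion lengths typical of neutral stability at ~1 km downwind.
c_axis = gaussian_plume(1000.0, 0.0, 0.0, Q=10.0, u=4.0, H=50.0,
                        sigma_y=80.0, sigma_z=40.0)
c_off = gaussian_plume(1000.0, 100.0, 0.0, Q=10.0, u=4.0, H=50.0,
                       sigma_y=80.0, sigma_z=40.0)
print(c_axis, c_off)   # centerline concentration exceeds off-axis
```

The ground-level concentration is largest on the plume centerline (y = 0) and symmetric in the crosswind coordinate.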

  13. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment - Task 4: Modeling - Final Report

    International Nuclear Information System (INIS)

    Robert C. Starr

    2005-01-01

    seven plumes at 24 DOE facilities were screened, and 14 plumes were selected for detailed examination. In the plumes selected for further study, spatial changes in the concentration of a conservative co-contaminant were used to compensate for the effects of mixing and temporal changes in TCE release from the contaminant source. Decline in TCE concentration along a flow path in excess of the co-contaminant concentration decline was attributed to cometabolic degradation. This study indicated that TCE was degraded in 9 of the 14 plumes examined, with first-order degradation half-lives ranging from about 1 to 12 years. TCE degradation in about two-thirds of the plumes examined suggests that cometabolism of TCE in aerobic groundwater is a common occurrence, in contrast to the conventional wisdom that TCE is recalcitrant in aerobic groundwater. The degradation half-life values calculated in this study are short enough that natural attenuation may be a viable remedy in many aerobic plumes. Computer modeling of groundwater flow and contaminant transport and degradation is frequently used to predict the evolution of groundwater plumes, and for evaluating natural attenuation and other remedial alternatives. An important aspect of a computer model is the mathematical approach for describing degradation kinetics. A common approach is to assume that degradation occurs as a first-order process. First-order kinetics are easily incorporated into transport models and require only a single value (a degradation half-life) to describe reaction kinetics. The use of first-order kinetics is justified in many cases because more elaborate kinetic equations often closely approximate first-order kinetics under typical field conditions. A previous modeling study successfully simulated the INL TCE plume using first-order degradation kinetics.
TCE cometabolism is the result of TCE reacting with microbial enzymes that were produced for other purposes, such as oxidizing a growth substrate to obtain
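
The first-order kinetics described above reduce to a single rate constant, k = ln 2 / t_half. A minimal sketch, using an assumed 6-year half-life taken from the middle of the reported 1-12 year range:

```python
import math

def decay(c0, half_life_years, t_years):
    """First-order decay: C(t) = C0 * exp(-k t), with k = ln 2 / t_half."""
    k = math.log(2) / half_life_years
    return c0 * math.exp(-k * t_years)

# With a 6-year half-life, a hypothetical 100 ug/L TCE concentration
# halves every 6 years of travel time along the flow path.
print(decay(100.0, 6.0, 6.0))    # -> 50.0
print(decay(100.0, 6.0, 30.0))   # five half-lives -> 3.125
```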

  14. Modeling the regional impact of ship emissions on NOx and ozone levels over the Eastern Atlantic and Western Europe using ship plume parameterization

    Directory of Open Access Journals (Sweden)

    P. Pisoft

    2010-07-01

    In general, regional and global chemistry transport models apply instantaneous mixing of emissions into the model's finest resolved scale. In the case of a concentrated source, this can result in erroneous calculation of the evolution of both primary and secondary chemical species. Several studies have discussed this issue in connection with emissions from ships and aircraft. In this study, we present an approach to deal with the non-linear effects during dispersion of NOx emissions from ships. It represents an adaptation of the original approach developed for aircraft NOx emissions, which uses an exhaust tracer to trace the amount of the emitted species in the plume and applies an effective reaction rate for the ozone production/destruction during the plume's dilution into the background air. In accordance with previous studies examining the impact of international shipping on the composition of the troposphere, we found that the contribution of ship-induced surface NOx to the total reaches 90% over the remote ocean and 10–30% near coastal regions. Due to ship emissions, surface ozone increases by up to 4–6 ppbv, a 10% contribution to the surface ozone budget. When applying the ship plume parameterization, we show that large-scale NOx decreases and the ship NOx contribution is reduced by up to 20–25%. A similar decrease was found in the case of O3. The plume parameterization suppressed the ship-induced ozone production by 15–30% over large areas of the studied region. To evaluate the presented parameterization, nitrogen monoxide measurements over the English Channel were compared with modeled values, and it was found that after activating the parameterization the model accuracy increases.

  15. Survival analysis, the infinite Gaussian mixture model, FDG-PET and non-imaging data in the prediction of progression from mild cognitive impairment

    OpenAIRE

    Li, Rui; Perneczky, Robert; Drzezga, Alexander; Kramer, Stefan; Initiative, for the Alzheimer's Disease Neuroimaging

    2015-01-01

    We present a method to discover interesting brain regions in [18F] fluorodeoxyglucose positron emission tomography (PET) scans, also showing the benefits when PET scans are used in combination with non-imaging variables. The discriminative brain regions facilitate a better understanding of Alzheimer's disease (AD) progression, and they can also be used for predicting conversion from mild cognitive impairment (MCI) to AD. A survival analysis (Cox regression) and infinite Gaussian mixture model (IGM...

  16. Effective leaf area index retrieving from terrestrial point cloud data: coupling computational geometry application and Gaussian mixture model clustering

    Science.gov (United States)

    Jin, S.; Tamura, M.; Susaki, J.

    2014-09-01

    Leaf area index (LAI) is one of the most important structural parameters in forestry studies, as it expresses the capacity of green vegetation to interact with solar illumination. Classically, LAI has been understood by treating the green canopy as an integration of horizontal leaf layers. Since the development of multi-angle remote sensing techniques, however, LAI must be considered with respect to the observation geometry, and the effective LAI formulates the leaf-light interaction more precisely. Retrieving LAI and effective LAI from remotely sensed data has therefore been a challenge over the past decades. Laser scanning can provide accurately positioned surface echoes at dense scan intervals, and density-based statistical algorithms for analysing the voluminous 3-D point data are one focus of laser scanning applications. Computational geometry also provides mature tools for point cloud data (PCD) processing and analysis. In this paper, the authors investigate the feasibility of a new application for retrieving the effective LAI of an isolated broadleaf tree. A simplified curvature is calculated for each point in order to remove non-photosynthetic tissues. The PCD are then discretized into voxels and clustered using a Gaussian mixture model, and the area of each cluster is calculated using computational geometry methods. To validate the application, an indoor plant was chosen for leaf area estimation; the correlation coefficient between calculation and measurement was 98.28 %. Finally, the effective LAI of the tree was calculated with 6 × 6 assumed observation directions.
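
The Gaussian-mixture clustering step can be sketched in one dimension with a hand-rolled EM loop. The paper clusters 3-D voxel data; the synthetic heights, component count, and initial values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for voxelised point-cloud features: two height clusters.
x = np.concatenate([rng.normal(2.0, 0.3, 400), rng.normal(5.0, 0.4, 600)])

# Two-component 1-D Gaussian mixture fitted by expectation-maximisation.
mu = np.array([1.0, 6.0])
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(200):
    # E-step: responsibilities of each component for each point.
    d = x[:, None] - mu
    p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk

print(np.sort(mu))   # component means recover the two cluster centres
```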

  17. Fully coupled modeling of radionuclide migration in a clayey rock disturbed by alkaline plume

    International Nuclear Information System (INIS)

    Pellegrni, D.; Windt, L. de; Lee, J.V.D.

    2002-03-01

    The disposal of radioactive wastes in clayey formations may require the use of large amounts of concrete and cement as a barrier to minimize corrosion of steel containers and radionuclide migration and for supporting drifts and disposal vaults. In this context, reactive transport modeling of the interactions between cement or concrete and the argillaceous host rock aims at estimating the evolution in time of the containment properties of the multi-barriers system. The objectives of the paper are to demonstrate that integrating radionuclides migration in the modeling of strongly coupled geochemical processes of cement-clay stone interactions is feasible and that it represents an efficient way to assess the sensitivity and modification of the classical Kd and solubility parameters with respect to the chemical evolutions. Two types of modeling are considered in the paper: i): calculation of intrinsic solubility limits and Kd values backing up on the results of modeling of cement/clay stone interactions (radionuclides are assumed to be present over the whole domain at any time whatever the scenario), ii) full mechanistic modeling which explicitly introduces radionuclides in the calculation with ad hoc assumptions on radionuclide inventory, canister failure, migration pathway, etc. The reactive transport code HYTEC, based on the geochemical code CHESS, is used to simulate both the cement-clay stone interaction processes and the radionuclide migration in 1-D and 2-D configurations. Convective/dispersive and diffuse transport can be simulated for solutes and colloids. A wide range of processes such as aqueous chemistry, redox, dissolution/precipitation, surface complexation and ion exchange can be modeled at equilibrium or with kinetic control. In addition, HYTEC is strongly coupled, i.e. the hydrology (flow and diffusion) may change when mineral precipitation or dissolution changes the local porosity. (authors)
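
As a simple illustration of how a Kd value feeds a transport model, the linear-sorption retardation factor can be computed as below; the bulk density and porosity are generic sandy values, not parameters from the HYTEC simulations.

```python
def retardation(kd_ml_per_g, bulk_density_g_cm3=1.6, porosity=0.3):
    """Linear-sorption retardation factor R = 1 + (rho_b / theta) * Kd.
    A solute with R ~ 9 migrates ~9x slower than the groundwater itself."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_per_g

# Hypothetical Kd of 1.5 mL/g in a sandy medium:
print(retardation(1.5))   # -> ~9: strong retardation of the radionuclide
```

A mechanistic model such as HYTEC effectively lets Kd (and hence R) evolve with the chemistry, which is exactly the sensitivity the paper sets out to quantify.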

  18. On the validity of a Fickian diffusion model for the spreading of liquid infiltration plumes in partially saturated heterogeneous media

    International Nuclear Information System (INIS)

    Pruess, K.

    1994-01-01

    Localized infiltration of aqueous and non-aqueous phase liquids (NAPLs) occurs in many circumstances. Examples include leaky underground pipelines and storage tanks, landfill and disposal sites, and surface spills. Because of ever-present heterogeneities on different scales, such infiltration plumes are expected to disperse transversally and longitudinally. This paper examines recent suggestions that liquid plumes are dispersed by medium heterogeneities in a manner that is analogous to Fickian diffusion. Numerical simulation experiments on liquid infiltration in heterogeneous media are performed to study the dispersive effects of small-scale heterogeneity. It is found that plume spreading indeed tends to be diffusive. Our results suggest that, as far as infiltration of liquids is concerned, broad classes of heterogeneous media behave as dispersive media with locally homogeneous (albeit anisotropic) permeability.
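
The Fickian-diffusion analogy implies that plume variance grows linearly in time (sigma^2 = 2Dt). A random-walk caricature of this behaviour, with arbitrary parameter choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# 20000 independent "parcels" taking Gaussian steps: a discrete stand-in
# for Fickian spreading with diffusivity D. Variance should double when
# the elapsed time doubles.
D, dt, steps, n = 0.5, 0.1, 1000, 20_000
x = np.zeros(n)
var_half = 0.0
for i in range(1, steps + 1):
    x += rng.normal(0.0, np.sqrt(2 * D * dt), n)
    if i == steps // 2:
        var_half = x.var()   # plume variance at half the total time
var_full = x.var()           # plume variance at the full time

print(var_full / var_half)   # ~2, the linear-in-time Fickian signature
```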

  19. Modeling crop residue burning experiments to evaluate smoke emissions and plume transport

    Science.gov (United States)

    Crop residue burning is a common land management practice that results in emissions of a variety of pollutants with negative health impacts. Modeling systems are used to estimate air quality impacts of crop residue burning to support retrospective regulatory assessments and also ...

  20. Modeling crop residue burning experiments to evaluate smoke emissions and plume transport

    Science.gov (United States)

    Luxi Zhou; Kirk R. Baker; Sergey L. Napelenok; George Pouliot; Robert Elleman; Susan M. O' Neill; Shawn P. Urbanski; David C. Wong

    2018-01-01

    Crop residue burning is a common land management practice that results in emissions of a variety of pollutants with negative health impacts. Modeling systems are used to estimate air quality impacts of crop residue burning to support retrospective regulatory assessments and also for forecasting purposes. Ground and airborne measurements from a recent field experiment...

  1. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  2. Lithosphere erosion atop mantle plumes

    Science.gov (United States)

    Agrusta, R.; Arcay, D.; Tommasi, A.

    2012-12-01

    Mantle plumes are traditionally proposed to play an important role in lithosphere erosion. Seismic images beneath Hawaii and Cape Verde show a lithosphere-asthenosphere boundary (LAB) up to 50 km shallower than in the surroundings. However, numerical models show that unless the plate is stationary the thermo-mechanical erosion of the lithosphere does not exceed 30 km. We use 2D petrological-thermo-mechanical numerical models, based on a finite-difference method on a staggered grid with a marker-in-cell method, to study the role of partial melting in the plume-lithosphere interaction. A homogeneous peridotite composition with a Newtonian temperature- and pressure-dependent viscosity is used to simulate both the plate and the convective mantle. A constant velocity, ranging from 5 to 12.5 cm/yr, is imposed at the top of the plate. Plumes are created by imposing a thermal anomaly of 150 to 350 K on a 50 km wide domain at the base of the model (700 km depth); the plate right above the thermal anomaly is 40 Myr old. Partial melting is modeled using batch-melting solidus and liquidus under anhydrous conditions. We model the progressive depletion of peridotite and its effect on partial melting by assuming that the melting degree only ever increases through time. Melt is accumulated until a porosity threshold is reached and the melt in excess is then extracted. The rheology of the partially molten peridotite is determined using a viscous constitutive relationship based on a contiguity model, which enables the effects of grain-scale melt distribution to be taken into account. Above a threshold of 1%, melt is instantaneously extracted. The density varies as a function of partial melting degree and extraction.
Besides, we analyze the kinematics of the plume as it impacts a moving plate, the dynamics of time-dependent small-scale convection (SSC) instabilities developing in the low-viscosity layer formed by spreading of hot plume material at the lithosphere base, and the resulting thermal
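
The batch-melting and progressive-depletion rules described above can be sketched as follows; the linear melting law and the solidus/liquidus temperatures are illustrative placeholders, not values from the models.

```python
def melt_fraction(T, T_solidus=1300.0, T_liquidus=1800.0):
    """Batch-melting degree between solidus and liquidus (linear sketch;
    temperatures in deg C are illustrative, not from the paper)."""
    F = (T - T_solidus) / (T_liquidus - T_solidus)
    return min(max(F, 0.0), 1.0)

def update_depletion(F_prev, T):
    """Progressive depletion: the melting degree only ever increases."""
    return max(F_prev, melt_fraction(T))

# A parcel that heats up and then cools retains its peak depletion.
F = 0.0
for T in [1250.0, 1400.0, 1550.0, 1450.0]:
    F = update_depletion(F, T)
print(F)   # -> 0.5, set by the hottest temperature reached (1550)
```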

  3. Gaussian curvature elasticity determined from global shape transformations and local stress distributions : a comparative study using the MARTINI model

    NARCIS (Netherlands)

    Hu, Mingyang; de Jong, Djurre H.; Marrink, Siewert J.; Deserno, Markus

    2013-01-01

    We calculate the Gaussian curvature modulus (k) over bar of a systematically coarse-grained (CG) one-component lipid membrane by applying the method recently proposed by Hu et al. [Biophys. J., 2012, 102, 1403] to the MARTINI representation of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC). We

  4. On Gaussian conditional independence structures

    Czech Academy of Sciences Publication Activity Database

    Lněnička, Radim; Matúš, František

    2007-01-01

    Roč. 43, č. 3 (2007), s. 327-342 ISSN 0023-5954 R&D Projects: GA AV ČR IAA100750603 Institutional research plan: CEZ:AV0Z10750506 Keywords : multivariate Gaussian distribution * positive definite matrices * determinants * gaussoids * covariance selection models * Markov perfectness Subject RIV: BA - General Mathematics Impact factor: 0.552, year: 2007

  5. On-current modeling of short-channel double-gate (DG) MOSFETs with a vertical Gaussian-like doping profile

    International Nuclear Information System (INIS)

    Dubey, Sarvesh; Jit, S.; Tiwari Pramod Kumar

    2013-01-01

    An analytic drain current model is presented for doped short-channel double-gate MOSFETs with a Gaussian-like doping profile in the vertical direction of the channel. The present model is valid in linear and saturation regions of device operation. The drain current variation with various device parameters has been demonstrated. The model is made more physical by incorporating the channel length modulation effect. Parameters like transconductance and drain conductance that are important in assessing the analog performance of the device have also been formulated. The model results are validated by numerical simulation results obtained by using the commercially available ATLAS™, a two dimensional device simulator from SILVACO. (semiconductor devices)
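
A vertical Gaussian-like doping profile of the kind assumed in such models can be written as N(y) = N_peak exp(-(y - y_peak)^2 / (2 sigma^2)); the numbers below are illustrative, not fitted to the paper's devices.

```python
import math

def doping(y_cm, n_peak=1e18, y_peak=0.0, sigma=5e-7):
    """Vertical Gaussian-like doping profile (cm^-3):
    N(y) = N_peak * exp(-(y - y_peak)^2 / (2 sigma^2)).
    Peak concentration, peak position, and straggle are hypothetical."""
    return n_peak * math.exp(-((y_cm - y_peak) ** 2) / (2 * sigma ** 2))

print(doping(0.0))                  # peak value at y = y_peak
print(doping(5e-7) / doping(0.0))   # one straggle away: exp(-0.5) ~ 0.607
```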

  6. Implementation of a micro-physical scheme for warm clouds in the meteorological model 'MERCURE': Application to cooling tower plumes and to orographic precipitation

    International Nuclear Information System (INIS)

    Bouzereau, Emmanuel

    2004-01-01

    A two-moment semi-spectral warm micro-physical scheme has been implemented in the meteorological model 'MERCURE'. A new formulation of the buoyancy flux is proposed, which is coherent with the corrigendum of Mellor (1977) but differs from Bougeault (1981). The non-precipitating cloud microphysics is validated by comparing numerical simulations of fifteen cases of cooling tower plumes with data from a 1980 measurement campaign at Bugey. Satisfactory results are obtained for the plume shapes, the temperature and vertical velocity fields, and the droplet spectra, although the liquid water contents tend to be overestimated. The precipitating cloud microphysics is tested by reproducing the academic cases of orographic precipitation of Chaumerliac et al. (1987) and Richard and Chaumerliac (1989). The simulations allow the action of the different micro-physical terms to be checked. (author) [fr

  7. Linking network usage patterns to traffic Gaussianity fit

    NARCIS (Netherlands)

    de Oliveira Schmidt, R.; Sadre, R.; Melnikov, Nikolay; Schönwälder, Jürgen; Pras, Aiko

    Gaussian traffic models are widely used in the domain of network traffic modeling. The central assumption is that traffic aggregates are Gaussian distributed. Due to its importance, the Gaussian character of network traffic has been extensively assessed by researchers in the past years. In 2001,

  8. Gaussian processes for machine learning.

    Science.gov (United States)

    Seeger, Matthias

    2004-04-01

    Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countable or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to point out precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
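
The standard GP regression posterior (squared-exponential kernel, Cholesky solve) can be sketched in a few lines; the kernel and noise settings below are arbitrary choices for the toy data, not recommendations.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(x,x') = sf^2 exp(-(x-x')^2 / (2 ell^2))."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Noisy observations of a smooth function.
rng = np.random.default_rng(3)
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X) + rng.normal(0.0, 0.05, X.size)

# Posterior mean and variance at test inputs via a Cholesky factorisation.
sn2 = 0.05**2                       # observation noise variance
Xs = np.array([1.0, 2.5, 4.0])      # test inputs
K = rbf(X, X) + sn2 * np.eye(X.size)
Ks = rbf(Xs, X)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks @ alpha
v = np.linalg.solve(L, Ks.T)
var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0)

print(mean)   # close to sin([1.0, 2.5, 4.0])
```

The posterior variance `var` is the "valid estimate of uncertainty" the abstract refers to: it shrinks near the training inputs and grows away from them.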

  9. Holographic non-Gaussianity

    International Nuclear Information System (INIS)

    McFadden, Paul; Skenderis, Kostas

    2011-01-01

    We investigate the non-Gaussianity of primordial cosmological perturbations within our recently proposed holographic description of inflationary universes. We derive a holographic formula that determines the bispectrum of cosmological curvature perturbations in terms of correlation functions of a holographically dual three-dimensional non-gravitational quantum field theory (QFT). This allows us to compute the primordial bispectrum for a universe which started in a non-geometric holographic phase, using perturbative QFT calculations. Strikingly, for a class of models specified by a three-dimensional super-renormalisable QFT, the primordial bispectrum is of exactly the factorisable equilateral form with f_NL^equil = 5/36, irrespective of the details of the dual QFT. A by-product of this investigation is a holographic formula for the three-point function of the trace of the stress-energy tensor along general holographic RG flows, which should have applications outside the remit of this work

  10. Gaussian entanglement revisited

    Science.gov (United States)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.
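
The PPT test discussed above can be sketched numerically for a two-mode Gaussian state: partially transpose the covariance matrix (flip the sign of one mode's momentum) and check whether its smallest symplectic eigenvalue drops below 1 (hbar = 1 convention, vacuum covariance = identity). The two-mode squeezed vacuum below is a standard textbook example, not a state from the paper.

```python
import numpy as np

def symplectic_eigs(cov):
    """Symplectic eigenvalues of a covariance matrix (hbar=1, vacuum=I):
    the moduli of the eigenvalues of i*Omega*cov, one per mode."""
    n = cov.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.abs(np.linalg.eigvals(1j * omega @ cov))
    return np.sort(ev)[::2]          # eigenvalues come in +/- pairs

def tmsv_cov(r):
    """Two-mode squeezed vacuum covariance matrix, ordering (x1,p1,x2,p2)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    Z = np.diag([1.0, -1.0])
    return np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])

r = 1.0
cov = tmsv_cov(r)
P = np.diag([1.0, 1.0, 1.0, -1.0])   # partial transpose: p2 -> -p2
nu_pt = symplectic_eigs(P @ cov @ P)

print(nu_pt.min())   # e^{-2r} ~ 0.135 < 1: PPT is violated -> entangled
```

For 1 versus n modes this PPT violation is equivalent to entanglement, which is case (i) recovered in the paper.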

  11. Groundwater modeling of source terms and contaminant plumes for DOE low-level waste performance assessments

    International Nuclear Information System (INIS)

    McDowell-Boyer, L.M.; Wilson, J.E.

    1994-01-01

    Under US Department of Energy (DOE) Order 5820.2A, all sites within the DOE complex must analyze the performance of planned radioactive waste disposal facilities before disposal takes place through the radiological performance assessment process. These assessments consider both exposures to the public from radionuclides potentially released from disposal facilities and protection of groundwater resources. Compliance with requirements for groundwater protection is often the most difficult to demonstrate as these requirements are generally more restrictive than those for other pathways. Modeling of subsurface unsaturated and saturated flow and transport was conducted for two such assessments for the Savannah River site. The computer code PORFLOW was used to evaluate release and transport of radionuclides from different types of disposal unit configurations: vault disposal and trench disposal. The effectiveness of engineered barriers was evaluated in terms of compliance with groundwater protection requirements. The findings suggest that, due to the limited lifetime of engineered barriers, overdesign of facilities for long-lived radionuclides is likely to occur if compliance must be realized for thousands of years

  12. An Updated Model for the Anomalous Resistivity of LNAPL Plumes in Sandy Environments

    Science.gov (United States)

    Sauck, W. A.; Atekwana, E. A.; Werkema, D. D.

    2006-05-01

    Anomalously low resistivities have been observed at some sites contaminated by light non-aqueous phase liquid (LNAPL). The model that has been used to explain this phenomenon was published in 2000. This working hypothesis invokes both physical mixing and bacterial action to explain the low resistivities near the base of the vadose zone and the upper part of the aquifer. The hydrocarbon-degrading bacteria (of which there are numerous species found in soils) produce organic acids and carbonic acids. The acidic pore waters dissolve readily soluble ions from the native soil grains and grain coatings, to produce a leachate high in total dissolved solids. The free product LNAPL is initially a wetting phase, although generally not to more than 50% extent, and seasonal water table fluctuations mix the hydrocarbons vertically through the upper water saturated zone and transition zone. This update introduces several new aspects of the conductive model. The first is that, in addition to the acids being produced by the oil-degrading bacteria, they also produce surfactants. Surfactants act similarly to detergents in detaching the oil phase from the solid substrate, and forming an emulsion of oil droplets within the water. This has helped to explain how continuous, high-TDS capillary paths can develop and pass vertically through what appears to be a substantial free product layer, thus providing easy passage for electrical current during electrical resistivity measurements. Further, it has also been shown that the addition of organic acids and biosurfactants to pore fluids can directly contribute to the conductivity of the pore fluids. A second development is that large-diameter column experiments were conducted for nearly two years (8 columns for 4 experiments). The columns had a vertical row of electrodes for resistivity measurements, ports for extracting water samples with a syringe, and sample tubes for extracting soil samples.
Water samples were used for chemical analysis
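
The classic Archie relation for clean sands shows directly why high-TDS (low-resistivity) pore water lowers bulk resistivity, which is the core of the conductive model above; the coefficients below are textbook defaults, not values from this study.

```python
def archie_resistivity(rho_w, porosity=0.3, sw=1.0, a=1.0, m=2.0, n=2.0):
    """Archie's law for clean sands:
    rho_bulk = a * phi^-m * Sw^-n * rho_w (ohm-m).
    a, m, n are generic textbook coefficients, phi and Sw illustrative."""
    return a * porosity**(-m) * sw**(-n) * rho_w

# Fresh pore water vs. a high-TDS leachate from biodegradation byproducts:
fresh = archie_resistivity(20.0)    # ~222 ohm-m
leachate = archie_resistivity(2.0)  # ~22 ohm-m, an order of magnitude lower
print(fresh, leachate)
```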

  13. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
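
In the special case of a Gaussian (rather than generalized Gamma) prior on the coefficients, the MAP estimate under Gaussian noise reduces to linear (Wiener-style) shrinkage. This is a simpler stand-in for the paper's estimator, and all variances below are synthetic.

```python
import numpy as np

def map_shrink(y, sig_signal, sig_noise):
    """MAP estimate for y = s + n with s ~ N(0, sig_signal^2) and
    n ~ N(0, sig_noise^2): shrink y by sig_s^2 / (sig_s^2 + sig_n^2)."""
    g = sig_signal**2 / (sig_signal**2 + sig_noise**2)
    return g * y

rng = np.random.default_rng(4)
s = rng.normal(0.0, 2.0, 50_000)        # "clean" wavelet coefficients
y = s + rng.normal(0.0, 1.0, 50_000)    # Gaussian-noise observations
s_hat = map_shrink(y, 2.0, 1.0)

mse_noisy = np.mean((y - s)**2)
mse_map = np.mean((s_hat - s)**2)
print(mse_noisy, mse_map)   # shrinkage lowers the mean squared error
```

The theoretical MSE of this estimator is sig_s^2 * sig_n^2 / (sig_s^2 + sig_n^2) = 0.8 here, versus 1.0 for the raw noisy coefficients; heavier-tailed priors such as the generalized Gamma replace the fixed gain with a data-dependent one.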

  14. Measurements on cooling tower plumes. Pt. 3

    International Nuclear Information System (INIS)

    Fortak, H.

    1975-11-01

    In this paper an extended field experiment is described in which cooling tower plumes were investigated by means of three-dimensional in situ measurements. The goal of this program was to obtain input data for numerical models of cooling tower plumes. Data for testing or developing assumptions for sub-grid parametrizations were of special interest. Utilizing modern systems for high-resolution aerology and small aircraft, four measuring campaigns were conducted: two campaigns (1974) at the cooling towers of the RWE power station at Neurath and also two (1975) at the single cooling tower of the RWE power station at Meppen. Because of the broad spectrum of weather situations, it can be assumed that the results are representative with regard to the interrel