State of the art atmospheric dispersion modelling. Should the Gaussian plume model still be used?
Energy Technology Data Exchange (ETDEWEB)
Richter, Cornelia [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)]
2016-11-15
For regulatory purposes with respect to licensing and supervision of airborne releases from nuclear installations, the Gaussian plume model is still in use in Germany. However, for complex situations the Gaussian plume model is to be replaced by a Lagrangian particle model. The new EU basic safety standards for protection against the dangers arising from exposure to ionising radiation (EU BSS) [1] now ask for a realistic assessment of doses to members of the public from authorised practices. This call for a realistic assessment raises the question of whether dispersion modelling with the Gaussian plume model is still an adequate approach or whether the use of more complex models is mandatory.
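Several of the entries below turn on the same closed-form expression, so a minimal sketch of the standard ground-reflected Gaussian plume equation may be useful for orientation; the emission rate, stack height and dispersion parameters are illustrative assumptions, not values from any of the cited studies.

```python
import math

def gaussian_plume_conc(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    Q: emission rate (g/s); u: transport wind speed (m/s);
    H: effective release height (m); sigma_y, sigma_z: horizontal and
    vertical dispersion parameters (m) at the downwind distance of
    interest; y, z: crosswind and vertical receptor coordinates (m).
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative ground-level centreline value roughly 1 km downwind of a
# 50 m stack in near-neutral conditions.
c = gaussian_plume_conc(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                        sigma_y=80.0, sigma_z=40.0)
print(f"{c:.2e} g/m^3")
```

The second exponential in the vertical term is the image source that enforces zero flux through the ground, which is why ground-level concentrations do not simply vanish as z goes to 0.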
Gaussian versus top-hat profile assumptions in integral plume models
Davidson, G. A.
Numerous integral models describing the behaviour of buoyant plumes released into stratified crossflows have been presented in the literature. One of the differences between these models is the form assumed for the self-similar profile: some models assume a top-hat form while others assume a Gaussian. The differences between these two approaches are evaluated by (a) comparing the governing equations on which Gaussian and top-hat models are based; (b) comparing some typical plume predictions generated by each type of model over a range of model parameters. It is shown that, while the profile assumption does lead to differences in the equations which govern plume variables, the effects of these differences on actual plume predictions are small over the range of parameters of practical interest. Since the predictions of Gaussian and top-hat models are essentially equivalent, it can thus be concluded that the additional physical information incorporated into a Gaussian formulation plays only a minor role in mean plume behaviour, and that the top-hat approach, which requires the numerical solution of a simpler set of equations, is adequate for most situations where an integral approach would be used.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique for obtaining the dispersion parameters in order to achieve a quick estimate of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the need to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASCs), 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over flat horizontal terrain with both low- and high-roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up, and optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
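The parameter-estimation step described above can be illustrated with a much simpler stand-in for the paper's genetic algorithm: a log-log least-squares fit of the usual power-law form of a dispersion parameter to reference plume widths. The numbers below are invented for illustration and are not the paper's CFD data.

```python
import numpy as np

# Reference plume widths (m) at several downwind distances (m); an
# illustrative stand-in for CFD-derived dispersion profiles.
x = np.array([200.0, 500.0, 1000.0, 2000.0, 5000.0])
sigma_y_ref = np.array([22.0, 48.0, 85.0, 150.0, 310.0])

# Fit sigma_y = a * x**b by linear least squares in log-log space
# (the paper optimises with a GA; the fitted-parameter idea is the same).
b, log_a = np.polyfit(np.log(x), np.log(sigma_y_ref), 1)
a = np.exp(log_a)
print(f"sigma_y ~ {a:.3f} * x**{b:.3f}")
```

A GA becomes worthwhile when the objective is not log-linear, e.g. when several stability-class parameters are fitted jointly against full concentration fields rather than widths.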
Pullen, Julie; Boris, Jay P.; Young, Theodore; Patnaik, Gopal; Iselin, John
This paper quantitatively assesses the spatial extent of modeled contaminated regions resulting from hypothetical airborne agent releases in major urban areas. We compare statistics from a release at several different sites in Washington DC and Chicago using a Gaussian puff model (SCIPUFF, version 1.3, with urban parameter settings) and a building-resolving computational fluid dynamics (CFD) model (FAST3D-CT). For a neutrally buoyant gas source term with urban meteorology, we compare near-surface dosage values within several kilometers of the release during the first half hour, before the gas is dispersed beyond the critical lethal level. In particular, using "fine-grain" point-wise statistics such as fractional bias, spatial correlations and the percentage of points lying within a factor of two, we find that dosage distributions from the Gaussian puff and CFD model share few features in common. Yet the "coarse-grain" statistic that compares areas contained within a given contour level reveals that the differences between the models are less pronounced. Most significant among these distinctions is the rapid lofting, leading to enhanced vertical mixing, and projection downwind of the contaminant by the interaction of the winds with the urban landscape in the CFD model. This model-to-model discrepancy is partially ameliorated by supplying the puff model with more detailed information about the urban boundary layer that evolves on the CFD grid. While improving the correspondence of the models when using the "coarse-grain" statistic, the additional information does not lead to quite as substantial an overall agreement between the models when the "fine-grain" statistics are compared. The taller, denser and more variable building landscape of Chicago created increased sensitivity to release site and led to greater divergence in FAST3D-CT and SCIPUFF results relative to the flatter, sparser and more uniform urban morphology of Washington DC.
Energy Technology Data Exchange (ETDEWEB)
Reyes L, C.; Munoz Ledo, C. R. [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)]
1992-12-31
The Gaussian plume model is an analytical application to simulate the dispersion of ground-level SO2 concentrations as a function of emission changes at the point sources, as well as the dispersion of the pollutant across the wind rose, when the necessary parameters are supplied. The model was implemented on a personal computer and its results are generated in text form.
Directory of Open Access Journals (Sweden)
A. A. Ramadan
2008-01-01
In Kuwait, most power stations use fuel oil as the prime source of energy. The sulphur content (S%) of the fuel used, as well as other factors, has a direct impact on the ground-level concentration of sulphur dioxide (SO2) released by power stations into the atmosphere. The SO2 ground-level concentration has to meet the environmental standards set by the Kuwait Environment Public Authority (KEPA). In this communication we present results obtained using the Industrial Sources Complex Short Term (ISCST3) model to calculate the SO2 concentration resulting from existing power stations in Kuwait, assuming zero background SO2 concentration and entire reliance on heavy fuel oil. Scenarios of 1, 2, 3 and 4 S% were simulated for three emission-cycle cases. The computed annual SO2 concentrations were always below the KEPA standards for all scenarios. The daily SO2 concentrations were within the KEPA standards for 1 S% but violated them for higher S%. In general, the concentrations obtained from the combined hourly and seasonal cycle were the lowest and those obtained from the no-cycle case were the highest. The comparison between the results of the three cycles revealed that the violation times cannot be attributed solely to the increase in emissions; the meteorological conditions have to be taken into consideration.
Energy Technology Data Exchange (ETDEWEB)
Konopka, P.; Schlager, H.; Schulte, P.; Schumann, U.; Ziereis, H. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Oberpfaffenhofen (Germany). Inst. fuer Physik der Atmosphaere; Hagen, D.; Whitefield, P. [Missouri Univ., Rolla, MO (United States). Lab. for Cloud and Aerosol Science
1997-12-31
Focussed aircraft measurements including NO, NO2, O3, and aerosols (CN) have been carried out over the eastern North Atlantic as part of the POLINAT (Pollution from Aircraft Emissions in the North Atlantic Flight Corridor) project to search for small- and large-scale signals of air traffic emissions in the corridor region. Here, the experimental data measured at cruising altitudes on November 6, 1994, close to peak traffic hours, are considered. Observed peak concentrations in small-scale NOx spikes exceed the background level of about 50 pptv by up to two orders of magnitude. The measured NOx concentration field is compared with simulations obtained with a plume dispersion model using collected air traffic data and wind measurements. Additionally, the measured and calculated NO/NOx ratios are considered. The comparison with the model shows that the observed (multiple) peaks can be understood as a superposition of several aircraft plumes with ages of up to 3 hours. (author) 12 refs.
Modelling oil plumes from subsurface spills.
Lardner, Robin; Zodiatis, George
2017-07-11
An oil plume model to simulate the behavior of oil from spills located at any given depth below the sea surface is presented, following major modifications to a plume model developed earlier by Malačič (2001) and drawing on ideas in a paper by Yapa and Zheng (1997). The paper presents improvements to those models and numerical testing of the various parameters in the plume model. The plume model described here is one of the numerous modules of the well-established MEDSLIK oil spill model. The deep-blowout scenario of the MEDEXPOL 2013 oil spill modelling exercise, organized by REMPEC, has been run using the improved oil plume module of the MEDSLIK model, and an inter-comparison with results for an oil spill source at the sea surface is discussed.
Numerical modeling of mantle plume diffusion
Krupsky, D.; Ismail-Zadeh, A.
2004-12-01
To clarify the influence of heat diffusion on mantle plume evolution, we develop a two-dimensional numerical model of plume diffusion together with an efficient numerical algorithm and code to compute it. The numerical approach is based on the finite-difference method and a modified splitting algorithm. We consider both Neumann and Dirichlet conditions at the model boundaries. The thermal diffusivity depends on pressure in the model. Our results show that the plume disappears from the bottom up (the plume tail first and its head later) because of the mantle plume geometry (a thin tail and a wide head) and the higher heat conductivity in the lower mantle. We also study the effect of a lateral mantle flow, associated with plate motion, on the distortion of the diffusing mantle plume. A number of mantle plumes recently identified by seismic tomography seem to disappear in the mid-mantle. We explain this disappearance as the effect of heat diffusion on the evolution of the mantle plume.
GPstuff: Bayesian Modeling with Gaussian Processes
Vanhatalo, J.; Riihimaki, J.; Hartikainen, J.; Jylänki, P.P.; Tolvanen, V.; Vehtari, A.
2013-01-01
The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.
Gaussian mixture model of heart rate variability.
Directory of Open Access Journals (Sweden)
Tommaso Costa
Heart rate variability (HRV) is an important measure of the sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons were also made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
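The modelling step in this entry, fitting a small Gaussian mixture to a one-dimensional signal, can be sketched with plain EM. The RR-interval data below are synthetic stand-ins (three regimes with invented means and spreads), not the paper's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for RR-interval data (s): three overlapping regimes.
rr = np.concatenate([rng.normal(0.80, 0.03, 400),
                     rng.normal(0.95, 0.05, 300),
                     rng.normal(1.15, 0.04, 300)])

def fit_gmm_1d(x, k=3, n_iter=300):
    """Plain EM for a k-component one-dimensional Gaussian mixture."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each x
        p = w * np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return w, mu, var

w, mu, var = fit_gmm_1d(rr)
print(np.round(np.sort(mu), 2))
```

With k fixed at 3, as the abstract suggests is sufficient, the fitted means and weights give the "straightforward interpretation" directly: each component summarises one regime of the variability.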
Statistical Compressed Sensing of Gaussian Mixture Models
Yu, Guoshen
2011-01-01
A novel framework of compressed sensing, namely statistical compressed sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, Gaussian or Bernoulli sensing matrices of O(k) measurements suffice, considerably fewer than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension. With an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...
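The linear-filtering decoder mentioned in this abstract is just the Gaussian conditional mean, so it can be sketched in a few lines. The covariance model, dimensions and random seed below are illustrative assumptions, not the paper's learned models.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 64, 8                       # signal dimension, number of measurements

# Illustrative Gaussian signal model: random orthogonal eigenbasis with a
# fast-decaying spectrum (standing in for a learned covariance).
eigvals = 1.0 / np.arange(1, N + 1)**2
U = np.linalg.qr(rng.standard_normal((N, N)))[0]
Sigma = U @ np.diag(eigvals) @ U.T

A = rng.standard_normal((k, N)) / np.sqrt(k)         # Gaussian sensing matrix
x = U @ (np.sqrt(eigvals) * rng.standard_normal(N))  # signal drawn from model
y = A @ x                                            # k compressed measurements

# Optimal decoder for a Gaussian signal model is a linear filter
# (noiseless case): x_hat = Sigma A^T (A Sigma A^T)^{-1} y.
x_hat = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, y)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The contrast with conventional CS is visible in the decoder: one k-by-k linear solve replaces an iterative pursuit over sparse supports.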
Wang, I. T.
A general method for determining the effective transport wind speed, ū, in the Gaussian plume equation is discussed. Physical arguments are given for using the generalized ū instead of the often-adopted release-level wind speed with the plume diffusion equation. Simple analytical expressions for ū applicable to low-level point releases and a wide range of atmospheric conditions are developed. A non-linear plume kinematic equation is derived using these expressions. Crosswind-integrated SF6 concentration data from the 1983 PNL tracer experiment are used to evaluate the proposed analytical procedures along with the usual approach of using the release-level wind speed. Results of the evaluation are briefly discussed.
Modelling of aerosol processes in plumes
Energy Technology Data Exchange (ETDEWEB)
Lazaridis, M.; Isukapalli, S.S.; Georgopoulos, P.G. [Norwegian Institute of Air Research, Kjeller (Norway)]
2001-07-01
A modelling platform for studying photochemical gaseous- and aerosol-phase processes from localized (e.g., point) sources is presented. The approach employs a reactive plume model which extends the regulatory model RPM-IV by incorporating aerosol processes and heterogeneous chemistry. The physics and chemistry of the elemental carbon, organic carbon, sulfate, nitrate and ammonium content of aerosols are treated and attributed to the PM size distribution. A modified version of the Carbon Bond IV chemical mechanism is included to model the formation of organic aerosol. The aerosol dynamics modelled include nucleation, condensation, dry deposition and gas/particle partitioning of organic matter. The model is first applied to a number of case studies involving emissions from point sources and sulfate particle formation in plumes. Model calculations show that homogeneous nucleation is an efficient process for new particle formation in plumes, in agreement with previous field studies and theoretical predictions. In addition, the model is compared with field data from power plant plumes, with satisfactory predictions for gaseous species and total sulfate mass measurements. Finally, the plume model is applied to study secondary organic matter formation from various emission categories such as vehicles and the oil production sector.
MULTI-SCALE GAUSSIAN PROCESSES MODEL
Institute of Scientific and Technical Information of China (English)
Zhou Yatong; Zhang Taiyi; Li Xiaohe
2006-01-01
A novel model named Multi-scale Gaussian Processes (MGP) is proposed. Motivated by the idea of multi-scale representations in wavelet theory, in the new model a Gaussian process is represented at each scale by a linear basis composed of a scale function and its translations. The distribution of the targets of the given samples can then be obtained at different scales. Compared with the standard Gaussian Processes (GP) model, the MGP model can control its complexity conveniently just by adjusting the scale parameter, and so can trade off generalization ability against empirical risk rapidly. Experiments verify the feasibility of the MGP model and show that its performance is superior to the GP model if appropriate scales are chosen.
Modeling text with generalizable Gaussian mixtures
DEFF Research Database (Denmark)
Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas
2000-01-01
We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics for choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR).
Non-Gaussianity of Racetrack Inflation Models
Institute of Scientific and Technical Information of China (English)
SUN Cheng-Yi; ZHANG De-Hai
2007-01-01
In this paper, we use the result in [C.Y. Sun and D.H. Zhang, arXiv:astro-ph/0510709] to calculate the non-Gaussianity of the racetrack models in [J.J. Blanco-Pillado, et al., JHEP 0411 (2004) 063; arXiv:hep-th/0406230] and [J.J. Blanco-Pillado, et al., arXiv:hep-th/0603129]. The two models give different non-Gaussianities, both of which are reasonable. However, we find that, for multi-field inflationary models with a non-trivial metric on the field space, the slow-roll condition cannot guarantee small non-Gaussianities.
Review of Gaussian diffusion-deposition models
Energy Technology Data Exchange (ETDEWEB)
Horst, T.W.
1979-01-01
The assumptions and predictions of several Gaussian diffusion-deposition models are compared. A simple correction to the Chamberlain source depletion model is shown to predict ground-level airborne concentrations and dry deposition fluxes in close agreement with the exact solution of Horst.
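The source-depletion idea behind the Chamberlain model compared above can be sketched directly: the plume strength Q(x) is reduced along-wind by the dry-deposition flux at the ground under the plume. The deposition velocity, release height and sigma_z growth law below are illustrative assumptions, not values from the review.

```python
import numpy as np

def source_depletion(Q0, u, vd, H, x, sigma_z):
    """Chamberlain-style source depletion for an elevated release.

    Q0: source strength; u: wind speed (m/s); vd: deposition velocity
    (m/s); H: release height (m); x, sigma_z: downwind grid (m) and the
    vertical dispersion parameter evaluated on it.
    """
    Q = np.empty_like(x)
    Q[0] = Q0
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        # Crosswind-integrated ground-level concentration per unit Q
        ground = (np.sqrt(2.0 / np.pi) / (u * sigma_z[i])
                  * np.exp(-H**2 / (2.0 * sigma_z[i]**2)))
        Q[i] = Q[i - 1] * (1.0 - vd * ground * dx)   # deplete the source
    return Q

x = np.linspace(1.0, 5000.0, 500)
sigma_z = 0.1 * x**0.8          # illustrative power-law growth
Q = source_depletion(Q0=1.0, u=5.0, vd=0.01, H=50.0, x=x, sigma_z=sigma_z)
print(f"fraction remaining at 5 km: {Q[-1]:.3f}")
```

The known weakness of this scheme, which surface-depletion corrections address, is that it removes mass uniformly from the whole vertical profile rather than from near the ground where deposition actually occurs.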
A collisionless plasma thruster plume expansion model
Merino, Mario; Cichocki, Filippo; Ahedo, Eduardo
2015-06-01
A two-fluid model of the unmagnetized, collisionless far-region expansion of the plasma plume for gridded ion thrusters and Hall effect thrusters is presented. The model is integrated into two semi-analytical solutions valid in the hypersonic case. These solutions are discussed and compared against the results from the (exact) method of characteristics; the relative errors in density and velocity increase slowly axially and radially and are of the order of 10⁻² to 10⁻³ in the cases studied. The plasma density, ion flux and ambipolar electric field are investigated. A sensitivity analysis of the problem parameters and initial conditions is carried out in order to characterize the far-plume divergence angle in the range of interest for space electric propulsion. A qualitative discussion of the physics of the secondary plasma plume is also provided.
Modeling the Enceladus plume-plasma interaction
Fleshman, B L; Bagenal, F
2010-01-01
We investigate the chemical interaction between Saturn's corotating plasma and Enceladus' volcanic plumes. We evolve plasma as it passes through a prescribed H2O plume using a physical chemistry model adapted for water-group reactions. The flow field is assumed to be that of a plasma around an electrically-conducting obstacle centered on Enceladus and aligned with Saturn's magnetic field, consistent with Cassini magnetometer data. We explore the effects on the physical chemistry due to: (1) a small population of hot electrons; (2) a plasma flow decelerated in response to the pickup of fresh ions; (3) the source rate of neutral H2O. The model confirms that charge exchange dominates the local chemistry and that H3O+ dominates the water-group composition downstream of the Enceladus plumes. We also find that the amount of fresh pickup ions depends heavily on both the neutral source strength and on the presence of a persistent population of hot electrons.
Simple model of a cooling tower plume
Cizek, Jan; Nozicka, Jiri
2016-06-01
This article discusses possibilities in the area of modelling the so-called cooling tower plume that emerges from operating evaporative cooling systems. In contrast to recent publications, this text focuses on the possibilities of a simplified analytic description of the whole problem, where this description shall, in the future, form the basis of calculation algorithms enabling simulation of the efficiency of systems that reduce the cooling tower plume. The procedure is based on the application of basic formulas for the calculation of the velocity and concentration fields in the area above the cooling tower. These calculations are then used to determine the form and the total volume of the plume. Although this approach does not offer exact results, it can provide a basic understanding of the impact of the individual quantities relating to this problem.
A hybrid plume model for local-scale dispersion
Energy Technology Data Exchange (ETDEWEB)
Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.
1997-12-31
The report describes the contribution of the Finnish Meteorological Institute to the project 'Dispersion from Strongly Buoyant Sources', under the 'Environment' programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. In this study only the 'passive plume' regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. A new model for the vertical eddy diffusivity (K_z), which is a continuous function of height across the various atmospheric scaling regions, is also presented. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE; the numerical deviations from these analytic solutions were less than two per cent over the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.
Directory of Open Access Journals (Sweden)
Greg Yarwood
2011-08-01
Multi-pollutant chemical transport models (CTMs) are routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, they are unable to correctly represent processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources, because of their relatively coarse horizontal resolution. Several different approaches have been used to address this limitation, such as using fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these various approaches to resolving sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling, from its initial applications for ozone modeling in the Urban Airshed Model (UAM) in the early 1980s using a relatively simple plume model, to more sophisticated and state-of-the-science plume models with a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of typical results from PinG modeling for a variety of applications, discuss the implications of PinG for model predictions of source attribution, and discuss possible future developments and applications of PinG modeling.
The Supervised Learning Gaussian Mixture Model
Institute of Scientific and Technical Information of China (English)
马继涌; 高文
1998-01-01
The traditional Gaussian Mixture Model (GMM) for pattern recognition is an unsupervised learning method. The parameters in the model are derived only from the training samples of one class, without taking into account the sample distributions of other classes; hence, its recognition accuracy is sometimes not ideal. This paper introduces an approach for estimating the parameters of a GMM in a supervised way. The Supervised Learning Gaussian Mixture Model (SLGMM) improves the recognition accuracy of the GMM. An experimental example has shown its effectiveness. The experimental results show that the recognition accuracy obtained by the approach is higher than those obtained by the Vector Quantization (VQ) approach, the Radial Basis Function (RBF) network model, the Learning Vector Quantization (LVQ) approach and the GMM. In addition, the training time of the approach is less than that of the Multilayer Perceptron (MLP).
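The abstract does not spell out the SLGMM update rule, but the unsupervised baseline it contrasts against can be sketched: fit a class-conditional Gaussian density per class from that class's samples alone (a one-component GMM, for brevity), then classify by maximum likelihood. Data and class means below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two illustrative classes in 2-D (stand-ins for real feature vectors).
X0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(200, 2))

def fit_gaussian(X):
    """Per-class fit uses only that class's samples (the GMM baseline)."""
    return X.mean(axis=0), np.cov(X.T)

def log_lik(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

params = [fit_gaussian(X0), fit_gaussian(X1)]

def classify(x):
    """Maximum-likelihood decision over the class-conditional densities."""
    return int(np.argmax([log_lik(x, mu, cov) for mu, cov in params]))

print(classify(np.array([0.1, -0.2])), classify(np.array([1.9, 2.2])))
```

The supervised variant described in the abstract would additionally adjust each class's parameters using the samples (or decision errors) of the other classes, rather than fitting each density in isolation as above.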
Numerical Modelling of Jets and Plumes
DEFF Research Database (Denmark)
Larsen, Torben
1993-01-01
An overview of numerical models for prediction of the flow and mixing processes in turbulent jets and plumes is given. The overview is structured to follow an increasing complexity in the physical and numerical principles. The various types of models are briefly mentioned, from the one-dimensional integral method to the general 3-dimensional solution of the Navier-Stokes equations. The predictive capabilities of the models are also discussed. The presentation takes the perspective of civil engineering and covers issues like sewage outfalls and cooling water discharges to the sea.
Gaussian Sum PHD Filtering Algorithm for Nonlinear Non-Gaussian Models
Institute of Scientific and Technical Information of China (English)
Yin Jianjun; Zhang Jianqiu; Zhuang Zesen
2008-01-01
A new multi-target filtering algorithm, termed the Gaussian sum probability hypothesis density (GSPHD) filter, is proposed for nonlinear non-Gaussian tracking models. Provided that the initial prior intensity of the states is Gaussian or can be identified as a Gaussian sum, the analytical results of the algorithm show that the posterior intensity at any subsequent time step remains a Gaussian sum, under the assumption that the state noise, the measurement noise, the target spawn intensity, the new target birth intensity, the target survival probability, and the detection probability are all Gaussian sums. The analysis also shows that the existing Gaussian mixture probability hypothesis density (GMPHD) filter, which is unsuitable for handling non-Gaussian noise, is no more than a special case of the proposed algorithm, which thus removes the limitation of being unable to treat non-Gaussian noise. Multi-target tracking simulation results verify the effectiveness of the proposed GSPHD filter.
A global sensitivity analysis of the PlumeRise model of volcanic plumes
Woodhouse, Mark J.; Hogg, Andrew J.; Phillips, Jeremy C.
2016-10-01
Integral models of volcanic plumes allow predictions of plume dynamics to be made and the rapid estimation of volcanic source conditions from observations of the plume height by model inversion. Here we introduce PlumeRise, an integral model of volcanic plumes that incorporates a description of the state of the atmosphere, includes the effects of wind and the phase change of water, and has been developed as a freely available web-based tool. The model can be used to estimate the height of a volcanic plume when the source conditions are specified, or to infer the strength of the source from an observed plume height through a model inversion. The predictions of the volcanic plume dynamics produced by the model are analysed in four case studies in which the atmospheric conditions and the strength of the source are varied. A global sensitivity analysis of the model to a selection of model inputs is performed and the results are analysed using parallel coordinate plots for visualisation and variance-based sensitivity indices to quantify the sensitivity of model outputs. We find that if the atmospheric conditions do not vary widely then there is a small set of model inputs that strongly influence the model predictions. When estimating the height of the plume, the source mass flux has a controlling influence on the model prediction, while variations in the plume height strongly affect the inferred value of the source mass flux when performing inversion studies. The values taken for the entrainment coefficients have a particularly important effect on the quantitative predictions. The dependencies of the model outputs on variations in the inputs are discussed and compared to simple algebraic expressions that relate source conditions to the height of the plume.
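The integral plume equations that a tool like PlumeRise builds on can be sketched in their simplest form: the Morton-Taylor-Turner equations for a buoyant plume in a calm, stably stratified atmosphere (no wind or moisture, unlike PlumeRise itself). The source fluxes, entrainment coefficient and buoyancy frequency below are illustrative assumptions.

```python
import numpy as np

def plume_rise_height(F0, N, alpha=0.09, dz=0.5):
    """Euler-integrate the Morton-Taylor-Turner plume equations upward
    until the momentum flux vanishes, returning the rise height (m).

    Q: volume flux; M: specific momentum flux; F: buoyancy flux;
    F0: source buoyancy flux (m^4/s^3); N: buoyancy frequency (1/s);
    alpha: entrainment coefficient.
    """
    Q, M, F = 1e-3, 1e-3, F0     # small nonzero source fluxes
    z = 0.0
    while M > 0.0 and z < 1e5:
        dQ = 2.0 * alpha * np.sqrt(M) * dz   # turbulent entrainment
        dM = (F * Q / M) * dz                # buoyancy accelerates the plume
        dF = -N**2 * Q * dz                  # stratification erodes buoyancy
        Q, M, F, z = Q + dQ, M + dM, F + dF, z + dz
    return z

h = plume_rise_height(F0=1.0e4, N=0.01)
print(f"rise height ~ {h:.0f} m")
```

The sensitivity findings quoted in the abstract are visible even in this sketch: the height responds to the source buoyancy flux only through a weak quarter-power dependence, while the entrainment coefficient enters every step of the integration.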
Gaussian Confinement in a JKJ Decay Model
da Silva, Mario L. L.; Hadjimichef, Dimiter; Vasconcellos, Cesar A. Z.
In microscopic decay models, one attempts to describe hadron strong decays in terms of quark and gluon degrees of freedom. We begin by assuming that strong decays are driven by the same interquark Hamiltonian which determines the spectrum, and that it incorporates Gaussian confinement. An A → BC decay matrix element of the JKJ Hamiltonian involves a pair-production current matrix element times a scattering matrix element. Diagrammatically this corresponds to an interaction between an initial line and the produced pair.
Soft sensor modeling based on Gaussian processes
Institute of Scientific and Technical Information of China (English)
XIONG Zhi-hua; HUANG Guo-hong; SHAO Hui-he
2005-01-01
In order to meet the demand of online optimal running, a novel soft sensor modeling approach based on Gaussian processes was proposed. The approach is moderately simple to implement and use without loss of performance. It is trained by optimizing the hyperparameters using the scaled conjugate gradient algorithm, with the squared exponential covariance function employed. Experimental simulations on a real-world example from a refinery show the advantages of the soft sensor modeling approach. Meanwhile, the method opens new possibilities for the application of kernel methods to potential fields.
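The core of such a soft sensor is standard GP regression with the squared-exponential covariance. A minimal numpy sketch follows, with fixed hyperparameters for brevity (the paper tunes them with the scaled conjugate gradient algorithm); the sine data stand in for real process measurements:

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, variance=1.0):
    """Squared-exponential covariance k(x, x') = v * exp(-(x - x')^2 / (2 l^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_train, x_test)
    Kss = sq_exp_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)        # K^{-1} y
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

x = np.linspace(0.0, 2.0 * np.pi, 30)
y = np.sin(x)                                  # stand-in for process data
mu, var = gp_predict(x, y, np.array([np.pi / 2.0]))
```

With dense training data, the posterior mean at π/2 is close to sin(π/2) = 1 and the posterior variance is small.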
Video compressive sensing using Gaussian mixture models.
Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence
2014-11-01
A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.
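The "analytic expressions" for the reconstruction can be illustrated on a toy problem. Under a GMM prior, each component yields a linear-Gaussian posterior, and the MMSE estimate is their responsibility-weighted combination; the sketch below (hypothetical dimensions and parameters) shows the idea, not the paper's patch-based implementation:

```python
import numpy as np

def gmm_mmse(y, Phi, weights, means, covs, noise_var):
    """MMSE estimate of x from y = Phi @ x + n under a GMM prior on x."""
    log_ev, post_means = [], []
    for w, m, C in zip(weights, means, covs):
        S = Phi @ C @ Phi.T + noise_var * np.eye(len(y))   # evidence covariance
        r = y - Phi @ m
        Sinv_r = np.linalg.solve(S, r)
        _, logdet = np.linalg.slogdet(S)
        log_ev.append(np.log(w) - 0.5 * (r @ Sinv_r + logdet))
        post_means.append(m + C @ Phi.T @ Sinv_r)          # Gaussian update
    resp = np.exp(np.array(log_ev) - max(log_ev))
    resp /= resp.sum()                                     # responsibilities
    return sum(r_k * m_k for r_k, m_k in zip(resp, post_means))

# 4-dimensional signal compressed to 2 measurements.
Phi = np.array([[1.0, 0.0, 1.0, 0.0],
                [0.0, 1.0, 0.0, 1.0]])
means = [np.zeros(4), 3.0 * np.ones(4)]
covs = [0.1 * np.eye(4), 0.1 * np.eye(4)]
x_true = np.array([3.2, 2.9, 3.1, 2.8])        # lies near the second mode
x_hat = gmm_mmse(Phi @ x_true, Phi, [0.5, 0.5], means, covs, noise_var=0.01)
```

Everything is closed form: no iterative optimization is needed at reconstruction time, which is what makes the inversion efficient.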
Bayesian model selection in Gaussian regression
Abramovich, Felix
2009-01-01
We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.
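From the frequentist view described here, the estimator minimizes RSS plus a complexity penalty over candidate supports. A brute-force sketch for small p follows; the penalty `c * sigma2 * k * log(p)` is an illustrative stand-in for the exact prior-induced penalty of the paper:

```python
import itertools
import numpy as np

def penalized_subset_selection(X, y, sigma2=1.0, c=2.0):
    """Exhaustive penalized least squares: argmin_S RSS(S) + c*sigma2*|S|*log(p)."""
    n, p = X.shape
    best_score, best_support = np.inf, ()
    for k in range(p + 1):
        for S in itertools.combinations(range(p), k):
            if S:
                beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
                rss = float(np.sum((y - X[:, S] @ beta) ** 2))
            else:
                rss = float(np.sum(y ** 2))
            score = rss + c * sigma2 * k * np.log(p)
            if score < best_score:
                best_score, best_support = score, S
    return best_support

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.standard_normal(50)
support = penalized_subset_selection(X, y)     # expected: the true support (0, 2)
```

Exhaustive search is only feasible for small p; for the large-p regime the paper addresses, the point is the form of the criterion, not this search strategy.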
Modeling contaminant plumes in fractured limestone aquifers
DEFF Research Database (Denmark)
Mosthaf, Klaus; Brauns, Bentje; Fjordbøge, Annika Sidelmann
the established approaches of the equivalent porous medium, discrete fracture and dual continuum models. However, these modeling concepts are not well tested for contaminant plume migration in limestone geologies. Our goal was to develop and evaluate approaches for modeling the transport of dissolved contaminant...... in the planning of field tests and to update the conceptual model in an iterative process. Field data includes information on spill history, distribution of the contaminant (multilevel sampling), geology and hydrogeology. To describe the geology and fracture system, data from borehole logs, packer tests, optical...... distribution in the aquifer. Different models were used for the planning and interpretation of the pump and tracer test. The models were evaluated by examining their ability to describe collected field data. The comparison with data showed that the models have substantially different representations...
Linear Latent Force Models using Gaussian Processes
Álvarez, Mauricio A; Lawrence, Neil D
2011-01-01
Purely data-driven approaches for machine learning present difficulties when data is scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modelling with a physical model of the system. We show how different, physically-inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from motion capture, computational biology and geostatistics.
Connections between Graphical Gaussian Models and Factor Analysis
Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.
2010-01-01
Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...
Distributed static linear Gaussian models using consensus.
Belanovic, Pavle; Valcarcel Macua, Sergio; Zazo, Santiago
2012-10-01
Algorithms for distributed agreement are a powerful means for formulating distributed versions of existing centralized algorithms. We present a toolkit for this task and show how it can be used systematically to design fully distributed algorithms for static linear Gaussian models, including principal component analysis, factor analysis, and probabilistic principal component analysis. These algorithms do not rely on a fusion center, require only low-volume local (1-hop neighborhood) communications, and are thus efficient, scalable, and robust. We show how they are also guaranteed to asymptotically converge to the same solution as the corresponding existing centralized algorithms. Finally, we illustrate the functioning of our algorithms on two examples, and examine the inherent cost-performance trade-off.
Think continuous: Markovian Gaussian models in spatial statistics
Simpson, Daniel; Rue, Håvard
2011-01-01
Gaussian Markov random fields (GMRFs) are frequently used as computationally efficient models in spatial statistics. Unfortunately, it has traditionally been difficult to link GMRFs with the more traditional Gaussian random field models, as the Markov property is difficult to deploy in continuous space. Following the pioneering work of Lindgren et al. (2011), we expound on the link between Markovian Gaussian random fields and GMRFs. In particular, we discuss the theoretical and practical aspects of fast computation with continuously specified Markovian Gaussian random fields, as well as the advantages they offer in terms of clear, parsimonious and interpretable models of anisotropy and non-stationarity.
Efficient sampling of Gaussian graphical models using conditional Bayes factors
Hinne, M.; Lenkoski, A.; Heskes, T.M.; Gerven, M.A.J. van
2014-01-01
Bayesian estimation of Gaussian graphical models has proven to be challenging because the conjugate prior distribution on the Gaussian precision matrix, the G-Wishart distribution, has a doubly intractable partition function. Recent developments provide a direct way to sample from the G-Wishart
STRATAFORM Plume Study: Analysis and Modeling
1999-09-30
of settling is explained by the variation of plume speed, rather than by variations in settling velocity (Hill et al., submitted). Flocculation is an...mouth. However, the fraction of flocculated sediment does not vary as much as expected with changes in forcing conditions. There do appear to be large...differences in the flocculation rate between the extreme flood conditions of 1997 and the more moderate floods of 1998. The detailed examination of plume
A Thermal Plume Model for the Martian Convective Boundary Layer
Colaïtis, Arnaud; Hourdin, Frédéric; Rio, Catherine; Forget, François; Millour, Ehouarn
2013-01-01
The Martian Planetary Boundary Layer [PBL] is a crucial component of the Martian climate system. Global Climate Models [GCMs] and Mesoscale Models [MMs] lack the resolution to predict PBL mixing, which is therefore parameterized. Here we propose to adapt the "thermal plume" model, recently developed for Earth climate modeling, to Martian GCMs, MMs, and single-column models. The aim of this physically based parameterization is to represent the effect of organized turbulent structures (updrafts and downdrafts) on the daytime PBL transport, as it is resolved in Large-Eddy Simulations [LESs]. We find that the terrestrial thermal plume model needs to be modified to satisfactorily account for the deep turbulent plumes found in the Martian convective PBL. Our Martian thermal plume model qualitatively and quantitatively reproduces the thermal structure of the daytime PBL on Mars: superadiabatic near-surface layer, mixing layer, and overshoot region at the PBL top. This model is coupled to surface layer parameterizations taking ...
Determining resolvability of mantle plumes with synthetic seismic modeling
Maguire, R.; Van Keken, P. E.; Ritsema, J.; Fichtner, A.; Goes, S. D. B.
2014-12-01
Hotspot volcanism in locations such as Hawaii and Iceland is commonly thought to be associated with plumes rising from the deep mantle. In theory these dynamic upwellings should be visible in seismic data due to their reduced seismic velocity and their effect on mantle transition zone thickness. Numerous studies have attempted to image plumes [1,2,3], but their deep mantle origin remains unclear. In addition, a debate continues as to whether lower mantle plumes are visible in the form of body wave travel time delays, or whether such delays will be erased due to wavefront healing. Here we combine geodynamic modeling of mantle plumes with synthetic seismic waveform modeling in order to quantitatively determine under what conditions mantle plumes should be seismically visible. We model compressible plumes with phase changes at 410 km and 670 km, and a viscosity reduction in the upper mantle. These plumes thin from greater than 600 km in diameter in the lower mantle, to 200 - 400 km in the upper mantle. Plume excess potential temperature is 375 K, which maps to seismic velocity reductions of 4 - 12 % in the upper mantle, and 2 - 4 % in the lower mantle. Previous work that was limited to an axisymmetric spherical geometry suggested that these plumes would not be visible in the lower mantle [4]. Here we extend this approach to full 3D spherical wave propagation modeling. Initial results using a simplified cylindrical plume conduit suggest that mantle plumes with a diameter of 1000 km or greater will retain a deep mantle seismic signature. References: [1] Wolfe, Cecily J., et al. "Seismic structure of the Iceland mantle plume." Nature 385.6613 (1997): 245-247. [2] Montelli, Raffaella, et al. "Finite-frequency tomography reveals a variety of plumes in the mantle." Science 303.5656 (2004): 338-343. [3] Schmandt, Brandon, et al. "Hot mantle upwelling across the 660 beneath Yellowstone." Earth and Planetary Science Letters 331 (2012): 224-236. [4] Hwang, Yong Keun, et al
Statistical Compressive Sensing of Gaussian Mixture Models
Yu, Guoshen
2010-01-01
A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the k-best term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the k-best term approximation with probability one, and the ...
Cooling tower and plume modeling for satellite remote sensing applications
Energy Technology Data Exchange (ETDEWEB)
Powers, B.J.
1995-05-01
It is often useful in nonproliferation studies to be able to remotely estimate the power generated by a power plant. Such information is indirectly available through an examination of the power dissipated by the plant. Power dissipation is generally accomplished either by transferring the excess heat generated into the atmosphere or into bodies of water. It is the former method with which we are exclusively concerned in this report. We discuss in this report the difficulties associated with such a task. In particular, we primarily address the remote detection of the temperature associated with the condensed water plume emitted from the cooling tower. We find that the effective emissivity of the plume is of fundamental importance for this task. Having examined the dependence of the plume emissivity in several IR bands and with varying liquid water content and droplet size distributions, we conclude that the plume emissivity, and consequently the plume brightness temperature, is dependent upon not only the liquid water content and band, but also upon the droplet size distribution. Finally, we discuss models dependent upon a detailed point-by-point description of the hydrodynamics and thermodynamics of the plume dynamics and those based upon spatially integrated models. We describe in detail a new integral model, the LANL Plume Model, which accounts for the evolution of the droplet size distribution. Some typical results obtained from this model are discussed.
Cosine-Gaussian Schell-model sources.
Mei, Zhangrong; Korotkova, Olga
2013-07-15
We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.
The Communicating Pipe Model for Icy Plumes on Enceladus
Institute of Scientific and Technical Information of China (English)
MA Qian-Li; CHEN Chu-Xin
2009-01-01
We analyze the communicating pipe model on Enceladus, and predict that Saturn's strong tidal force plays a significant role in the plumes. In this model, the scale of the volcanoes can be evaluated based on the history of the craters and plumes. The agreement between the model and observations supports its validity for the eruption. It is thus plausible that the tidal force pulls the liquid out through the communicating pipe while reshaping the surface of Enceladus.
Blind source separation based on generalized gaussian model
Institute of Scientific and Technical Information of China (English)
YANG Bin; KONG Wei; ZHOU Yue
2007-01-01
Since in most blind source separation (BSS) algorithms the estimations of the probability density function (pdf) of the sources are fixed or can only switch between one super-Gaussian and one sub-Gaussian model, they may not be efficient at separating sources with different distributions. To solve the problem of pdf mismatch and the separation of hybrid mixtures in BSS, the generalized Gaussian model (GGM) is introduced to model the pdf of the sources, since it provides a general structure for univariate distributions. Its great advantage is that only one parameter needs to be determined in modeling the pdf of different sources, so it is less complex than the Gaussian mixture model. By using the maximum likelihood (ML) approach, the convergence of the proposed algorithm is improved. Computer simulations show that it is more efficient and valid than conventional methods with fixed pdf estimation.
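The single-parameter family referred to here is the generalized Gaussian density p(x) = β/(2αΓ(1/β)) · exp(−(|x|/α)^β): β = 2 is Gaussian, β = 1 is Laplacian (super-Gaussian), and large β gives sub-Gaussian shapes. A small sketch of the density and of the score nonlinearity that enters an ML-based separation rule:

```python
import math

def ggd_pdf(x, alpha=1.0, beta=2.0):
    """Generalized Gaussian density with scale alpha and shape beta."""
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x) / alpha) ** beta))

def ggd_score(x, alpha=1.0, beta=2.0):
    """Score function -d/dx log p(x), the nonlinearity used in ML-based BSS."""
    if x == 0.0:
        return 0.0
    return beta * abs(x) ** (beta - 1.0) * math.copysign(1.0, x) / alpha ** beta

# With beta = 2 and alpha = sqrt(2), the density is the standard normal,
# and the score function reduces to the identity x.
p0 = ggd_pdf(0.0, alpha=math.sqrt(2.0), beta=2.0)   # equals 1/sqrt(2*pi)
```

Adapting β per source is what lets one algorithm handle super- and sub-Gaussian sources without switching between fixed nonlinearities.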
3-D numerical modeling of plume-induced subduction initiation
Baes, Marzieh; Gerya, Taras; Sobolev, Stephan
2016-04-01
Investigation of the mechanisms involved in the formation of a new subduction zone can help us to better understand plate tectonics. Despite numerous previous studies, it is still unclear how and where an old oceanic plate starts to subduct beneath the other plate. One of the proposed scenarios for the nucleation of subduction is plume-induced subduction initiation, which was investigated in detail, using 2-D models, by Ueda et al. (2008). Recently, Gerya et al. (2015), using 3-D numerical models, proposed that plume-lithosphere interaction in the Archean led to subduction initiation and the onset of plate tectonics. In this study, we aim to pursue the work of Ueda et al. (2008) by incorporating 3-D thermo-mechanical models to investigate conditions leading to oceanic subduction initiation as a result of thermal-chemical mantle plume-lithosphere interaction on the modern Earth. Results of our experiments show four different deformation regimes in response to plume-lithosphere interaction: a) self-sustaining subduction initiation, where subduction becomes self-sustained; b) freezing subduction initiation, where subduction stops at shallow depths; c) slab break-off, where the subducting circular slab breaks off soon after formation; and d) plume underplating, where the plume does not pass through the lithosphere but spreads beneath it (failed subduction initiation). These different regimes depend on several parameters such as the plume's size, composition and temperature, the lithospheric brittle/plastic strength, the age of the oceanic lithosphere and the presence/absence of lithospheric heterogeneities. Results show that subduction initiates and becomes self-sustained when the lithosphere is older than 10 Myr and the non-dimensional ratio of the plume buoyancy force to the lithospheric strength above the plume is higher than 2.
Gaussian and non-Gaussian inverse modeling of groundwater flow using copulas and random mixing
Bárdossy, András.; Hörning, Sebastian
2016-06-01
This paper presents a new copula-based methodology for Gaussian and non-Gaussian inverse modeling of groundwater flow. The presented approach is embedded in a Monte Carlo framework and it is based on the concept of mixing spatial random fields where a spatial copula serves as spatial dependence function. The target conditional spatial distribution of hydraulic transmissivities is obtained as a linear combination of unconditional spatial fields. The corresponding weights of this linear combination are chosen such that the combined field has the prescribed spatial variability, and honors all the observations of hydraulic transmissivities. The constraints related to hydraulic head observations are nonlinear. In order to fulfill these constraints, a connected domain in the weight space, inside which all linear constraints are fulfilled, is identified. This domain is defined analytically and includes an infinite number of conditional fields (i.e., conditioned on the observed hydraulic transmissivities), and the nonlinear constraints can be fulfilled via minimization of the deviation of the modeled and the observed hydraulic heads. This procedure enables the simulation of a great number of solutions for the inverse problem, allowing a reasonable quantification of the associated uncertainties. The methodology can be used for fields with Gaussian copula dependence, and fields with specific non-Gaussian copula dependence. Further, arbitrary marginal distributions can be considered.
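The mixing step itself can be sketched in a few lines: choose weights with unit sum of squares (so the combined field keeps the prescribed variance) that reproduce the point observations exactly. The sketch below omits the copula transform and the nonlinear head constraints, and uses synthetic Gaussian fields:

```python
import numpy as np

def mix_fields(fields, obs_idx, obs_val):
    """Unit-norm weights w (sum w^2 = 1) with (w @ fields)[obs_idx] == obs_val.
    Solve the linear constraints by least norm, then move along a null-space
    direction of the constraint matrix to reach unit norm."""
    A = fields[:, obs_idx].T                     # one constraint row per obs
    w0, *_ = np.linalg.lstsq(A, obs_val, rcond=None)   # min-norm solution
    _, _, Vt = np.linalg.svd(A)
    null = Vt[A.shape[0]:]                       # null-space basis (rows)
    n2 = float(np.sum(w0 ** 2))
    if null.size and n2 < 1.0:
        return w0 + np.sqrt(1.0 - n2) * null[0]  # w0 is orthogonal to null[0]
    return w0 / np.linalg.norm(w0)               # fallback: constraints approx.

rng = np.random.default_rng(42)
fields = rng.standard_normal((8, 100))           # 8 unconditional fields, 100 cells
w = mix_fields(fields, obs_idx=[3, 50], obs_val=np.array([0.5, -0.2]))
combined = w @ fields                            # honors both observations
```

The null space left over after satisfying the linear constraints is what provides the "infinite number of conditional fields" over which the nonlinear head misfit can then be minimized.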
Multiphase CFD modeling of nearfield fate of sediment plumes
DEFF Research Database (Denmark)
Saremi, Sina; Hjelmager Jensen, Jacob
2014-01-01
Disposal of dredged material and the overflow discharge during dredging activities are a matter of concern due to the potential risks imposed by the plumes on the surrounding marine environment. This calls for accurate prediction of the fate of the sediment plumes released in ambient waters....... The two-phase mixture solution based on the drift-flux method is evaluated for 3D simulation of material disposal and overflow discharge from the hoppers. The model takes into account the hindrance and resistance mechanisms in the mixture and is capable of describing the flow details within the plumes...... and gives excellent results when compared to experimental data....
Gaussian mixture models as flux prediction method for central receivers
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
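The one-component special case of such a mixture is exactly the circular or elliptical Gaussian prediction, so the mixture strictly generalizes it. A sketch with hypothetical, normalized units (component weights sum to the total reflected power):

```python
import numpy as np

def gauss2d(X, Y, mux, muy, sx, sy):
    """Unit-volume bivariate Gaussian (zero correlation, for brevity)."""
    return (np.exp(-0.5 * (((X - mux) / sx) ** 2 + ((Y - muy) / sy) ** 2))
            / (2.0 * np.pi * sx * sy))

def mixture_flux(X, Y, components):
    """Flux map as a weighted sum of Gaussians; one component recovers the
    usual elliptical-Gaussian prediction."""
    return sum(w * gauss2d(X, Y, mx, my, sx, sy)
               for w, mx, my, sx, sy in components)

# Two off-centre lobes mimic an asymmetric single-heliostat flux profile.
x = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, x)
F = mixture_flux(X, Y, [(0.6, -0.3, 0.0, 0.3, 0.4),
                        (0.4, 0.4, 0.1, 0.5, 0.3)])
total = F.sum() * (x[1] - x[0]) ** 2             # approx. 1.0, the summed weights
```

Fitting the component parameters to measured or ray-traced flux maps (e.g. by expectation-maximization) is the step the paper automates via image processing.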
ASHEE: a compressible, Equilibrium–Eulerian model for volcanic ash plumes
Directory of Open Access Journals (Sweden)
M. Cerminara
2015-10-01
and Balachandar, 2001, valid for low-concentration regimes (particle volume fraction less than 10⁻³) and particle Stokes number (St, i.e., the ratio between their relaxation time and the flow characteristic time) not exceeding about 0.2. The new model, which is called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, thus verifying its reliability and suitability for the numerical simulation of high-Reynolds-number and high-temperature regimes in the presence of a dispersed phase. On the other hand, Large-Eddy Simulations of forced plumes are able to reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to the Large-Eddy Simulation of the injection of the eruptive mixture into a stratified atmosphere describes some important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height. For very fine particles (St → 0), when non-equilibrium effects are negligible, the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.
A Preliminary Model of Infrared Image Generation for Exhaust Plume
Directory of Open Access Journals (Sweden)
Fei Mei
2011-06-01
Based on the irradiance calculation of all pixels on the focal plane array, a preliminary infrared imaging prediction model for exhaust plumes, which accounts for the geometrical and thermal resolution of the camera, was developed to understand the infrared characteristics of the exhaust plume. In order to compute the irradiance incident on each pixel, the gas radiation transfer path in the plume for the instantaneous field of view corresponding to the pixel was solved from the simultaneous equations of an enclosing cylinder covering the exhaust plume and the line of sight. The radiance of the transfer path was calculated with the radiative transfer equation for non-scattering gas. The radiative properties of combustion needed in the equation were provided by the Malkmus model with the EM2C narrow-band database (25 cm⁻¹). The pressure and species concentrations along the path were determined by CFD analysis. The relative irradiance intensity of each pixel was converted to color in the display according to gray-map and hot-map coding. Infrared images of the exhaust plume from a subsonic axisymmetric nozzle for different relative positions of the camera and the plume were predicted with the model. By changing parameters such as the FOV and spatial resolution, the images of different imaging systems can be predicted.
Updated Conceptual Model for the 300 Area Uranium Groundwater Plume
Energy Technology Data Exchange (ETDEWEB)
Zachara, John M.; Freshley, Mark D.; Last, George V.; Peterson, Robert E.; Bjornstad, Bruce N.
2012-11-01
The 300 Area uranium groundwater plume in the 300-FF-5 Operable Unit is residual from past discharge of nuclear fuel fabrication wastes to a number of liquid (and solid) disposal sites. The source zones in the disposal sites were remediated by excavation and backfilled to grade, but sorbed uranium remains in deeper, unexcavated vadose zone sediments. In spite of source term removal, the groundwater plume has shown remarkable persistence, with concentrations exceeding the drinking water standard over an area of approximately 1 km². The plume resides within a coupled vadose zone, groundwater, river zone system of immense complexity and scale. Interactions between geologic structure, the hydrologic system driven by the Columbia River, groundwater-river exchange points, and the geochemistry of uranium contribute to persistence of the plume. The U.S. Department of Energy (DOE) recently completed a Remedial Investigation/Feasibility Study (RI/FS) to document characterization of the 300 Area uranium plume and plan for beginning to implement proposed remedial actions. As part of the RI/FS document, a conceptual model was developed that integrates knowledge of the hydrogeologic and geochemical properties of the 300 Area and controlling processes to yield an understanding of how the system behaves and the variables that control it. Recent results from the Hanford Integrated Field Research Challenge site and the Subsurface Biogeochemistry Scientific Focus Area Project funded by the DOE Office of Science were used to update the conceptual model and provide an assessment of key factors controlling plume persistence.
Gaussian Process Structural Equation Models with Latent Variables
Silva, Ricardo
2010-01-01
In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well-studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. An efficient Markov chain Monte Carlo procedure is described. We evaluate the stability of the sampling procedure and the predictive ability of the model compared against the current practice.
Energy Technology Data Exchange (ETDEWEB)
Sykes, R.I.; Lewellen, W.S.; Parker, S.F.; Henn, D.S.
1989-01-01
The Second Order Closure Integrated Puff Model (SCIPUFF) is the intermediate resolution member of a hierarchy of models. It simulates the expected values of plume concentration downwind of a fossil-fueled power plant stack, along with an estimate of the variation around this value. To represent the turbulent atmosphere surrounding the plume compatibly with available meteorological data, a second order closure sub-model is used. SCIPUFF represents the plume by a series of Gaussian puffs, typically 10 seconds apart; plume growth is calculated by a random walk phase combined with plume expansion calculated from the volume integrals of the equations used in the Stack Exhaust Model (SEM), the highest resolution model. Meteorological uncertainty is accounted for by means of extra dispersion terms. SCIPUFF was tested against more than 250 hours of plume data including both a level site and a moderately complex terrain site; approximately 200 samplers were used. Model predictions were evaluated by comparing the measured ground level concentration distribution to that simulated by the model. Further, the simulated and actual distributions of deviations between simulated or observed and expected values were compared. The predicted distributions were close to the measured ones. The overall results from SCIPUFF were similar to those from the lowest resolution model, SCIMP. The advantage of SCIPUFF is its flexibility for including future improvements. When combined with a suitable mesoscale model, SCIPUFF may be able to simulate plume dispersion beyond the 50 km limit of other available models. The ability to cover a wide range of time and space scales in a single calculation is another valuable feature. 3 refs., 10 figs., 4 tabs.
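The Gaussian-puff superposition at the heart of such models can be sketched directly; this is the textbook puff formula with a ground-reflection image term, not the SCIPUFF second-order closure or its uncertainty terms (all numbers below are hypothetical):

```python
import numpy as np

def puff_concentration(x, y, z, puffs):
    """Concentration from a series of Gaussian puffs.
    Each puff is (mass, cx, cy, cz, sx, sy, sz); an image source at -cz
    enforces zero flux through the ground."""
    c = 0.0
    for q, cx, cy, cz, sx, sy, sz in puffs:
        norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
        horiz = np.exp(-0.5 * (((x - cx) / sx) ** 2 + ((y - cy) / sy) ** 2))
        vert = (np.exp(-0.5 * ((z - cz) / sz) ** 2)
                + np.exp(-0.5 * ((z + cz) / sz) ** 2))   # reflection term
        c += norm * horiz * vert
    return c

# A single puff released from a 100 m stack, observed 500 m downwind.
puffs = [(1.0e3, 500.0, 0.0, 100.0, 60.0, 40.0, 30.0)]   # g and m, hypothetical
c_axis = puff_concentration(500.0, 0.0, 0.0, puffs)      # ground level, on axis
c_off = puff_concentration(500.0, 100.0, 0.0, puffs)     # 100 m crosswind
```

A real puff model advances the puff centres with the wind and grows the sigmas over time; here they are frozen to show only the superposition step.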
Merits of a Scenario Approach in Dredge Plume Modelling
DEFF Research Database (Denmark)
Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob
2011-01-01
Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage when both dredging methodology and schedule are likely to be a guess at best as the dredging contrac...
Deformed Gaussian Orthogonal Ensemble Analysis of the Interacting Boson Model
Pato, M P; Lima, C L; Hussein, M S; Alhassid, Y
1994-01-01
A Deformed Gaussian Orthogonal Ensemble (DGOE) which interpolates between the Gaussian Orthogonal Ensemble and a Poissonian Ensemble is constructed. This new ensemble is then applied to the analysis of the chaotic properties of the low-lying collective states of nuclei described by the Interacting Boson Model (IBM). This model undergoes an order-chaos-order transition from the $SU(3)$ limit to the $O(6)$ limit. Our analysis shows that the quantum fluctuations of the IBM Hamiltonian, both of the spectrum and of the eigenvectors, follow the expected behaviour predicted by the DGOE as one goes from one limit to the other.
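The two limits the DGOE interpolates between have standard nearest-neighbour spacing distributions (for unfolded spectra with unit mean spacing); comparing them shows the level repulsion that signals chaos:

```python
import math

def wigner_surmise(s):
    """GOE spacing density P(s) = (pi/2) s exp(-pi s^2 / 4): level repulsion."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s ** 2 / 4.0)

def poisson_spacing(s):
    """Poisson (regular-limit) spacing density P(s) = exp(-s): level clustering."""
    return math.exp(-s)

# At s = 0 the ensembles differ maximally: chaotic levels repel
# (density 0), while regular levels cluster (density 1).
```

A DGOE-like interpolation tracks how the empirical spacing histogram migrates between these two curves as the IBM Hamiltonian moves between the $SU(3)$ and $O(6)$ limits.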
Identification of Multimodel LPV Models with Asymmetric Gaussian Weighting Function
Directory of Open Access Journals (Sweden)
Jie You
2013-01-01
This paper is concerned with the identification of linear parameter varying (LPV) systems by utilizing a multimodel structure. To improve the approximation capability of the LPV model, asymmetric Gaussian weighting functions are introduced and compared with the commonly used symmetric Gaussian functions. By this means, the locations of the operating points can be selected freely. It has been demonstrated through simulations with a high-purity distillation column that the identified models provide a more satisfactory approximation. Moreover, an experiment is performed on a real HVAC (heating, ventilation, and air-conditioning) system to further validate the effectiveness of the proposed approach.
Modelling and control of dynamic systems using gaussian process models
Kocijan, Juš
2016-01-01
This monograph opens up new horizons for engineers and researchers in academia and in industry dealing with, or interested in, new developments in the field of system identification and control. It emphasizes guidelines for working solutions and practical advice for their implementation rather than the theoretical background of Gaussian process (GP) models. The book demonstrates the potential of this recent development in probabilistic machine-learning methods and gives the reader an intuitive understanding of the topic. The current state of the art is treated along with possible future directions for research. Systems control design relies on mathematical models, and these may be developed from measurement data. This process of system identification, when based on GP models, can play an integral part in data-based control design, and its description as such is an essential aspect of the text. The background of GP regression is introduced first, with system identification and incorporation of prior know...
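As a hedged illustration of the GP regression underlying such identification, the following sketch computes the posterior mean and variance of a zero-mean GP with a squared-exponential kernel on 1-D inputs; the kernel choice, hyperparameters and function names are assumptions for illustration, not material from the monograph.

```python
import numpy as np

def rbf_kernel(a, b, ell=1.0, var=1.0):
    """Squared-exponential covariance between 1-D input vectors a and b."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4, ell=1.0):
    """Posterior mean and variance of zero-mean GP regression (Cholesky form)."""
    K = rbf_kernel(x_train, x_train, ell) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf_kernel(x_test, x_test, ell).diagonal() - np.sum(v ** 2, axis=0)
    return mean, var
```

The posterior variance is what makes GP models attractive for control: the predictive uncertainty can be propagated into the control design, as discussed in the book.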
A parameter model for dredge plume sediment source terms
Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc
2017-01-01
The presented model allows for fast simulations of the near-field behaviour of overflow dredging plumes. Overflow dredging plumes occur when dredging vessels employ a dropshaft release system to discharge the excess sea water which is pumped into the trailing suction hopper dredger (TSHD) along with the dredged sediments. The fine sediment fraction in the loaded water-sediment mixture does not fully settle before it reaches the overflow shaft. As a consequence, the released water contains a fine sediment fraction of time-varying concentration. The sediment grain size is in the range of clays, silt and fine sand; the sediment concentration varies roughly between 10 and 200 g/l in most cases, peaking briefly at even higher values. In order to assess the environmental impact of the increased turbidity caused by this release, plume dispersion predictions are often carried out. These predictions are usually executed with a large-scale model covering a complete coastal zone, bay, or estuary. A source term of fine sediments is implemented in the hydrodynamic model to simulate the fine sediment dispersion. The large-scale model mesh resolution and governing equations, however, do not allow the near-field plume behaviour in the vicinity of the ship hull and propellers to be simulated. Moreover, in the near field these plumes are under the influence of buoyancy forces and air bubbles. The initial distribution of sediments therefore is unknown and at present has to be based on crude assumptions. The initial (vertical) distribution of the sediment source, however, has a great influence on the final far-field plume dispersion results. In order to study this near-field behaviour, a highly detailed computational fluid dynamics (CFD) model was developed. This model contains a realistic geometry of a dredging vessel, buoyancy effects, air bubbles and propeller action, and was validated earlier by comparison with field measurements. A CFD model requires significant simulation times
Exposure estimates using urban plume dispersion and traffic microsimulation models
Energy Technology Data Exchange (ETDEWEB)
Brown, M.J.; Mueller, C.; Bush, B.; Stretz, P.
1997-12-01
The goal of this research effort was to demonstrate a capability for analyzing emergency response issues resulting from accidental or mediated airborne toxic releases in an urban setting. In the first year of the program, the authors linked a system of fluid dynamics, plume dispersion, and vehicle transportation models developed at Los Alamos National Laboratory to study the dispersion of a plume in an urban setting and the resulting exposures to vehicle traffic. This research is part of a larger laboratory-directed research and development project for studying the relationships between urban infrastructure elements and natural systems.
Critical Behaviour of the Gaussian Model on Sierpinski Carpets
Institute of Scientific and Technical Information of China (English)
林振权; 孔祥木
2001-01-01
The Gaussian model on Sierpinski carpets with two types of nearest-neighbour interactions K and Kw, and two corresponding types of Gaussian distribution constants b and bw, is constructed by generalizing the model on a translationally invariant square lattice. The critical behaviours are studied by the renormalization-group approach and the spin-rescaling method. They are found to be quite different from those on a translationally invariant square lattice. There are two critical points, at (K* = b, K*w = 0) and (K* = 0, K*w = bw), and the correlation length critical exponents are calculated.
Improved Gaussian Mixture Models for Adaptive Foreground Segmentation
DEFF Research Database (Denmark)
Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua
2016-01-01
Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson's algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic elements to the baseline algorithm: The learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated...
Hidden Markov Models with Factored Gaussian Mixtures Densities
Institute of Scientific and Technical Information of China (English)
LI Hao-zheng; LIU Zhi-qiang; ZHU Xiang-hua
2004-01-01
We present a factorial representation of Gaussian mixture models for observation densities in Hidden Markov Models (HMMs), which uses factorial learning in the HMM framework. We derive the re-estimation formulas for estimating the factorized parameters by the Expectation Maximization (EM) algorithm. We conduct several experiments to compare the performance of this model structure with Factorial Hidden Markov Models (FHMMs) and HMMs; some conclusions and promising empirical results are presented.
Bayesian Gaussian Copula Factor Models for Mixed Data.
Murray, Jared S; Dunson, David B; Carin, Lawrence; Lucas, Joseph E
2013-06-01
Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa.
Development and application of a reactive plume-in-grid model: evaluation over Greater Paris
Directory of Open Access Journals (Sweden)
I. Korsakissok
2010-09-01
Emissions from major point sources are poorly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a poor representation of the vertical diffusion, and an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NO_x and SO_2. Results with and without the subgrid treatment of point emissions are compared, and their performance by comparison to the observations at measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area.
Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO_2 is the most impacted pollutant, since the point sources account for an important part of the total SO_2 emissions, whereas NO_x emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NO_x and O_3). Ozone is mostly sensitive to the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO_2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment.
Future developments include an extension to handle aerosol chemistry
Development and application of a reactive plume-in-grid model: evaluation over Greater Paris
Directory of Open Access Journals (Sweden)
I. Korsakissok
2010-02-01
Emissions from major point sources are poorly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a poor representation of the vertical diffusion, and an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NO_x and SO_2. Results with and without the subgrid treatment of point emissions are compared, and their performance by comparison to the observations at measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area.
Primary pollutants are shown to be the most impacted by the plume-in-grid treatment, with a decrease in RMSE of up to about 17% for SO_2 and 7% for NO at measurement stations. SO_2 is the most impacted pollutant, since the point sources account for an important part of the total SO_2 emissions, whereas NO_x emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NO_x and O_3). Reactive species are mostly sensitive to the local-scale parameters, such as the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO_2 is more sensitive to the
Gaussian kinetic model for granular gases.
Dufty, James W; Baskaran, Aparna; Zogaib, Lorena
2004-05-01
A kinetic model for the Boltzmann equation is proposed and explored as a practical means to investigate the properties of a dilute granular gas. It is shown that all spatially homogeneous initial distributions approach a universal "homogeneous cooling solution" after a few collisions. The homogeneous cooling solution (HCS) is studied in some detail and the exact solution is compared with known results for the hard sphere Boltzmann equation. It is shown that all qualitative features of the HCS, including the nature of overpopulation at large velocities, are reproduced by the kinetic model. It is also shown that all the transport coefficients are in excellent agreement with those from the Boltzmann equation. Also, the model is specialized to one having a velocity independent collision frequency and the resulting HCS and transport coefficients are compared to known results for the Maxwell model. The potential of the model for the study of more complex spatially inhomogeneous states is discussed.
Extended Linear Models with Gaussian Priors
DEFF Research Database (Denmark)
Quinonero, Joaquin
2002-01-01
on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers....
Designing Multi-target Compound Libraries with Gaussian Process Models.
Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert
2016-05-01
We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c, MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design.
Identifiability of Gaussian Structural Equation Models with Same Error Variances
Peters, Jonas
2012-01-01
We consider structural equation models (SEMs) in which variables can be written as a function of their parents and noise terms (the latter are assumed to be jointly independent). Corresponding to each SEM, there is a directed acyclic graph (DAG) G_0 describing the relationships between the variables. In Gaussian SEMs with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes (assuming faithfulness). It has been shown, however, that this constitutes an exceptional case. In the case of linear functions and non-Gaussian noise, the DAG becomes identifiable. Apart from a few exceptions, the same is true for non-linear functions and arbitrarily distributed additive noise. In this work, we prove identifiability for a third modification: if we require all noise variables to have the same variances, again, the DAG can be recovered from the joint Gaussian distribution. Our result can be applied to the problem of causal inference. If the data follow a Gaussian SEM w...
Modelling the fate of the Tijuana River discharge plume
van Ormondt, M.; Terrill, E.; Hibler, L. F.; van Dongeren, A. R.
2010-12-01
After rainfall events, the Tijuana River discharges excess runoff into the ocean in a highly turbid plume. The runoff waters contain large suspended solids concentrations, as well as high levels of toxic contaminants, bacteria, and hepatitis and enteroviruses. Public health hazards posed by the effluent often result in beach closures for several kilometers northward along the U.S. shoreline. A Delft3D model has been set up to predict the fate of the Tijuana River plume. The model takes into account the effects of tides, wind, waves, salinity, and temperature stratification. Heat exchange with the atmosphere is also included. The model consists of a relatively coarse outer domain and a high-resolution surf zone domain that are coupled with Domain Decomposition. The offshore boundary conditions are obtained from the larger NCOM SoCal model (operated by the US Navy) that spans the entire Southern California Bight. A number of discharge events are investigated, in which model results are validated against a wide range of field measurements in the San Diego Bight. These include HF Radar surface currents, REMUS tracks, drifter deployments, satellite imagery, as well as current and temperature profile measurements at a number of locations. The model is able to reproduce the observed current and temperature patterns reasonably well. Under calm conditions, the model results suggest that the hydrodynamics in the San Diego Bight are largely governed by internal waves. During rainfall events, which are typically accompanied by strong winds and high waves, wind- and wave-driven currents become dominant. An analysis will be made of what conditions determine the trapping and mixing of the plume inside the surf zone and/or the propagation of the plume through the breakers and onto the coastal shelf. The model is now also running in operational mode. Three-day forecasts are made every 24 hours. This study was funded by the Office of Naval Research.
Extended Bayesian Information Criteria for Gaussian Graphical Models
Foygel, Rina
2010-01-01
Gaussian graphical models with sparsity in the inverse covariance matrix are of significant interest in many modern applications. For the problem of recovering the graphical structure, information criteria provide useful optimization objectives for algorithms searching through sets of graphs or for selection of tuning parameters of other methods such as the graphical lasso, which is a likelihood penalization technique. In this paper we establish the consistency of an extended Bayesian information criterion for Gaussian graphical models in a scenario where both the number of variables p and the sample size n grow. Compared to earlier work on the regression case, our treatment allows for growth in the number of non-zero parameters in the true model, which is necessary in order to cover connected graphs. We demonstrate the performance of this criterion on simulated data when used in conjunction with the graphical lasso, and verify that the criterion indeed performs better than either cross-validation or the ordi...
A Closure Model with Plumes I. The solar convection
Belkacem, K; Goupil, M J; Kupka, F
2006-01-01
Oscillations of stellar p modes, excited by turbulent convection, are investigated. We take into account the asymmetry of the up- and downflows created by turbulent plumes through an adapted closure model. In a companion paper, we apply it to the formalism of excitation of solar p modes developed by Samadi & Goupil (2001). Using results from 3D numerical simulations of the uppermost part of the solar convection zone, we show that the two-scale-mass-flux model (TFM) is valid only for quasi-laminar or highly skewed flows (Gryanik & Hartmann 2002). We build a Generalized Two-scale Mass-Flux Model (GTFM) which takes into account both the skew introduced by the presence of two flows and the effects of turbulence in each flow. In order to apply the GTFM to the solar case, we introduce the plume dynamics as modelled by Rieutord & Zahn (1995) and construct a Closure Model with Plumes (CMP). When comparing with 3D simulation results, the CMP improves the agreement for the fourth-order moments, by appro...
Observations and modeling of a tidal inlet dye tracer plume
Feddersen, Falk; Olabarrieta, Maitane; Guza, R. T.; Winters, D.; Raubenheimer, Britt; Elgar, Steve
2016-10-01
A 9 km long tracer plume was created by continuously releasing Rhodamine WT dye for 2.2 h during ebb tide within the southern edge of the main tidal channel at New River Inlet, NC on 7 May 2012, with highly obliquely incident waves and alongshore winds. Over 6 h from release, COAWST (coupled ROMS and SWAN, including wave, wind, and tidal forcing) modeled dye compares well with (aerial hyperspectral and in situ) observed dye concentration. Dye was first transported rapidly seaward along the main channel and partially advected across the ebb-tidal shoal until reaching the offshore edge of the shoal. Dye did not eject offshore in an ebb-tidal jet because the obliquely incident breaking waves retarded the inlet-mouth ebb-tidal flow and forced currents along the ebb shoal. The dye plume was largely confined to <4 m depth. Dye was then transported downcoast at 0.3 m/s, driven by wave breaking, in the narrow (few 100 m wide) surfzone of the beach bordering the inlet. Over 6 h, the dye plume is not significantly affected by buoyancy. Observed dye mass balances close, indicating that all released dye is accounted for. Modeled and observed dye behaviors are qualitatively similar. The model simulates well the evolution of the dye center of mass, lateral spreading, surface area, and maximum concentration, as well as regional ("inlet" and "ocean") dye mass balances. This indicates that the model represents well the dynamics of the ebb-tidal dye plume. Details of the dye transport pathways across the ebb shoal are modeled poorly, perhaps owing to low-resolution and smoothed model bathymetry. Wave forcing effects have a large impact on the dye transport.
Non-Gaussianity in axion N-flation models.
Kim, Soo A; Liddle, Andrew R; Seery, David
2010-10-29
We study perturbations in the multifield axion N-flation model, taking account of the full cosine potential. We find significant differences from previous analyses which made a quadratic approximation to the potential. The tensor-to-scalar ratio and the scalar spectral index move to lower values, which nevertheless provide an acceptable fit to observation. Most significantly, we find that the bispectrum non-Gaussianity parameter f_NL may be large, typically of order 10 for moderate values of the axion decay constant, increasing to of order 100 for decay constants slightly smaller than the Planck scale. Such a non-Gaussian fraction is detectable. We argue that this property is generic in multifield models of hilltop inflation.
On the thermodynamic properties of the generalized Gaussian core model
Directory of Open Access Journals (Sweden)
B.M.Mladek
2005-01-01
We present results of a systematic investigation of the properties of the generalized Gaussian core model of index n. The potential of this system interpolates via the index n between the potential of the Gaussian core model and the penetrable sphere system, thereby varying the steepness of the repulsion. We have used both conventional and self-consistent liquid state theories to calculate the structural and thermodynamic properties of the system; reference data are provided by computer simulations. The results indicate that the concept of self-consistency becomes indispensable to guarantee excellent agreement with simulation data; in particular, structural consistency (in our approach taken into account via the zero separation theorem) is obviously a very important requirement. Simulation results for the dimensionless equation of state, βP/ρ, indicate that for an index value of 4, a clustering transition, possibly into a structurally ordered phase, might set in as the system is compressed.
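For reference, the generalized Gaussian core potential itself is simple to write down; the sketch below (names are illustrative) shows how the index n interpolates between the Gaussian core model (n = 2) and a penetrable-sphere-like step as n grows large.

```python
import numpy as np

def ggcm_potential(r, n, eps=1.0, sigma=1.0):
    """Generalized Gaussian core potential: phi(r) = eps * exp(-(r/sigma)**n).

    n = 2 gives the Gaussian core model; as n -> infinity the potential
    approaches a penetrable sphere: eps inside r < sigma, 0 outside.
    """
    return eps * np.exp(-(np.asarray(r, dtype=float) / sigma) ** n)
```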
Model for non-Gaussian intraday stock returns
Gerig, Austin; Vicente, Javier; Fuentes, Miguel A.
2009-12-01
Stock prices are known to exhibit non-Gaussian dynamics, and there is much interest in understanding the origin of this behavior. Here, we present a model that explains the shape and scaling of the distribution of intraday stock price fluctuations (called intraday returns) and verify the model using a large database for several stocks traded on the London Stock Exchange. We provide evidence that the return distribution for these stocks is non-Gaussian and similar in shape and that the distribution appears stable over intraday time scales. We explain these results by assuming the volatility of returns is constant intraday but varies over longer periods such that its inverse square follows a gamma distribution. This produces returns that are Student distributed for intraday time scales. The predicted results show excellent agreement with the data for all stocks in our study and over all regions of the return distribution.
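The mechanism described, constant intraday volatility whose inverse square varies across longer periods following a gamma distribution, can be reproduced in a few lines. The sketch below is an assumed toy simulation (not the authors' code or data) showing that the resulting unconditional returns are heavy-tailed, as the Student distribution implies.

```python
import numpy as np

def simulate_intraday_returns(n_days=500, n_intraday=50, shape=3.0, seed=1):
    """Gaussian returns with volatility held fixed within each day; the daily
    precision (inverse squared volatility) is gamma-distributed, so the
    unconditional return distribution is Student-t with heavy tails."""
    rng = np.random.default_rng(seed)
    tau = rng.gamma(shape, 1.0, size=n_days)        # daily precision 1/sigma^2
    sigma = 1.0 / np.sqrt(tau)
    r = rng.normal(0.0, sigma[:, None], size=(n_days, n_intraday))
    return r.ravel()

def excess_kurtosis(x):
    """Sample excess kurtosis; zero for a Gaussian, positive for heavy tails."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0
```

Mixing Gaussians over a gamma-distributed precision is exactly the classical construction of the Student distribution, which is why the pooled returns show positive excess kurtosis while each day's returns are conditionally Gaussian.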
Classifying Gamma-Ray Bursts with Gaussian Mixture Model
Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young
2016-01-01
Using a Gaussian Mixture Model (GMM) and the Expectation Maximization algorithm, we perform an analysis of time duration (T90) for CGRO/BATSE, Swift/BAT and Fermi/GBM Gamma-Ray Bursts. The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. The Bayesian Information Criterion has been used to compare different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
Classifying gamma-ray bursts with Gaussian Mixture Model
Zhang, Zhi-Bin; Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young
2016-11-01
Using a Gaussian Mixture Model (GMM) and the expectation-maximization algorithm, we perform an analysis of time duration (T90) for Compton Gamma Ray Observatory (CGRO)/BATSE, Swift/BAT and Fermi/GBM gamma-ray bursts (GRBs). The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. The Bayesian information criterion has been used to compare different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
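The GMM-plus-BIC procedure used in both studies can be sketched compactly: fit mixtures with increasing numbers of components by EM and keep the model with the lowest BIC. The 1-D implementation below is a generic illustration (not the authors' code; the initialization scheme and iteration count are assumptions).

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=300):
    """EM for a 1-D Gaussian mixture; returns (log-likelihood, weights, means, variances)."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # deterministic, spread-out init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: log of weighted component densities, shape (k, n)
        logp = (np.log(w)[:, None]
                - 0.5 * np.log(2.0 * np.pi * var)[:, None]
                - 0.5 * (x[None, :] - mu[:, None]) ** 2 / var[:, None])
        m = logp.max(axis=0)                         # stabilizer for log-sum-exp
        p = np.exp(logp - m)
        resp = p / p.sum(axis=0)                     # responsibilities
        # M-step: update weights, means and variances from responsibilities
        nk = resp.sum(axis=1)
        w = nk / n
        mu = resp @ x / nk
        var = (resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk
    loglik = np.sum(m + np.log(p.sum(axis=0)))
    return loglik, w, mu, var

def gmm_bic(x, k):
    """BIC = -2 log L + (3k - 1) log n for a k-component 1-D mixture."""
    loglik, *_ = fit_gmm_1d(x, k)
    return -2.0 * loglik + (3 * k - 1) * np.log(len(x))
```

On a clearly bimodal duration distribution, the two-component model wins on BIC despite its extra parameters, which is the kind of comparison the papers report for the BATSE and GBM samples.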
Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs
DEFF Research Database (Denmark)
Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll
2007-01-01
In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth movers distance and the normalized L2 distance for this application. Although the normalized L2 distance was slightly inferior to the Kullback-Leibler distance with respect to classification performance, it has the advantage of obeying the triangle inequality, which allows for efficient searching.
Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.
Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P
2017-04-01
Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
Second order closure modeling of turbulent buoyant wall plumes
Zhu, Gang; Lai, Ming-Chia; Shih, Tsan-Hsing
1992-01-01
Non-intrusive measurements of scalar and momentum transport in turbulent wall plumes, using a combined technique of laser Doppler anemometry and laser-induced fluorescence, have shown some interesting features not present in free jets or plumes. First, buoyancy generation of turbulence is shown to be important throughout the flow field. Combined with low-Reynolds-number turbulence and near-wall effects, this may raise the anisotropy of the turbulence structure beyond the prediction of eddy-viscosity models. Second, the transverse scalar fluxes do not correspond only to the mean scalar gradients, as would be expected from gradient-diffusion modeling. Third, higher-order velocity-scalar correlations which describe turbulent transport phenomena could not be predicted using simple turbulence models. A second-order closure simulation of turbulent adiabatic wall plumes, taking into account recent progress in scalar transport, near-wall effects and buoyancy, is reported in the current study for comparison with the non-intrusive measurements. In spite of the small velocity scale of the wall plumes, the results showed that the low-Reynolds-number correction is not critically important for predicting the adiabatic cases tested and cannot be applied beyond the maximum velocity location. The mean and turbulent velocity profiles are very closely predicted by the second-order closure models, but the scalar field is less satisfactory, with the scalar fluctuation level underpredicted. Strong intermittency of the low-Reynolds-number flow field is suspected to cause these discrepancies. The trends in second- and third-order velocity-scalar correlations, which describe turbulent transport phenomena, are also predicted in general, with the cross-streamwise correlations better than the streamwise ones. Buoyancy terms modeling the pressure correlation are shown to improve the prediction slightly. The effects of the equilibrium time-scale ratio and boundary conditions are also discussed.
Modelling the plasma plume of an assist source in PIAD
Wauer, Jochen; Harhausen, Jens; Foest, Rüdiger; Loffhagen, Detlef
2016-09-01
Plasma ion assisted deposition (PIAD) is a technique commonly used to produce high-precision optical interference coatings. Knowledge regarding plasma properties is most often limited to dedicated scenarios without film deposition. Approaches have been made to gather information on the process plasma in situ in order to detect drifts which are suspected to limit the repeatability of the resulting layer properties. Present efforts focus on radiance monitoring of the plasma plume of an Advanced Plasma Source (APSpro, Bühler) by optical emission spectroscopy to provide the basis for advanced plasma control. In this contribution, modelling results for the plume region are presented to interpret these experimental data. In the framework of the collisional radiative model used, 15 excited neutral argon states in the plasma are considered. Results for the species densities show good consistency with the measured optical emission of various argon 2p-1s transitions. This work was funded by BMBF under grant 13N13213.
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model when the values of some parameters are known.
Mathematical modeling of turbulent reacting plumes - I. General theory and model formulation
Energy Technology Data Exchange (ETDEWEB)
Georgopoulos, P.G.; Seinfeld, J.H.
1986-01-01
A new, comprehensive model for a chemically reacting plume is presented that accounts for the effects of incomplete turbulent macro- and micromixing on chemical reactions between plume and atmospheric constituents. The model is modular in nature, allowing for the use of different levels of approximation of the phenomena involved. The core of the model consists of the evolution equations for reaction progress variables appropriate for evolving spatially varying systems. These equations estimate the interaction of mixing and chemical reaction and require input parameters characterizing internal plume behavior, such as relative dispersion and fine scale plume segregation. The model addresses deficiencies in previous reactive plume models. Part II is devoted to atmospheric application of the model. (authors).
Prediction of Geological Subsurfaces Based on Gaussian Random Field Models
Energy Technology Data Exchange (ETDEWEB)
Abrahamsen, Petter
1997-12-31
During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
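The kriging predictor the thesis builds on reduces to a small linear-algebra problem: with a covariance model and a known mean, the simple-kriging weights come from solving one linear system, and the nugget-free predictor interpolates the data exactly. A minimal 1-D sketch, assuming an illustrative Gaussian covariance (the sill and range values are not from the thesis):

```python
import numpy as np

def gaussian_cov(h, sill=1.0, rng=2.0):
    """Gaussian covariance as a function of separation distance h."""
    return sill * np.exp(-(h / rng) ** 2)

def simple_kriging(x_obs, z_obs, x0, mean=0.0):
    """Simple-kriging prediction and variance at x0 (known mean)."""
    H = np.abs(x_obs[:, None] - x_obs[None, :])   # pairwise distances
    C = gaussian_cov(H)                           # data-to-data covariances
    c0 = gaussian_cov(np.abs(x_obs - x0))         # data-to-target covariances
    w = np.linalg.solve(C, c0)                    # kriging weights
    pred = mean + w @ (z_obs - mean)
    var = gaussian_cov(0.0) - w @ c0              # kriging (error) variance
    return pred, var

x = np.array([0.0, 1.0, 2.5, 4.0])    # observation locations (illustrative)
z = np.array([1.2, 0.8, -0.3, 0.5])   # observed values
pred, var = simple_kriging(x, z, 1.0)
# at an observed location the nugget-free predictor interpolates exactly
```

Predicting at a data location reproduces the datum with zero kriging variance, which is the mean-square-optimality property the thesis exploits.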
van der Swaluw, Eric; de Vries, Wilco; Sauter, Ferd; Aben, Jan; Velders, Guus; van Pul, Addo
2017-04-01
We present high-resolution model results of air pollution and deposition over the Netherlands with three models: the Eulerian grid model LOTOS-EUROS, the Gaussian plume model OPS and the hybrid model LEO. The latter combines results from LOTOS-EUROS and OPS using source apportionment techniques. The hybrid modelling combines the efficiency of calculating at high resolution around sources with the plume model, and the accuracy of taking into account long-range transport and chemistry with a Eulerian grid model. We compare calculations from all three models with measurements for the period 2009-2011 for ammonia, NOx, secondary inorganic aerosols, particulate matter (PM10) and wet deposition of acidifying and eutrophying components (ammonium, nitrate and sulfate). It is found that concentrations of ammonia, NOx and the wet deposition components are best represented by the Gaussian plume model OPS. Secondary inorganic aerosols are best modelled with the LOTOS-EUROS model, and PM10 is best described with the LEO model. Subsequently, for the year 2011, PM10 concentration and reduced-nitrogen dry deposition maps are presented with the OPS and LEO models, respectively. Using the LEO calculations for the production of the PM10 map yields an overall better result than using the OPS calculations for this application. This is mainly because the spatial distribution of the secondary inorganic aerosols is better described in the LEO model than in OPS, and because more (naturally induced) PM10 sources are included in LEO, i.e. the contribution to PM10 of sea salt and wind-blown dust as calculated by the LOTOS-EUROS model. Finally, dry deposition maps of reduced nitrogen over the Netherlands are compared as calculated by the OPS and LEO models, respectively. The differences between both models are overall small (±100 mol/ha) with respect to the peak values observed in the maps (>2000 mol/ha). This is due to the fact that the contribution of dry deposition of reduced
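For reference, the textbook steady-state Gaussian plume formula that underlies plume models of the OPS family can be written down directly. This is the generic form with an image source for ground reflection, not the actual OPS parameterization; the linear growth of the dispersion widths with downwind distance is an illustrative assumption:

```python
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=50.0, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration at (x, y, z).
    Q: source strength (g/s), u: wind speed (m/s), H: effective stack
    height (m). sigma_y = a*x and sigma_z = b*x are illustrative, not
    the OPS dispersion scheme."""
    sy, sz = a * x, b * x
    lateral = np.exp(-y**2 / (2.0 * sy**2))
    # image source below ground models total reflection at the surface
    vertical = (np.exp(-(z - H)**2 / (2.0 * sz**2))
                + np.exp(-(z + H)**2 / (2.0 * sz**2)))
    return Q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

c_centre = gaussian_plume(1000.0, 0.0, 0.0)   # ground level, plume centreline
c_off = gaussian_plume(1000.0, 50.0, 0.0)     # 50 m off-axis
```

The concentration is maximal on the centreline and symmetric in the crosswind coordinate, which is exactly the structure that makes the model cheap to evaluate around many individual sources.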
Evaluation of Gaussian approximations for data assimilation in reservoir models
Iglesias, Marco A.
2013-07-14
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. The performance of those Gaussian approximations is typically evaluated by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our
Hollow Gaussian Schell-model beam and its propagation
Wang, Li-Gang
2007-01-01
In this paper, we present a new model, hollow Gaussian Schell-model beams (HGSMBs), to describe practical dark hollow beams. An analytical propagation formula for HGSMBs passing through a paraxial first-order optical system is derived based on the theory of coherence. Based on the derived formula, an application example showing the influence of spatial coherence on the propagation of beams is illustrated. It is found that the propagation properties of HGSMBs are greatly affected by their spatial coherence. Our model provides a very convenient way of analyzing the propagation properties of partially coherent dark hollow beams.
A Gaussian Mixed Model for Learning Discrete Bayesian Networks.
Balov, Nikolay
2011-02-01
In this paper we address the problem of learning discrete Bayesian networks from noisy data. Considered is a graphical model based on mixture of Gaussian distributions with categorical mixing structure coming from a discrete Bayesian network. The network learning is formulated as a Maximum Likelihood estimation problem and performed by employing an EM algorithm. The proposed approach is relevant to a variety of statistical problems for which Bayesian network models are suitable - from simple regression analysis to learning gene/protein regulatory networks from microarray data.
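The EM machinery for maximum-likelihood estimation of a Gaussian mixture, which the paper extends with a categorical mixing structure from a Bayesian network, can be sketched for a plain two-component 1-D mixture (synthetic data and initial guesses are illustrative, not the paper's network variant):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data from a known mixture: 30% N(-2, 0.5), 70% N(3, 1)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 700)])

# initial guesses for weights, means, standard deviations
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sd = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted maximum-likelihood updates
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
```

After convergence the estimated weights, means and standard deviations recover the generating mixture; the paper's contribution is constraining the mixing weights by a discrete Bayesian network rather than leaving them free as here.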
Molecular Code Division Multiple Access: Gaussian Mixture Modeling
Zamiri-Jafarian, Yeganeh
Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
Non-gaussianity and Statistical Anisotropy in Cosmological Inflationary Models
Valenzuela-Toledo, Cesar A
2010-01-01
We study the statistical descriptors for some cosmological inflationary models that allow us to obtain large levels of non-gaussianity and violations of statistical isotropy. Basically, we study two different classes of models: a model that includes only scalar field perturbations, specifically a subclass of small-field slow-roll models of inflation with canonical kinetic terms, and models that admit both vector and scalar field perturbations. We study the former to show that it is possible to attain very high, including observable, values for the levels of non-gaussianity f_{NL} and \tau_{NL} in the bispectrum B_\zeta and trispectrum T_\zeta of the primordial curvature perturbation \zeta respectively. Such a result is obtained by taking care of loop corrections in the spectrum P_\zeta, the bispectrum B_\zeta and the trispectrum T_\zeta. Sizeable values for f_{NL} and \tau_{NL} arise even if \zeta is generated during inflation. For the latter we study the spectrum P_\zeta, bispectrum B_\zeta and trispectrum T_\zeta...
Generic inference of inflation models by local non-Gaussianity
Dorn, Sebastian; Kunze, Kerstin E; Hofmann, Stefan; Enßlin, Torsten A
2014-01-01
The presence of multiple fields during inflation might seed a detectable amount of non-Gaussianity in the curvature perturbations, which in turn becomes observable in present data sets like the cosmic microwave background (CMB) or the large scale structure (LSS). Within this proceeding we present a fully analytic method to infer inflationary parameters from observations by exploiting higher-order statistics of the curvature perturbations. To keep this analyticity, and thereby to dispense with numerically expensive sampling techniques, a saddle-point approximation is introduced whose precision has been validated for a numerical toy example. Applied to real data, this approach might make it possible to discriminate among the still viable models of inflation.
Fault Tolerant Control Using Gaussian Processes and Model Predictive Control
Directory of Open Access Journals (Sweden)
Yang Xiaoke
2015-03-01
Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.
a Gaussian Process Based Multi-Person Interaction Model
Klinger, T.; Rottensteiner, F.; Heipke, C.
2016-06-01
Online multi-person tracking in image sequences is commonly guided by recursive filters, whose predictive models define the expected positions of future states. When a predictive model deviates too much from the true motion of a pedestrian, which is often the case in crowded scenes due to unpredicted accelerations, the data association is prone to fail. In this paper we propose a novel predictive model on the basis of Gaussian Process Regression. The model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of all interrelated persons. As shown by the experiments, the model is capable of yielding more plausible predictions even in the presence of mutual occlusions or missing measurements. The approach is evaluated on a publicly available benchmark and outperforms other state-of-the-art trackers.
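The Gaussian Process Regression machinery behind such a predictive model reduces to a few linear-algebra steps: condition a zero-mean GP prior on the training pairs and read off the posterior mean and variance. A minimal 1-D sketch with a squared-exponential kernel (kernel and noise settings are illustrative, not the paper's interaction model):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Zero-mean GP regression: posterior mean and variance at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)      # K^{-1} y
    mean = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)             # K^{-1} K_*^T
    var = rbf(x_test, x_test).diagonal() - np.einsum('ij,ji->i', Ks, v)
    return mean, var

x_tr = np.linspace(0.0, 2.0 * np.pi, 8)
y_tr = np.sin(x_tr)
mean, var = gp_predict(x_tr, y_tr, np.array([np.pi / 2]))
# posterior mean tracks the underlying function, with small variance near data
```

The posterior variance is what makes such a predictor useful for tracking: predictions far from observed motion patterns come with honestly inflated uncertainty.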
Nonparaxial multi-Gaussian beam models and measurement models for phased array transducers.
Zhao, Xinyu; Gang, Tie
2009-01-01
A nonparaxial multi-Gaussian beam model is proposed in order to overcome the limitation that paraxial Gaussian beam models lose accuracy in simulating the beam-steering behavior of phased array transducers. Using this nonparaxial multi-Gaussian beam model, the focusing and steering sound fields generated by an ultrasonic linear phased array transducer are calculated and compared with the corresponding results obtained with the paraxial multi-Gaussian beam model and the more exact Rayleigh-Sommerfeld integral model. In addition, with the help of this nonparaxial method, an ultrasonic measurement model is provided to investigate the sensitivity of linear phased array transducers versus steering angle. Comparisons of model predictions with experimental results are also presented to verify the accuracy of the proposed measurement model.
Transform Coding for Point Clouds Using a Gaussian Process Model.
De Queiroz, Ricardo; Chou, Philip A
2017-04-28
We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
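The key property the coder exploits is that the Karhunen-Loève transform of a GP block decorrelates the signal, with coefficient variances equal to the eigenvalues of the block covariance matrix. This is easy to verify numerically; a sketch assuming an exponential covariance function over random point locations (the paper's covariance parameterization and block geometry may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
# random point locations within one block of a point cloud
pts = rng.uniform(0.0, 1.0, size=(16, 3))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = np.exp(-d / 0.3)                 # exponential GP covariance (illustrative)

# the GPT is the Karhunen-Loeve transform: eigenvectors of the covariance
evals, evecs = np.linalg.eigh(K)
order = np.argsort(evals)[::-1]      # sort by decreasing eigenvalue
evals, evecs = evals[order], evecs[:, order]

# transform coefficients of GP samples are decorrelated,
# with variances equal to the eigenvalues
samples = rng.multivariate_normal(np.zeros(16), K, size=20000)
coef = samples @ evecs
emp_var = coef.var(axis=0)
```

Sorting coefficients by decreasing eigenvalue concentrates the energy in the first few coefficients, which is what makes the subsequent quantization and entropy coding effective.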
Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke
Directory of Open Access Journals (Sweden)
Mehmet Talat Odman
2011-08-01
We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure, including multiple-core updrafts, which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric environment, multiple-core updrafts, and detrainment of particulate matter. The number of empirical coefficients appearing in the model theory is reduced through a sensitivity analysis with the Fourier Amplitude Sensitivity Test (FAST). Daysmoke simulations for "bent-over" plumes compare closely with Briggs theory although the two-thirds law is not explicit in Daysmoke. However, the solutions for the "highly-tilted" plume characterized by weak buoyancy, low initial vertical velocity, and large initial plume diameter depart considerably from Briggs theory. Results from a study of weak plumes from prescribed burns at Fort Benning, GA showed simulated ground-level PM2.5 comparing favorably with observations taken within the first eight kilometers of eleven prescribed burns. Daysmoke placed plume tops near the lower end of the range of observed plume tops for six prescribed burns. Daysmoke provides the levels and amounts of smoke injected into regional-scale air quality models. Results from CMAQ with and without an adaptive grid are presented.
Multi-resolution image segmentation based on Gaussian mixture model
Institute of Scientific and Technical Information of China (English)
Tang Yinggan; Liu Dong; Guan Xinping
2006-01-01
Mixture model based image segmentation, which assumes that image pixels are independent and does not consider the position relationship between pixels, is not robust to noise and usually leads to misclassification. A new segmentation method, called the multi-resolution Gaussian mixture model method, is proposed. First, an image pyramid is constructed and a son-father link relationship is built between the levels of the pyramid. Then the mixture model segmentation method is applied to the top level. The segmentation result on the top level is passed top-down to the bottom level according to the son-father link relationship between levels. The proposed method considers not only local but also global image information; it overcomes the effect of noise and can obtain better segmentation results. Experimental results demonstrate its effectiveness.
Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation
Shi, X.; Zhao, Q. H.
2017-09-01
For image segmentation methods based on the Gaussian Mixture Model (GMM), there are two problems: 1) the number of components is usually fixed, i.e., a fixed number of classes, and 2) GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, GMM is designed to model the distribution of pixel intensities in the RS image, and the number of components is treated as a random variable. A prior distribution is built for each parameter. In order to improve noise resistance, a Gibbs function is used to model the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm converges to the optimal number of classes and obtains ideal segmentation results.
A comprehensive breath plume model for disease transmission via expiratory aerosols.
Directory of Open Access Journals (Sweden)
Siobhan K Halloran
The peak in influenza incidence during wintertime in temperate regions represents a longstanding, unresolved scientific question. One hypothesis is that the efficacy of airborne transmission via aerosols is increased at lower humidities and temperatures, conditions that prevail in wintertime. Recent work with a guinea pig model by Lowen et al. indicated that humidity and temperature do modulate airborne influenza virus transmission, and several investigators have interpreted the observed humidity dependence in terms of airborne virus survivability. This interpretation, however, neglects two key observations: the effect of ambient temperature on the viral growth kinetics within the animals, and the strong influence of the background airflow on transmission. Here we provide a comprehensive theoretical framework for assessing the probability of disease transmission via expiratory aerosols between test animals in laboratory conditions. The spread of aerosols emitted from an infected animal is modeled using dispersion theory for a homogeneous turbulent airflow. The concentration and size distribution of the evaporating droplets in the resulting "Gaussian breath plume" are calculated as functions of position, humidity, and temperature. The overall transmission probability is modeled with a combination of the time-dependent viral concentration in the infected animal and the probability of droplet inhalation by the exposed animal downstream. We demonstrate that the breath plume model is broadly consistent with the results of Lowen et al., without invoking airborne virus survivability. The results also suggest that, at least for guinea pigs, variation in viral kinetics within the infected animals is the dominant factor explaining the increased transmission probability observed at lower temperatures.
Ash plume properties retrieved from infrared images: a forward and inverse modeling approach
Cerminara, Matteo; Valade, Sébastien; Harris, Andrew J L
2014-01-01
We present a coupled fluid-dynamic and electromagnetic model for volcanic ash plumes. In a forward approach, the model is able to simulate the plume dynamics from prescribed input flow conditions and generate the corresponding synthetic thermal infrared (TIR) image, allowing a comparison with field-based observations. An inversion procedure is then developed to retrieve ash plume properties from TIR images. The adopted fluid-dynamic model is based on a one-dimensional, stationary description of a self-similar (top-hat) turbulent plume, for which an asymptotic analytical solution is obtained. The electromagnetic emission/absorption model is based on the Schwarzschild's equation and on Mie's theory for disperse particles, assuming that particles are coarser than the radiation wavelength and neglecting scattering. [...] Application of the inversion procedure to an ash plume at Santiaguito volcano (Guatemala) has allowed us to retrieve the main plume input parameters, namely the initial radius $b_0$, velocity $U_...
The Gaussian streaming model and Lagrangian effective field theory
Vlah, Zvonimir; White, Martin
2016-01-01
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of int...
The Gaussian streaming model and convolution Lagrangian effective field theory
Vlah, Zvonimir; Castorina, Emanuele; White, Martin
2016-12-01
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.
Fuzzy local Gaussian mixture model for brain MR image segmentation.
Ji, Zexuan; Xia, Yong; Sun, Quansen; Chen, Qiang; Xia, Deshen; Feng, David Dagan
2012-05-01
Accurate brain tissue segmentation from magnetic resonance (MR) images is an essential step in quantitative brain image analysis. However, due to the existence of noise and intensity inhomogeneity in brain MR images, many segmentation algorithms suffer from limited accuracy. In this paper, we assume that the local image data within each voxel's neighborhood satisfy the Gaussian mixture model (GMM), and thus propose the fuzzy local GMM (FLGMM) algorithm for automated brain MR image segmentation. This algorithm estimates the segmentation result that maximizes the posterior probability by minimizing an objective energy function, in which a truncated Gaussian kernel function is used to impose the spatial constraint and fuzzy memberships are employed to balance the contribution of each GMM. We compared our algorithm to state-of-the-art segmentation approaches in both synthetic and clinical data. Our results show that the proposed algorithm can largely overcome the difficulties raised by noise, low contrast, and bias field, and substantially improve the accuracy of brain MR image segmentation.
A Gaussian graphical model approach to climate networks
Energy Technology Data Exchange (ETDEWEB)
Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data without regard to any physical processes may involve simplifications too strong to describe the dynamics of the climate system appropriately.
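The distinction the paper draws between Pearson and partial correlations can be demonstrated on a toy chain: an indirect X-Z dependence shows up in the correlation matrix but vanishes in the partial correlations obtained from the precision (inverse correlation) matrix. A minimal sketch on synthetic data, not climate fields:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
# a chain X -> Y -> Z: X and Z are linked only indirectly through Y
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)
data = np.column_stack([x, y, z])

C = np.corrcoef(data, rowvar=False)   # Pearson correlations (direct + indirect)
P = np.linalg.inv(C)                  # precision matrix
D = np.sqrt(np.diag(P))
partial = -P / np.outer(D, D)         # off-diagonal entries: partial correlations

# Pearson shows a spurious X-Z edge; the partial correlation removes it
```

A GGM network keeps the direct X-Y and Y-Z edges while dropping the indirect X-Z one, which is exactly the pruning effect described above for climate networks.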
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
Ripping RNA by Force using Gaussian Network Models
Hyeon, Changbong
2016-01-01
Using force as a probe to map the folding landscapes of RNA molecules has become a reality thanks to major advances in single-molecule pulling experiments. Although unfolding pathways under tension are complicated to predict, studies in the context of proteins have shown that topology is the major determinant of the unfolding landscapes. Building on this finding, we study the responses of RNA molecules to force by adapting the Gaussian network model (GNM), which represents RNAs as a bead-spring network with isotropic interactions. Cross-correlation matrices of residue fluctuations, which are calculated analytically using the GNM even upon application of mechanical force, show distinct allosteric communication as RNAs rupture. The model is used to calculate force-extension curves at full thermodynamic equilibrium, and the corresponding unfolding pathways of four RNA molecules subject to a quasi-statically increased force. Our study finds that the analysis using GNM captures qualitatively the unfolding p...
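The core GNM computation, residue fluctuations and cross-correlations from the pseudoinverse of a contact-based Kirchhoff matrix, fits in a few lines. A toy-chain sketch in which the bead geometry and contact cutoff are illustrative, not the paper's RNA structures:

```python
import numpy as np

# toy chain of beads standing in for RNA residues (geometry is illustrative)
n = 20
t = np.arange(n)
coords = np.column_stack([np.cos(t / 2.0), np.sin(t / 2.0), 0.3 * t])

# Kirchhoff (connectivity) matrix: -1 for bead pairs within the cutoff,
# diagonal holds each bead's contact count
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
G = -(d < 2.0).astype(float)
np.fill_diagonal(G, 0.0)
np.fill_diagonal(G, -G.sum(axis=1))

# in GNM, fluctuation cross-correlations are proportional to the
# pseudoinverse of the Kirchhoff matrix
Ginv = np.linalg.pinv(G)
msf = np.diag(Ginv)            # mean-square fluctuation of each bead
```

The diagonal gives per-residue mean-square fluctuations (chain ends are floppier than the interior), and the off-diagonal entries are the cross-correlations whose change under tension the paper uses to track allosteric communication.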
Efficient speaker verification using Gaussian mixture model component clustering.
Energy Technology Data Exchange (ETDEWEB)
De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.
2012-04-01
In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of five different methods that are presented, one method, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the other methods. This method of Gaussian reduction iteratively reduces the size of a GMM by successively merging pairs of component densities. Pairs are selected for merger by using a Kullback-Leibler based metric. Using Runnalls' method of reduction, we
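The pairwise merging step of Runnalls' reduction can be sketched directly: merge two weighted Gaussians moment-preservingly and rank candidate pairs by the Kullback-Leibler-based upper-bound cost. A 1-D illustration (the SV system works with multivariate densities; this is the scalar special case, with illustrative component values):

```python
import numpy as np

def merge(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted 1-D Gaussian components."""
    w = w1 + w2
    a, b = w1 / w, w2 / w
    m = a * m1 + b * m2
    v = a * v1 + b * v2 + a * b * (m1 - m2) ** 2
    return w, m, v

def merge_cost(w1, m1, v1, w2, m2, v2):
    """KL-based upper bound on the dissimilarity cost of merging a pair."""
    w, _, v = merge(w1, m1, v1, w2, m2, v2)
    return 0.5 * (w * np.log(v) - w1 * np.log(v1) - w2 * np.log(v2))

# reduce a 3-component mixture (weight, mean, variance) to 2 components
comps = [(0.3, 0.0, 1.0), (0.3, 0.1, 1.0), (0.4, 5.0, 1.0)]
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
i, j = min(pairs, key=lambda p: merge_cost(*comps[p[0]], *comps[p[1]]))
merged = merge(*comps[i], *comps[j])
# the two nearly identical components (0 and 1) are the cheapest to merge
```

Iterating this cheapest-pair merge shrinks the GMM-UBM to the hash GMM while preserving the mixture's overall mean and covariance at every step.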
Williams, George J.; Kojima, Jun J.; Arrington, Lynn A.; Deans, Matthew C.; Reed, Brian D.; Kinzbach, McKenzie I.; McLean, Christopher H.
2015-01-01
The Green Propellant Infusion Mission (GPIM) will demonstrate the capability of a green propulsion system, specifically, one using the monopropellant, AF-M315E. One of the risks identified for GPIM is potential contamination of sensitive areas of the spacecraft from the effluents in the plumes of AF-M315E thrusters. Plume characterization of a laboratory-model 22 N thruster via optical diagnostics was conducted at NASA GRC in a space-simulated environment. A high-frequency pulsed laser was coupled with an electron-multiplied ICCD camera to perform Raman spectroscopy in the near-field, low-pressure plume. The Raman data yielded plume constituents and temperatures over a range of thruster chamber pressures and as a function of thruster (catalyst) operating time. Schlieren images of the near-field plume enabled calculation of plume velocities and revealed general plume structure of the otherwise invisible plume. The measured velocities are compared to those predicted by a two-dimensional, kinetic model. Trends in data and numerical results are presented from catalyst mid-life to end-of-life. The results of this investigation were coupled with the Raman and Schlieren data to provide an anchor for plume impingement analysis presented in a companion paper. The results of both analyses will be used to improve understanding of the nature of AF-M315E plumes and their impacts to GPIM and other future missions.
3-D thermo-mechanical modeling of plume-induced subduction initiation
Baes, M.; Gerya, T.; Sobolev, S. V.
2016-11-01
Here, we study the 3-D subduction initiation process induced by the interaction between a hot thermo-chemical mantle plume and oceanic lithosphere using thermo-mechanical viscoplastic finite difference marker-in-cell models. Our numerical modeling results show that self-sustaining subduction is induced by plume-lithosphere interaction when the plume is sufficiently buoyant, the oceanic lithosphere is sufficiently old and the plate is weak enough to allow the buoyant plume to pass through it. Subduction initiation occurs following penetration of the lithosphere by the hot plume and the downward displacement of broken, nearly circular segments of lithosphere (proto-slabs) as a result of partially molten plume rocks overriding the proto-slabs. Our experiments show four different deformation regimes in response to plume-lithosphere interaction: a) self-sustaining subduction initiation, in which subduction becomes self-sustaining; b) frozen subduction initiation, in which subduction stops at shallow depths; c) slab break-off, in which the subducting circular slab breaks off soon after formation; and d) plume underplating, in which the plume does not pass through the lithosphere and instead spreads beneath it (i.e., failed subduction initiation). These regimes depend on several parameters, such as the size, composition, and temperature of the plume, the brittle/plastic strength and age of the oceanic lithosphere, and the presence/absence of lithospheric heterogeneities. The results show that subduction initiates and becomes self-sustaining when the lithosphere is older than 10 Myr and the non-dimensional ratio of the plume buoyancy force and lithospheric strength above the plume is higher than approximately 2. The outcomes of our numerical experiments are applicable for subduction initiation in the modern and Precambrian Earth and for the origin of plume-related corona structures on Venus.
Kinetic Gaussian Model with Long-Range Interactions
Institute of Scientific and Technical Information of China (English)
KONG Xiang-Mu; YANG Zhan-Ru
2004-01-01
In this paper dynamical critical phenomena of the Gaussian model with long-range interactions decaying as 1/r^(d+δ) (δ > 0) on d-dimensional hypercubic lattices (d = 1, 2, and 3) are studied. First, the critical points are exactly calculated, and it is found that the critical points depend on the value of δ and the range of interactions. Then the critical dynamics is considered. We calculate the time evolutions of the local magnetizations and the spin-spin correlation functions, and further the dynamic critical exponents are obtained. For one-, two- and three-dimensional lattices, it is found that the dynamic critical exponents are all z = 2 if δ > 2, which agrees with the result when only nearest-neighbour interactions are considered, and that they are all z = δ if 0 < δ < 2. This shows that the dynamic critical exponents are independent of the spatial dimensionality but depend on the value of δ.
Linear $\sigma$ Model in the Gaussian Functional Approximation
Nakamura, I
2001-01-01
We apply a self-consistent relativistic mean-field variational "Gaussian functional" (or Hartree) approximation to the linear $\sigma$ model with spontaneously and explicitly broken chiral O(4) symmetry. We set up the self-consistency, or "gap", and the Bethe-Salpeter equations. We check and confirm the chiral Ward-Takahashi identities, among them the Nambu-Goldstone theorem and the (partial) axial current conservation [CAC], both in and away from the chiral limit. With explicit chiral symmetry breaking we confirm the Dashen relation for the pion mass and partial CAC. We solve numerically the gap and Bethe-Salpeter equations, discuss the solutions' properties and the particle content of the theory.
Hertog, Maarten L. A. T. M.; Scheerlinck, Nico; Nicolaï, Bart M.
2009-01-01
When modelling the behaviour of horticultural products, which exhibit large sources of biological variation, we often run into the issue of non-Gaussian distributed model parameters. This work presents an algorithm to reproduce such correlated non-Gaussian model parameters for use with Monte Carlo simulations. The algorithm works around the problem of non-Gaussian distributions by transforming the observed non-Gaussian probability distributions using a proposed SKN-distribution function before applying the covariance decomposition algorithm to generate Gaussian random co-varying parameter sets. The proposed SKN-distribution function is based on the standard Gaussian distribution function and can exhibit different degrees of both skewness and kurtosis. This technique is demonstrated using a case study on modelling the ripening of tomato fruit, evaluating the propagation of biological variation with time.
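The core idea, generating correlated Gaussian deviates via covariance decomposition and pushing them through a skewing transform, can be sketched as follows. The paper's SKN distribution is not reproduced here; a sinh-arcsinh-style transform stands in for it, and the function name is ours.

```python
import numpy as np

def correlated_non_gaussian(n, corr, skew, rng=None):
    """Draw n correlated samples whose margins are skewed by a
    sinh-arcsinh-style transform of correlated standard normals.
    This is a stand-in for the paper's SKN distribution, whose
    exact functional form is not reproduced here."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(corr)           # decompose the target correlation
    z = rng.standard_normal((n, corr.shape[0])) @ L.T
    return np.sinh(np.arcsinh(z) + skew)   # elementwise skewing transform
```

Because the transform is monotone in each margin, the rank correlation of the Gaussian stage survives into the non-Gaussian samples, which is what lets the covariance decomposition step carry the observed parameter correlations.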
Three-dimensional laboratory modeling of the Tonga trench and Samoan plume interaction
Druken, K. A.; Kincaid, C. R.; Pockalny, R. A.; Griffiths, R. W.; Hart, S. R.
2009-12-01
Plume processes occurring near ridge centers (e.g. Iceland) or mid-plate (e.g. Hawaii) have been well studied; however, the behavior of a plume near a subducting plate is still poorly understood and may in fact differ from the typical expected plume surfacing patterns. We investigate how three-dimensional subduction-driven flow relates to the deformation and dispersal of nearby upwelling plume material and the associated geochemical spatial patterns, with site-specific comparisons to the Tonga trench and Samoan plume system. Eighteen plume-trench laboratory experiments were conducted with varied combinations of subduction motions (down-dip, trench rollback, slab steepening and back-arc extension) and plume parameters (position and temperature). A phenolic plate and glucose syrup, with a temperature-dependent viscosity, are used to model the slab and upper mantle, respectively. Hydraulic pistons control longitudinal, translational and steepening motions of the slab as a simplified kinematic approach to mimic dynamic experiments. Results show that the subduction-induced flow dominates the upwelling strength of the plume, causing a significant portion of the plume head to subduct before reaching the melt zone. The remaining material is entrained around the slab edge into the mantle wedge by the trench rollback-induced flow. The proportion of subducted versus entrained material is predominantly dependent on plume location (relative to the trench) and thermal strength, with additional effects from back-arc extension and plate steepening.
Insights into the formation and dynamics of coignimbrite plumes from one-dimensional models
Engwell, S. L.; de'Michieli Vitturi, M.; Esposti Ongaro, T.; Neri, A.
2016-06-01
Coignimbrite plumes provide a common and effective mechanism by which large volumes of fine-grained ash are injected into the atmosphere. Nevertheless, controls on formation of these plumes as a function of eruptive conditions are still poorly constrained. Herein, two 1-D axisymmetric steady state models were coupled, the first describing the parent pyroclastic density current and the second describing plume rise. Global sensitivity analysis is applied to investigate controls on coignimbrite plume formation and to characterize the coignimbrite source and the maximum plume height attained. For a range of initial mass flow rates between 10^8 and 10^10 kg/s, modeled liftoff distance (the distance at which neutral buoyancy is attained), assuming radial supercritical flow, is controlled by the initial flow radius, gas mass fraction, flow thickness, and temperature. The predicted decrease in median grain size between flow initiation and plume liftoff is negligible. Calculated initial plume vertical velocities, assuming uniform liftoff velocity over the pyroclastic density current invasion area, are much greater (several tens of m/s) than those previously used in modeling coignimbrite plumes (1 m/s). Such velocities are inconsistent with the fine grain size of particles lofted into coignimbrite plumes, highlighting an unavailability of large clasts, possibly due to particle segregation within the flow, prior to plume formation. Source radius and initial vertical velocity have the largest effect on maximum plume height, closely followed by initial temperature. Modeled plume heights are between 25 and 47 km, comparable with Plinian eruption columns, highlighting the potential of such events for distributing fine-grained ash over significant areas.
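The plume-rise half of such a coupled system is typically built on integral equations of the Morton-Taylor-Turner type. The following sketch integrates the classic top-hat Boussinesq form in a uniformly stratified ambient; it is a generic illustration under simplified assumptions, not the paper's coupled PDC/plume model, and all parameter values are ours.

```python
import numpy as np

def plume_rise(Q0, M0, F0, N=0.01, alpha=0.1, dz=1.0, zmax=50000.0):
    """Integrate the classic Morton-Taylor-Turner top-hat plume equations
    in a uniformly stratified ambient with buoyancy frequency N.
    Q = b^2*w (volume flux), M = b^2*w^2 (momentum flux), F (buoyancy flux).
    Returns the neutral-buoyancy level and the height at which the upward
    momentum vanishes (a proxy for maximum rise)."""
    Q, M, F, z = Q0, M0, F0, 0.0
    z_nb = None
    while z < zmax and M > 0.0:
        dQ = 2.0 * alpha * np.sqrt(M) * dz   # turbulent entrainment
        dM = (Q * F / M) * dz                # buoyancy accelerates/decelerates
        dF = -N ** 2 * Q * dz                # stratification erodes buoyancy
        Q, M, F = Q + dQ, M + dM, F + dF
        z += dz
        if z_nb is None and F <= 0.0:
            z_nb = z                         # neutral buoyancy level reached
    return z_nb, z
```

With the buoyancy flux eroded in proportion to the entrained volume flux, the plume overshoots its neutral buoyancy level before the momentum flux dies, reproducing the familiar ordering of spreading level below maximum height discussed in the abstract.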
Critical behavior of Gaussian model on diamond-type hierarchical lattices
Institute of Scientific and Technical Information of China (English)
KONG Xiang-Mu; LI Song
1999-01-01
It is proposed that the Gaussian-type distribution constant b_{q_i} in the Gaussian model depends on the coordination number q_i of site i, and that the relation b_{q_i}/b_{q_j} = q_i/q_j holds among the b_{q_i}. The Gaussian model is then studied on a family of diamond-type hierarchical (or DH) lattices, by the decimation real-space renormalization group following the spin-rescaling method. It is found that the magnetic property of the Gaussian model belongs to the same universality class, and that the critical point K* and the critical exponent ν are given by K* = b_{q_i}/q_i and ν = 1/2, respectively.
Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis
Directory of Open Access Journals (Sweden)
Zhanyu Ma
2014-06-01
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
Surface fire effects on conifer and hardwood crowns--applications of an integral plume model
Matthew Dickinson; Anthony Bova; Kathleen Kavanagh; Antoine Randolph; Lawrence Band
2009-01-01
An integral plume model was applied to the problems of tree death from canopy injury in dormant-season hardwoods and branch embolism in Douglas fir (Pseudotsuga menziesii) crowns. Our purpose was to generate testable hypotheses. We used the integral plume models to relate crown injury to bole injury and to explore the effects of variation in fire...
Gaussian and Lognormal Models of Hurricane Gust Factors
Merceret, Frank
2009-01-01
A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak speeds, given the mean wind speed at various heights of up to 500 feet (150 meters) above ground level. Empirical models to calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel based on data from previous hurricanes. Separate models were developed for Gaussian and offset lognormal distributions for the gust factor. Rather than forecasting a single, specific peak wind speed, this tool provides a probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold. The tool produces the probability from each model that the given threshold will be exceeded. This application does have its limits. The models were tested only in tropical storm conditions associated with the periphery of hurricanes. Winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change those winds' statistical characteristics. These models were developed along the Central Florida seacoast, and their results may not accurately extrapolate to inland areas, or even to coastal sites that are different from those used to build the models. Although this tool cannot be generalized for use in different environments, its methodology could be applied to those locations to develop a similar tool tuned to local conditions.
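Under the Gaussian gust-factor model, the exceedance probability follows in closed form from the normal survival function. The sketch below shows the calculation; the gust-factor mean and standard deviation would come from height- and speed-dependent empirical fits like those described, and the numbers used here are purely illustrative.

```python
import math

def exceedance_probability(mean_speed, threshold, gf_mean, gf_std):
    """Probability that the peak wind exceeds `threshold`, given the mean
    wind speed and a Gaussian model of the gust factor (peak/mean ratio).
    gf_mean and gf_std are the empirically fitted gust-factor statistics
    at the height of interest (illustrative values, not the tool's)."""
    gf_required = threshold / mean_speed
    z = (gf_required - gf_mean) / gf_std
    # Gaussian survival function P(Z > z) via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))
```

For example, with a 40 kt mean wind and a gust factor distributed as N(1.4, 0.1), a 58 kt threshold requires a gust factor of 1.45, half a standard deviation above the mean, so the exceedance probability is about 0.31.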
Vegetation Monitoring with Gaussian Processes and Latent Force Models
Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David
2017-04-01
Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem, where machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we will summarize some of our latest developments to address this issue. We will review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models will be introduced. First, we will derive sensitivity maps for the GP predictive function that allow us to obtain feature ranking from the model and to assess the influence of examples in the solution. Second, we will introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models will be presented through illustrative examples.
Compressive sensing by learning a Gaussian mixture model from measurements.
Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence
2015-01-01
Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
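The marginal-likelihood idea rests on a standard identity: if x follows a GMM and y = Ax + w with Gaussian noise, then y itself follows a GMM whose components have means A·mu_k and covariances A·Sigma_k·A^T + noise. A minimal sketch of that building block is below; the function name is ours, and the full MMLE optimization over GMM parameters is not reproduced.

```python
import numpy as np

def measurement_log_likelihood(y, A, weights, means, covs, noise_var):
    """Log-likelihood of a compressive measurement y = A x + w under a GMM
    prior on x. Integrating x out leaves y GMM-distributed with component
    means A mu_k and covariances A Sigma_k A^T + noise_var I; this quantity
    is the per-measurement building block of a marginal-likelihood estimator."""
    m = A.shape[0]
    log_terms = []
    for w, mu, S in zip(weights, means, covs):
        mean_y = A @ mu
        cov_y = A @ S @ A.T + noise_var * np.eye(m)
        diff = y - mean_y
        _, logdet = np.linalg.slogdet(cov_y)
        quad = diff @ np.linalg.solve(cov_y, diff)
        log_terms.append(np.log(w) - 0.5 * (m * np.log(2 * np.pi) + logdet + quad))
    # log-sum-exp over mixture components for numerical stability
    return np.logaddexp.reduce(log_terms)
```

Summing this quantity over all measurements and maximizing with respect to the weights, means, and covariances is, schematically, what an in-situ marginal-likelihood approach does, with no training signals required.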
Inter-comparison of three-dimensional models of volcanic plumes
Suzuki, Y. J.; Costa, A.; Cerminara, M.; Esposti Ongaro, T.; Herzog, M.; Van Eaton, A. R.; Denby, L. C.
2016-10-01
We performed an inter-comparison study of three-dimensional models of volcanic plumes. A set of common volcanological input parameters and meteorological conditions were provided for two kinds of eruptions, representing a weak and a strong eruption column. From the different models, we compared the maximum plume height, neutral buoyancy level (where plume density equals that of the atmosphere), and level of maximum radial spreading of the umbrella cloud. We also compared the vertical profiles of eruption column properties, integrated across cross-sections of the plume (integral variables). Although the models use different numerical procedures and treatments of subgrid turbulence and particle dynamics, the inter-comparison shows qualitatively consistent results. In the weak plume case (mass eruption rate 1.5 × 10^6 kg s^-1), the vertical profiles of plume properties (e.g., vertical velocity, temperature) are similar among models, especially in the buoyant plume region. Variability among the simulated maximum heights is ~20%, whereas neutral buoyancy level and level of maximum radial spreading vary by ~10%. Time-averaging of the three-dimensional (3D) flow fields indicates an effective entrainment coefficient around 0.1 in the buoyant plume region, with much lower values in the jet region, which is consistent with findings of small-scale laboratory experiments. On the other hand, the strong plume case (mass eruption rate 1.5 × 10^9 kg s^-1) shows greater variability in the vertical plume profiles predicted by the different models. Our analysis suggests that the unstable flow dynamics in the strong plume enhances differences in the formulation and numerical solution of the models. This is especially evident in the overshooting top of the plume, which extends a significant portion (~1/8) of the maximum plume height. Nonetheless, overall variability in the spreading level and neutral buoyancy level is ~20%, whereas that of maximum height is ~10%. This inter
Filling transitions on rough surfaces: inadequacy of Gaussian surface models
Dufour, Renaud; Herminghaus, Stephan
2015-01-01
We present numerical studies of wetting on various topographic substrates, including random topographies. We find good agreement with recent predictions based on an analytical interface-displacement-type theory [Herminghaus 2012; 2012a]. The phase diagrams are qualitatively as predicted but, in contrast to those predictions, the critical points are found to lie within the physical parameter range (i.e., at positive contact angle) in all cases studied. Notably, it is corroborated that Gaussian random surfaces behave qualitatively differently from all non-Gaussian topographies investigated, exhibiting a qualitatively different phase diagram. This shows that Gaussian random surfaces must be used with great care in the context of wetting phenomena.
Multi-Scale Gaussian Processes: a Novel Model for Chaotic Time Series Prediction
Institute of Scientific and Technical Information of China (English)
ZHOU Ya-Tong; ZHANG Tai-Yi; SUN Jian-Cheng
2007-01-01
Based on the classical Gaussian process (GP) model, we propose a multi-scale Gaussian process (MGP) model for the prediction of chaotic time series. The MGP employs a covariance function that is constructed by a scaling function with its different dilations and translations, ensuring that the optimal hyperparameter is easy to determine.
GAUSSIAN COPULA MARGINAL REGRESSION FOR MODELING EXTREME DATA WITH APPLICATION
Directory of Open Access Journals (Sweden)
Sutikno
2014-01-01
Regression is commonly used to determine the relationship between the response variable and the predictor variables, where the parameters are estimated by Ordinary Least Squares (OLS). This method can be used under the assumption that the residuals are normally distributed, N(0, σ²). However, the normality assumption is often violated due to extreme observations, which are common in climate data. Modelling rice harvested area with rainfall as the predictor variable involves such extreme observations, so another approach is needed to handle their presence. The method used to solve this problem is Gaussian Copula Marginal Regression (GCMR), a copula-based regression method. As a case study, the method is applied to model the rice harvested area of rice production centres in East Java, Indonesia, covering the districts of Banyuwangi, Lamongan, Bojonegoro, Ngawi and Jember. The copula approach is chosen because it does not impose strict distributional assumptions, in particular normality, and it can clearly describe dependence at extreme points. The performance of GCMR is compared with OLS and Generalized Linear Models (GLM). Identification of the dependence structure between rice harvest per period (RH) and monthly rainfall showed dependence in all study areas, and tests of copula type showed that the dependence mostly follows the Gumbel copula. Comparison of goodness of fit for the rice harvested area models showed that GCMR is the best method for RH1 in the five districts and for RH2 in Jember district, since it has the lowest AICc. Given the distribution pattern of the response variable, it can be concluded that GCMR is well suited to modelling response variables that are not normally distributed and tend to be strongly skewed.
Modeling of Mauritius as a Heterogeneous Mantle Plume
Moore, J. C.; White, W. M.; Paul, D.; Duncan, R. A.
2008-12-01
) components are modeled using Adiabat1pH: 1) mixing of melts from the enriched and depleted components, and 2) generation and melting of a hybrid peridotite/pyroxenite component. P-T conditions for the system are modeled as a weighted composite of the two components, producing enhanced eclogite and depressed peridotite productivity relative to isentropic melting for each component. In the first model, the mixing of large-degree eclogitic melts (~70%) with small-degree peridotite melts (~2%) produces a limited range of Sm/Yb, inconsistent with Mauritius shield lavas. In the second model, larger-degree eclogite melts (~20%) interact with solid peridotite, with the newly formed hybrid melting to relatively large degrees (~50%). Mixing of these melts with small-degree peridotite melts produces a steep mixing trajectory with a range in Sm/Yb comparable to that seen in the Older Series. Post-erosional lavas for both models melt to only small degrees in the plume tail for both eclogite and peridotite, remaining enigmatic in the context of models and intraplate volcanism.
Ship plume dispersion rates in convective boundary layers for chemistry models
Directory of Open Access Journals (Sweden)
F. Chosson
2008-04-01
Detailed ship plume simulations in various convective boundary layer situations have been performed using a Lagrangian Dispersion Model driven by a Large Eddy Simulation Model. The simulations focus on the early stage (1–2 h) of the plume dispersion regime and take into account the effects of plume rise on dispersion. Results are presented in an attempt to provide the chemical modelling community with a realistic description of the impact of characteristic dispersion on exhaust ship plume chemistry. Plume dispersion simulations are used to derive analytical dilution rate functions. Even though results exhibit striking effects of the plume rise parameter on dispersion patterns, it is shown that initial buoyancy fluxes at the ship stack have a minor effect on the plume dilution rate. After the initial high-dispersion regime, a simple characteristic dilution time scale can be used to parameterize the subgrid plume dilution effects in large-scale chemistry models. The results show that this parameter is directly related to the typical turn-over time scale of the convective boundary layer.
Feedback Message Passing for Inference in Gaussian Graphical Models
Liu, Ying; Anandkumar, Animashree; Willsky, Alan S
2011-01-01
While loopy belief propagation (LBP) performs reasonably well for inference in some Gaussian graphical models with cycles, its performance is unsatisfactory for many others. In particular for some models LBP does not converge, and in general when it does converge, the computed variances are incorrect (except for cycle-free graphs for which belief propagation (BP) is non-iterative and exact). In this paper we propose feedback message passing (FMP), a message-passing algorithm that makes use of a special set of vertices (called a feedback vertex set, or FVS) whose removal results in a cycle-free graph. In FMP, standard BP is employed several times on the cycle-free subgraph excluding the FVS while a special message-passing scheme is used for the nodes in the FVS. The computational complexity of exact inference is $O(k^2 n)$, where $k$ is the number of feedback nodes and $n$ is the total number of nodes. When the size of the FVS is very large, FMP is intractable. Hence we propose approximat...
Gaussian Process Model for Collision Dynamics of Complex Molecules.
Cui, Jie; Krems, Roman V
2015-08-14
We show that a Gaussian process model can be combined with a small number (of order 100) of scattering calculations to provide a multidimensional dependence of scattering observables on the experimentally controllable parameters (such as the collision energy or temperature) as well as the potential energy surface (PES) parameters. For the case of Ar-C_{6}H_{6} collisions, we show that 200 classical trajectory calculations are sufficient to provide a ten-dimensional hypersurface, giving the dependence of the collision lifetimes on the collision energy, internal temperature, and eight PES parameters. This can be used for solving the inverse scattering problem, for the efficient calculation of thermally averaged observables, for reducing the error of the molecular dynamics calculations by averaging over the PES variations, and for the analysis of the sensitivity of the observables to individual parameters determining the PES. Trained by a combination of classical and quantum calculations, the model provides an accurate description of the quantum scattering cross sections, even near scattering resonances.
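The surrogate described is, at its core, GP regression over a small training set of scattering calculations. A minimal sketch with a squared-exponential kernel follows; the paper's ten-dimensional inputs, kernel choices, and hyperparameter training are not reproduced here, and the function names are ours.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, length_scale=1.0, variance=1.0,
               noise=1e-6):
    """Posterior mean and variance of GP regression, conditioning the prior
    on the (here noiseless, jitter-regularized) training observations."""
    K = rbf_kernel(X_train, X_train, length_scale, variance)
    K += noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train, length_scale, variance)
    Kss = rbf_kernel(X_test, X_test, length_scale, variance)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

The posterior variance is what makes the approach useful for sensitivity analysis: it shrinks near the roughly 200 training calculations and grows in unexplored regions of parameter space, indicating where additional scattering calculations would be most informative.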
Modeling the radiation of ultrasonic phased-array transducers with Gaussian beams.
Huang, Ruiju; Schmerr, Lester W; Sedov, Alexander
2008-12-01
A new transducer beam model, called a multi-Gaussian array beam model, is developed to simulate the wave fields radiated by ultrasonic phased-array transducers. This new model overcomes the restrictions on using ordinary multi-Gaussian beam models developed for large single-element transducers in phased-array applications. It is demonstrated that this new beam model can effectively model the steered and focused beams of a linear phased-array transducer.
Space-based Observational Constraints for 1-D Plume Rise Models
Martin, Maria Val; Kahn, Ralph A.; Logan, Jennifer A.; Paguam, Ronan; Wooster, Martin; Ichoku, Charles
2012-01-01
We use a space-based plume height climatology derived from observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the NASA Terra satellite to evaluate the ability of a plume-rise model currently embedded in several atmospheric chemical transport models (CTMs) to produce accurate smoke injection heights. We initialize the plume-rise model with assimilated meteorological fields from the NASA Goddard Earth Observing System and estimated fuel moisture content at the location and time of the MISR measurements. Fire properties that drive the plume-rise model are difficult to estimate, and we test the model with four estimates for active fire area and four for total heat flux, obtained using empirical data and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power (FRP) thermal anomalies available for each MISR plume. We show that the model is not able to reproduce the plume heights observed by MISR over the range of conditions studied (the maximum r^2 obtained in all configurations is 0.3). The model also fails to determine which plumes are in the free troposphere (according to MISR), key information needed for atmospheric models to properly simulate smoke dispersion. We conclude that embedding a plume-rise model using currently available fire constraints in large-scale atmospheric studies remains a difficult proposition. However, we demonstrate the degree to which the fire dynamical heat flux (related to active fire area and sensible heat flux) and atmospheric stability structure influence plume rise, although other factors less well constrained (e.g., entrainment) may also be significant. Using atmospheric stability conditions, MODIS FRP, and MISR plume heights, we offer some constraints on the main physical factors that drive smoke plume rise. We find that smoke plumes reaching high altitudes are characterized by higher FRP and weaker atmospheric stability conditions than those at low altitude, which tend to remain confined
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
Energy Technology Data Exchange (ETDEWEB)
Heasler, Patrick G.
2008-09-30
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
Gaussian noise and the two-network frustrated Kuramoto model
Holder, Andrew B.; Zuparic, Mathew L.; Kalloniatis, Alexander C.
2017-02-01
We examine analytically and numerically a variant of the stochastic Kuramoto model for phase oscillators coupled on a general network. Two populations of phase oscillators are considered, labelled 'Blue' and 'Red', each with their respective networks, internal and external couplings, natural frequencies, and frustration parameters in the dynamical interactions of the phases. We disentangle the different ways that additive Gaussian noise may influence the dynamics by applying it separately on zero modes or normal modes corresponding to a Laplacian decomposition for the sub-graphs for Blue and Red. Under the linearisation ansatz that the oscillators of each respective network remain relatively phase-synchronised centroids or clusters, we are able to obtain simple closed-form expressions using the Fokker-Planck approach for the dynamics of the average angle of the two centroids. In some cases, this leads to subtle effects of metastability that we may analytically describe using the theory of ratchet potentials. These considerations are extended to a regime where one of the populations has fragmented in two. The analytic expressions we derive largely predict the dynamics of the non-linear system seen in numerical simulation. In particular, we find that noise acting on a more tightly coupled population allows for improved synchronisation of the other population where deterministically it is fragmented.
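The stochastic Kuramoto dynamics described above can be sketched with a plain Euler-Maruyama integration. This toy version uses a single all-to-all network rather than the paper's two-network Blue/Red setup, and all parameter values are illustrative:

```python
import math
import random

def kuramoto_em(theta, omega, K, sigma, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama integration of N noisy Kuramoto phase oscillators:
    dtheta_i = (omega_i + (K/N) * sum_j sin(theta_j - theta_i)) dt + sigma dW_i."""
    rng = random.Random(seed)
    n = len(theta)
    th = list(theta)
    for _ in range(steps):
        drift = [omega[i] + K / n * sum(math.sin(th[j] - th[i]) for j in range(n))
                 for i in range(n)]
        th = [th[i] + drift[i] * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
              for i in range(n)]
    return th

def order_parameter(th):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchronised."""
    n = len(th)
    re = sum(math.cos(t) for t in th) / n
    im = sum(math.sin(t) for t in th) / n
    return math.hypot(re, im)

th = kuramoto_em([0.0, 2.0, 4.0], [1.0, 1.1, 0.9], K=2.0, sigma=0.1)
print(order_parameter(th))
```

The order parameter approaches 1 when the coupling K dominates both the frequency spread and the noise amplitude sigma.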
Generation of secondary pollutants in a power plant plume: A model study
Hov, Oystein; Isaksen, Ivar S. A.
A plume model is developed where chemistry and meteorology of the boundary layer interact with a power plant plume which is given a spatial resolution in the crosswind direction. Ozone bulges are formed after 2 1/2-3 h or more, with excess ozone 10-20% above ambient levels in fair weather during summer for a plume comparable to the St. Louis Labadie power plant plume. The chemical activity peaks first on the plume fringes, later close to the central axis. Hydroxyl exceeds 13 × 10^6 molecules cm^-3 within the plume after a few hours and the corresponding SO2 to sulphate conversion rate ranges between 1 and 5% h^-1. Nitric acid formation exceeds sulphuric acid formation during developed stages of the plume. Ambient emissions of nitrogen oxides and hydrocarbons representative for heavily populated areas tend to reduce the relative size of the ozone bulge compared to cases with lower emissions, and medium size power plants give rise to more excess ozone than larger plants. The ozone bulge disappears when the solar radiation is substantially reduced. The fate of the HSOx radicals and their involvement in odd hydrogen regeneration is essential in the understanding of the plume chemistry.
Kremer, T.; Maineult, A. J.; Binley, A.; Vieira, C.; Zamora, M.
2012-12-01
CO2 capture and storage into deep geological formations is one of the main solutions proposed to reduce the concentration of anthropogenic CO2 in the atmosphere. The monitoring of injection sites is a crucial issue to assess the long-term viability of CO2 storage. With the intention of detecting potential leakages, we are investigating the possibility of using electrical resistivity tomography (ERT) techniques to detect CO2 transfers in the shallow sub-surface. ERT measurements were performed during a CO2 injection in a cylindrical tank filled with Fontainebleau sand and saturated with water. Several measurement protocols were tested. The inversion of the resistances measured with the software R3T (Binley and Kemna (2005)) clearly showed that the CO2 injection induces significant changes in the resistivity distribution of the medium, and that ERT has a promising potential for the detection and survey of CO2 transfers through unconsolidated saturated media. We modeled this experiment using Matlab by building a 3D cellular automaton that describes the CO2 spreading, following the geometric and stochastic approach described by Selker et al. (2007). The CO2 circulation is described as independent, circular, and continuous gas channels whose horizontal spread depends on a Gaussian probability law. From the channel distribution we define the corresponding gas concentration distribution and calculate the resistivity of the medium by applying Archie's law for unsaturated conditions. The forward modelling was performed with the software R3T to convert the resistivity distribution into resistance values, each corresponding to one of the electrode arrays used in the experimental measurements. Modelled and measured resistances show a good correlation, except for the electrode arrays located at the top or the bottom of the tank. We improved the precision of the model by considering the effects due to CO2 dissolution in the water which increases the conductivity of the
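The resistivity update in the cellular automaton relies on Archie's law for unsaturated conditions; a minimal sketch (the exponents and pore-water values here are illustrative, not those of the Fontainebleau sand experiment):

```python
def archie_resistivity(rho_w, phi, sw, a=1.0, m=2.0, n=2.0):
    """Archie's law for unsaturated conditions:
    rho = a * rho_w * phi**(-m) * Sw**(-n),
    with porosity phi, water saturation Sw, and empirical a, m, n."""
    return a * rho_w * phi ** (-m) * sw ** (-n)

# Hypothetical values: pore-water resistivity 20 ohm.m, porosity 0.35
rho_full = archie_resistivity(20.0, 0.35, 1.0)   # fully water-saturated cell
rho_gas = archie_resistivity(20.0, 0.35, 0.7)    # 30% of pore space is CO2
print(round(rho_gas / rho_full, 2))              # resistivity increase factor
```

Replacing 30% of the pore water with gas roughly doubles the cell resistivity (factor 1/Sw^n = 1/0.49), which is the contrast the ERT inversion detects.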
Uncertainties in volcanic plume modeling: a parametric study using FPLUME model
Macedonio, Giovanni; Costa, Antonio; Folch, Arnau
2016-04-01
Tephra transport and dispersal models are commonly used for volcanic hazard assessment and tephra dispersal (ash cloud) forecasts. The proper quantification of the parameters defining the source term in the dispersal models, and in particular the estimation of the mass eruption rate, plume height, and particle vertical mass distribution, is of paramount importance for obtaining reliable results in terms of particle mass concentration in the atmosphere and loading on the ground. The study builds upon numerical simulations using FPLUME, an integral steady-state model based on the Buoyant Plume Theory, generalized in order to account for volcanic processes (particle fallout and re-entrainment, water phase changes, effects of wind, etc.). As reference cases for strong and weak plumes, we consider the cases defined during the IAVCEI Commission on tephra hazard modeling inter-comparison exercise. The goal was to explore the leading-order role of each parameter in order to assess which should be better constrained to better quantify the eruption source parameters for use by the dispersal models. Moreover, a sensitivity analysis investigates the role of wind entrainment and intensity, atmospheric humidity, water phase changes, and particle fallout and re-entrainment. Results show that the leading-order parameters are the mass eruption rate and the air entrainment coefficient, especially for weak plumes.
Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation
Steinwolf, Alexander; Rizzi, Stephen A.
2003-01-01
The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
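Departures from Gaussianity of the kind described, wider and longer tails and asymmetry, show up directly in the third and fourth standardized moments of the measured pressure samples; a minimal check (toy data, not flight measurements):

```python
def skew_kurtosis(x):
    """Sample skewness and excess kurtosis; both are 0 for a Gaussian.
    Positive excess kurtosis indicates tails wider/longer than Gaussian;
    nonzero skewness indicates an asymmetric PDF."""
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m3 = sum((v - mu) ** 3 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# Symmetric toy sample with heavy tails: zero skewness, positive excess kurtosis
s, k = skew_kurtosis([-3, -1, 0, 0, 0, 0, 1, 3])
print(s, k)
```

For pressures near step discontinuities, the abstract reports both indicators deviating: longer tails (k > 0) and asymmetry (s != 0).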
Background based Gaussian mixture model lesion segmentation in PET
Energy Technology Data Exchange (ETDEWEB)
Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)
2016-05-15
Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness to clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was
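The key idea, freezing the background class parameters estimated from a lesion-free volume so that EM updates only the lesion classes, can be sketched in one dimension with two classes (a simplification of the paper's eight-class model; all numbers hypothetical):

```python
import math
import random

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def constrained_em(data, bg_mu, bg_var, fg_mu, fg_var, iters=50):
    """Two-class 1D GMM EM where the background class is frozen
    (its mean/variance come from a lesion-free background VOI);
    EM updates only the foreground (lesion) class and the mixing weight."""
    w = 0.5  # foreground mixing proportion
    for _ in range(iters):
        # E-step: posterior responsibility of the foreground class
        resp = []
        for x in data:
            pf = w * norm_pdf(x, fg_mu, fg_var)
            pb = (1 - w) * norm_pdf(x, bg_mu, bg_var)
            resp.append(pf / (pf + pb))
        # M-step: update foreground parameters only
        s = sum(resp)
        w = s / len(data)
        fg_mu = sum(r * x for r, x in zip(resp, data)) / s
        fg_var = max(sum(r * (x - fg_mu) ** 2
                         for r, x in zip(resp, data)) / s, 1e-6)
    return w, fg_mu, fg_var

rng = random.Random(1)
data = ([rng.gauss(1.0, 0.5) for _ in range(300)]      # "background" voxels
        + [rng.gauss(6.0, 1.0) for _ in range(100)])   # "lesion" voxels
w, mu, var = constrained_em(data, bg_mu=1.0, bg_var=0.25, fg_mu=4.0, fg_var=1.0)
print(round(w, 2), round(mu, 1))
```

Because the background class cannot drift toward the lesion, the foreground class is free to lock onto the high-uptake mode even with poor initialization.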
Enceladus Plume Density Modeling and Reconstruction for Cassini Attitude Control System
Sarani, Siamak
2010-01-01
In 2005, Cassini detected jets composed mostly of water, spouting from a set of nearly parallel rifts in the crust of Enceladus, an icy moon of Saturn. During an Enceladus flyby, either reaction wheels or attitude control thrusters on the Cassini spacecraft are used to overcome the external torque imparted on Cassini due to Enceladus plume or jets, as well as to slew the spacecraft in order to meet the pointing needs of the on-board science instruments. If the estimated imparted torque is larger than it can be controlled by the reaction wheel control system, thrusters are used to control the spacecraft. Having an engineering model that can predict and simulate the external torque imparted on Cassini spacecraft due to the plume density during all projected low-altitude Enceladus flybys is important. Equally important is being able to reconstruct the plume density after each flyby in order to calibrate the model. This paper describes an engineering model of the Enceladus plume density, as a function of the flyby altitude, developed for the Cassini Attitude and Articulation Control Subsystem, and novel methodologies that use guidance, navigation, and control data to estimate the external torque imparted on the spacecraft due to the Enceladus plume and jets. The plume density is determined accordingly. The methodologies described have already been used to reconstruct the plume density for three low-altitude Enceladus flybys of Cassini in 2008 and will continue to be used on all remaining low-altitude Enceladus flybys in Cassini's extended missions.
Development and experimental verification of a model for an air jet penetrated by plumes
Directory of Open Access Journals (Sweden)
Xin Wang
2015-03-01
This article presents the fluid mechanics of a ventilation system formed by a momentum source and buoyancy sources. We investigate the interaction between plumes and a non-isothermal air jet for separate sources of buoyancy produced by the plume and the momentum of the air jet. The mathematical model represents the situation in which a plume rises from two heat sources causing buoyancy. The model is used to discuss the interactions involved. The effects of parameters such as the power of the source and the air-flow volume used in the mathematical-physical model are also discussed. An expression is deduced for the trajectory of the non-isothermal air jet penetrated by plumes. Experiments were also carried out to illustrate the effect on the flow of the air jet and to validate the theoretical work. The results show that the buoyancy sources' effect in baffling the descent of the cold air can be strong enough to reverse the direction of the trajectory. However, increasing the distance between the plumes can reduce the effect of the plumes on the jet curve. It is also apparent that increasing the air supply velocity reduces the interference caused by the plumes.
Application of Gaussian Process Modeling to Analysis of Functional Unreliability
Energy Technology Data Exchange (ETDEWEB)
R. Youngblood
2014-06-01
This paper applies Gaussian Process (GP) modeling to analysis of the functional unreliability of a “passive system.” GPs have been used widely in many ways [1]. The present application uses a GP for emulation of a system simulation code. Such an emulator can be applied in several distinct ways, discussed below. All applications illustrated in this paper have precedents in the literature; the present paper is an application of GP technology to a problem that was originally analyzed [2] using neural networks (NN), and later [3, 4] by a method called “Alternating Conditional Expectations” (ACE). This exercise enables a multifaceted comparison of both the processes and the results. Given knowledge of the range of possible values of key system variables, one could, in principle, quantify functional unreliability by sampling from their joint probability distribution, and performing a system simulation for each sample to determine whether the function succeeded for that particular setting of the variables. Using previously available system simulation codes, such an approach is generally impractical for a plant-scale problem. It has long been recognized, however, that a well-trained code emulator or surrogate could be used in a sampling process to quantify certain performance metrics, even for plant-scale problems. “Response surfaces” were used for this many years ago. But response surfaces are at their best for smoothly varying functions; in regions of parameter space where key system performance metrics may behave in complex ways, or even exhibit discontinuities, response surfaces are not the best available tool. This consideration was one of several that drove the work in [2]. In the present paper, (1) the original quantification of functional unreliability using NN [2], and later ACE [3], is reprised using GP; (2) additional information provided by the GP about uncertainty in the limit surface, generally unavailable in other representations, is discussed
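The emulation idea can be sketched with a minimal zero-mean GP regressor fitted to a handful of simulator runs. The kernel choice, length scale, and the toy "simulator" outputs below are illustrative assumptions, not the passive-system model of [2]:

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-(a - b) ** 2 / (2 * length ** 2))

def solve(A, y):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-8):
    """GP posterior mean and variance at xq (zero prior mean, RBF kernel).
    The variance is the 'extra' information a GP emulator offers over
    neural-network or response-surface surrogates."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    kq = [rbf(x, xq) for x in xs]
    mean = sum(k * a for k, a in zip(kq, alpha))
    v = solve(K, kq)
    var = rbf(xq, xq) - sum(k * w for k, w in zip(kq, v))
    return mean, var

# "Simulator" outputs at a few sampled input settings (hypothetical)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]
m, v = gp_predict(xs, ys, 1.5)
print(round(m, 2), v > 0)
```

Sampling many query points through such an emulator, instead of the full simulation code, is what makes plant-scale unreliability quantification tractable; the posterior variance flags regions of parameter space where the limit surface is poorly constrained.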
Contaminant plume configuration and movement: an experimental model
Alencoao, A.; Reis, A.; Pereira, M. G.; Liberato, M. L. R.; Caramelo, L.; Amraoui, M.; Amorim, V.
2009-04-01
The relevance of Science and Technology in our daily routines makes it compulsory to educate citizens who have both scientific literacy and scientific knowledge, allowing them to be engaged citizens in a constantly changing society. Thus, physical and natural sciences are included in school curricula, both in primary and secondary education, with the fundamental aim of developing in the students the skills, attitudes and knowledge needed for the understanding of the planet Earth and its real problems. On the other hand, teaching in Geosciences is increasingly based on practical methodologies which use didactic material, supporting teachers' pedagogical practices and facilitating the learning tasks suggested in the syllabus defined for each school level. Themes related to exploring the different components of the Hydrological Cycle and themes related to natural environment protection and preservation, namely water resources and soil contamination by industrial and urban sewage, are examples of subject matter included in the Portuguese syllabus. These topics motivated the conception and construction of experimental models for the study of the propagation of pollutants in a porous medium. The experimental models allow inducing a horizontal flux of water through different kinds of permeable substances (e.g. sand, silt), with contamination spots on their surface. These experimental activities help students understand the flow path of contaminating substances in the saturated zone and observe the contaminant plume configuration and movement. The activities are explored in a teaching and learning process perspective where students build their own knowledge through real question- and problem-based learning that relates Science, Technology and Society. These activities have been developed in the framework of project ‘Water in the Environment' (CV/PVI/0854) of the POCTI Program (Programa Operacional "Ciência, Tecnologia, Inovação") financed
Isopycnal deepening of an under-ice river plume in coastal waters: Field observations and modeling
Li, S. Samuel; Ingram, R. Grant
2007-07-01
The Great Whale River, located on the southeast coast of Hudson Bay in Canada, forms a large river plume under complete landfast ice during early spring. Short-term fluctuations of plume depth have motivated the present numerical study of an under-ice river plume subject to tidal motion and friction. We introduce a simple two-layer model for predicting the vertical penetration of the under-ice river plume as it propagates over a deepening topography. The topography is idealized but representative. Friction on the bottom surface of the ice cover, on the seabed, and at the plume interface is parameterized using the quadratic friction law. The extent of the vertical penetration is controlled by dimensionless parameters related to tidal motion and river outflow. Model predictions are shown to compare favorably with under-ice plume measurements from the river mouth. This study illustrates that isopycnal deepening occurs when the ice-cover vertical motion creates a reduced flow cross-section during the ebbing tide. This results in supercritical flow and triggers the downward plume penetration in the offshore. For a given river discharge, the freshwater source over a tidal cycle is unsteady in terms of discharge velocity because of the variation in the effective cross-sectional area at the river mouth, through which freshwater flows.
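The supercritical-flow trigger described above can be illustrated with the internal (densimetric) Froude number of the plume layer; the densities, layer thicknesses, and velocity below are hypothetical, not the Great Whale River values:

```python
import math

def densimetric_froude(u, g_prime, h):
    """Internal Froude number Fr = u / sqrt(g' h) for a two-layer flow.
    Fr > 1 (supercritical) is the condition associated with downward
    plume penetration in the two-layer model."""
    return u / math.sqrt(g_prime * h)

g = 9.81
# Reduced gravity for fresh plume water over saline ambient (hypothetical densities)
g_prime = g * (1025.0 - 1000.0) / 1025.0   # about 0.24 m/s^2
h_full, h_ebb = 3.0, 1.0                    # plume layer thickness (m)
u = 0.6                                     # layer-mean velocity (m/s)
print(densimetric_froude(u, g_prime, h_full) > 1)  # thick layer: subcritical
print(densimetric_froude(u, g_prime, h_ebb) > 1)   # ebb-tide constriction: supercritical
```

When the ice cover's vertical motion thins the effective flow cross-section during ebb tide, the same discharge must pass through a smaller h, pushing Fr above 1 and triggering the downward penetration.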
DEFF Research Database (Denmark)
Li, Chunjian; Andersen, Søren Vang
2007-01-01
We propose two blind system identification methods that exploit the underlying dynamics of non-Gaussian signals. The two signal models to be identified are: an Auto-Regressive (AR) model driven by a discrete-state Hidden Markov process, and the same model whose output is perturbed by white Gaussian...
In situ chemical sensing for hydrothermal plume mapping and modeling
Fukuba, T.; Kusunoki, T.; Maeda, Y.; Shitashima, K.; Kyo, M.; Fujii, T.; Noguchi, T.; Sunamura, M.
2012-12-01
/RMS, distribution of pH anomalies were successfully visualized at the Kairei site. During the operations with the dredge sampler, MTD nets, and VMPS, the pH sensors worked successfully except for a few measurement failures due to a problem with a sensor cable. The pH sensor mounted on the AUV "R2D4" recorded a weak low-pH anomaly during a dive at the Yokoniwa site. Representative pH data obtained at the southern Central Indian Ridge will be shown visually on the poster. The spatiotemporally resolved data can also be utilized to develop reliable numerical models to estimate fluxes of energy and matter from geologically active sites. An example of optimization of a numerical model for hydrothermal plume study using 4D pH data obtained at a back-arc hydrothermal system (the Hatoma Knoll, the Okinawa Trough, Japan) will also be presented.
Institute of Scientific and Technical Information of China (English)
Jie You; Qinmin Yang; Jiangang Lu; Youxian Sun
2014-01-01
In this paper, asymmetric Gaussian weighting functions are introduced for the identification of linear parameter varying systems by utilizing an input-output multi-model structure. It is not required to select operating points with uniform spacing, and more flexibility is achieved. To verify the effectiveness of the proposed approach, several weighting functions, including linear, Gaussian and asymmetric Gaussian weighting functions, are evaluated and compared. It is demonstrated through simulations with a continuous stirred tank reactor model that the proposed approach provides more satisfactory approximation.
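An asymmetric Gaussian weighting function simply uses different spreads on either side of each operating point, which is what removes the need for uniformly spaced operating points; a minimal sketch with illustrative centres and widths:

```python
import math

def asym_gauss(x, c, sigma_l, sigma_r):
    """Asymmetric Gaussian weight: different spreads left/right of centre c."""
    s = sigma_l if x < c else sigma_r
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def normalized_weights(x, centers, sl, sr):
    """Weights of the local models at scheduling value x, normalised to sum to 1."""
    w = [asym_gauss(x, c, a, b) for c, a, b in zip(centers, sl, sr)]
    total = sum(w)
    return [v / total for v in w]

# Three operating points with non-uniform spacing and per-side widths (hypothetical)
centers = [1.0, 2.5, 5.0]
w = normalized_weights(3.0, centers, sl=[0.5, 0.8, 1.5], sr=[0.6, 1.2, 1.0])
print([round(v, 3) for v in w])
```

The blended LPV model output is then the weight-averaged output of the local models; at x = 3.0 the middle operating point dominates.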
Beam conditions for radiation generated by an electromagnetic Hermite-Gaussian model source
Institute of Scientific and Technical Information of China (English)
LI Jia; XIN Yu; CHEN Yan-ru
2011-01-01
Within the framework of the correlation theory of electromagnetic laser beams, the far-field cross-spectral density matrix of the light radiated from an electromagnetic Hermite-Gaussian model source is derived. By utilizing the convergence property of Hermite polynomials, the conditions of the matrices for the source to generate an electromagnetic Hermite-Gaussian beam are obtained. Furthermore, in order to generate a scalar Hermite-Gaussian model beam, it is required that the source should be locally rather coherent in the spatial domain.
Numerical models of volcanic eruption plumes: inter-comparison and sensitivity
Costa, Antonio; Suzuki, Yujiro; Folch, Arnau; Cioni, Raffaello
2016-10-01
The accurate description of the dynamics of convective plumes developed during explosive volcanic eruptions represents one of the most crucial and intriguing challenges in volcanology. Eruptive plume dynamics are significantly affected by complex interactions with the surrounding atmosphere, in the case of both strong eruption columns, rising vertically above the tropopause, and weak volcanic plumes, developing within the troposphere and often following bended trajectories. The understanding of eruptive plume dynamics is pivotal for estimating mass flow rates of volcanic sources, a crucial aspect for tephra dispersion models used to assess aviation safety and tephra fallout hazard. For these reasons, several eruption column models have been developed in the past decades, including the more recent sophisticated computational fluid dynamic models.
Statistical imitation system using relational interest points and Gaussian mixture models
CSIR Research Space (South Africa)
Claassens, J
2009-11-01
The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...
Institute of Scientific and Technical Information of China (English)
ZHAO Xinyu; GANG Tie; ZHANG Bixing
2009-01-01
A nonparaxial multi-Gaussian beam model based on the rectangular aperture is proposed in order to overcome the limitation of the paraxial Gaussian beam model, which loses accuracy in off-axis beam fields. With this method, the acoustical field generated by an ultrasonic linear phased array transducer is calculated and compared with the corresponding fields obtained by the Rayleigh-Sommerfeld integral, the paraxial multi-Gaussian beam model, and the Fraunhofer approximation method. Simulation examples show that the nonparaxial multi-Gaussian beam model is not limited by the paraxial approximation condition and can predict efficiently and accurately the acoustical field radiated by a linear phased array transducer over a wide range of steering angles.
Invariant Measures and Asymptotic Gaussian Bounds for Normal Forms of Stochastic Climate Model
Institute of Scientific and Technical Information of China (English)
Yuan YUAN; Andrew J.MAJDA
2011-01-01
The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Recently, techniques from applied mathematics have been utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. It was shown that dyad and multiplicative triad interactions combine with the climatological linear operator interactions to produce a normal form with both strong nonlinear cubic dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. The probability distribution functions (PDFs) of low frequency climate variables exhibit small but significant departure from Gaussianity but have asymptotic tails which decay at most like a Gaussian. Here, rigorous upper bounds with Gaussian decay are proved for the invariant measure of general normal form stochastic models. Asymptotic Gaussian lower bounds are also established under suitable hypotheses.
The role of viscosity contrast on plume structure in laboratory modeling of mantle convection
Prakash, Vivek N; Arakeri, Jaywant H
2016-01-01
We have conducted laboratory experiments to model important aspects of plumes in mantle convection. We focus on the role of the viscosity ratio U (between the ambient fluid and the plume fluid) in determining the plume structure and dynamics. In our experiments, we are able to capture geophysical convection regimes relevant to mantle convection both for hot spots (when U > 1) and plate-subduction (when U < 1) regimes. The planar laser induced fluorescence (PLIF) technique is used for flow visualization and characterizing the plume structures. The convection is driven by compositional buoyancy generated by the perfusion of lighter fluid across a permeable mesh and the viscosity ratio U is systematically varied over a range from 1/300 to 2500. The planform, near the bottom boundary for U=1, exhibits a well-known dendritic line plume structure. As the value of U is increased, a progressive morphological transition is observed from the dendritic-plume structure to discrete spherical plumes, accompanied with th...
Gaussian Cubes: Real-Time Modeling for Visual Exploration of Large Multidimensional Datasets.
Wang, Zhe; Ferreira, Nivan; Wei, Youhao; Bhaskar, Aarthy Sankari; Scheidegger, Carlos
2017-01-01
Recently proposed techniques have finally made it possible for analysts to interactively explore very large datasets in real time. However powerful, the class of analyses these systems enable is somewhat limited: specifically, one can only quickly obtain plots such as histograms and heatmaps. In this paper, we contribute Gaussian Cubes, which significantly improves on state-of-the-art systems by providing interactive modeling capabilities, which include but are not limited to linear least squares and principal components analysis (PCA). The fundamental insight in Gaussian Cubes is that instead of precomputing counts of many data subsets (as state-of-the-art systems do), Gaussian Cubes precomputes the best multivariate Gaussian for the respective data subsets. As an example, Gaussian Cubes can fit hundreds of models over millions of data points in well under a second, enabling novel types of visual exploration of such large datasets. We present three case studies that highlight the visualization and analysis capabilities in Gaussian Cubes, using earthquake safety simulations, astronomical catalogs, and transportation statistics. The dataset sizes range around one hundred million elements and 5 to 10 dimensions. We present extensive performance results, a discussion of the limitations in Gaussian Cubes, and future research directions.
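The fundamental insight, precomputing Gaussian sufficient statistics per data subset instead of raw counts, can be sketched in one dimension: aggregates merge additively, so a model for any union of subsets is fitted without rescanning the data (toy data, scalar case only; the real system stores multivariate moments per cube cell):

```python
class GaussianAggregate:
    """Mergeable sufficient statistics (n, sum x, sum x^2) for a 1-D Gaussian.
    Storing these per subset means fitting a Gaussian to any union of
    subsets reduces to merging aggregates, not rescanning raw data."""

    def __init__(self, n=0, s=0.0, s2=0.0):
        self.n, self.s, self.s2 = n, s, s2

    @classmethod
    def from_data(cls, xs):
        return cls(len(xs), sum(xs), sum(x * x for x in xs))

    def merge(self, other):
        # Aggregates are additive, so merging is O(1) per cube cell
        return GaussianAggregate(self.n + other.n, self.s + other.s,
                                 self.s2 + other.s2)

    def fit(self):
        """Maximum-likelihood mean and variance of the aggregated data."""
        mu = self.s / self.n
        return mu, self.s2 / self.n - mu * mu

a = GaussianAggregate.from_data([1.0, 2.0, 3.0])   # one data subset
b = GaussianAggregate.from_data([5.0, 7.0])        # another subset
mu, var = a.merge(b).fit()
print(mu, var)
```

The multivariate version additionally stores the sum of outer products, from which covariances (and hence least-squares fits or PCA) for any queried subset follow in the same merge-then-fit fashion.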
Modelling reaction front formation and oscillatory behaviour in a contaminant plume
Cribbin, Laura; Fowler, Andrew; Mitchell, Sarah; Winstanley, Henry
2013-04-01
Groundwater contamination is a concern in all industrialised countries that suffer countless spills and leaks of various contaminants. Often, the contaminated groundwater forms a plume that, under the influences of regional groundwater flow, could eventually migrate to streams or wells. This can have catastrophic consequences for human health and local wildlife. The process known as bioremediation removes pollutants in the contaminated groundwater through bacterial reactions. Microorganisms can transform the contaminant into less harmful metabolic products. It is important to be able to predict whether such bioremediation will be sufficient for the safe clean-up of a plume before it reaches wells or lakes. Borehole data from a contaminant plume which resulted from spillage at a coal carbonisation plant in Mansfield, England is the motivation behind modelling the properties of a contaminant plume. In the upper part of the plume, oxygen is consumed and a nitrate spike forms. Deep inside the plume, nitrate is depleted and oscillations of organic carbon and ammonium concentration profiles are observed. While there are various numerical models that predict the evolution of a contaminant plume, we aim to create a simplified model that captures the fundamental characteristics of the plume while being comparable in accuracy to the detailed numerical models that currently exist. To model the transport of a contaminant, we consider the redox reactions that occur in groundwater systems. These reactions deplete the contaminant while creating zones of dominant terminal electron accepting processes throughout the plume. The contaminant is depleted by a series of terminal electron acceptors, the order of which is typically oxygen, nitrate, manganese, iron, sulphate and carbon dioxide. We describe a reaction front, characteristic of a redox zone, by means of rapid reaction and slow diffusion. This aids in describing the depletion of oxygen in the upper part of the plume. To
Analytic model of the effect of poly-Gaussian roughness on rarefied gas flow near the surface
Aksenova, Olga A.; Khalidov, Iskander A.
2016-11-01
The dependence of the macro-parameters of the flow on the surface roughness of the walls and on the geometrical shape of the surface is investigated asymptotically and numerically for a rarefied gas molecular flow at high Knudsen numbers. Surface roughness is approximated in the statistical simulation by a poly-Gaussian random process model (with a probability density that is a mixture of Gaussian densities [1]). Substantial differences are detected among the considered roughness models (Gaussian, poly-Gaussian, and the simpler models applied by other researchers), both in the asymptotic expressions [3] and in the numerical results. For instance, the influence of surface roughness on the momentum and energy exchange coefficients increases noticeably for the poly-Gaussian model compared to the Gaussian one (although the main properties of poly-Gaussian random processes and fields are similar to the corresponding properties of Gaussian processes and fields). The main advantage of the model lies in the relatively simple relations between the parameters of the model and the basic statistical characteristics of the random field. The statistical approach considered permits the application not only of the diffuse-specular model of the local scattering function V0 of reflected gas atoms, but also of the Cercignani-Lampis scattering kernel or phenomenological models of the scattering function. Thus, the comparison between the poly-Gaussian and Gaussian models shows a more significant effect of roughness on aerodynamic values for the poly-Gaussian model.
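A minimal sketch of the poly-Gaussian idea, a surface-height distribution built as a mixture of Gaussian densities. The weights, means, and sigmas below are illustrative and not the parameters of [1]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Poly-Gaussian roughness sketch: surface heights drawn from a mixture of
# Gaussian densities. Mixture parameters are illustrative assumptions.
weights = np.array([0.7, 0.3])
means   = np.array([0.0, 0.0])
sigmas  = np.array([0.5, 2.0])

n = 200_000
comp = rng.choice(2, size=n, p=weights)          # pick a mixture component
heights = rng.normal(means[comp], sigmas[comp])  # sample that component

# excess kurtosis is zero for a pure Gaussian but positive for this mixture,
# one simple way the poly-Gaussian model departs from the Gaussian one
z = (heights - heights.mean()) / heights.std()
excess_kurtosis = (z**4).mean() - 3.0
```

Even with identical means, the scale mixture produces heavy tails, which is one reason the roughness effect on exchange coefficients differs between the two models.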
Directory of Open Access Journals (Sweden)
Cohen S.X.
2014-03-01
Full Text Available In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modelled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise-constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments on unsupervised segmentation of a real dataset.
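The non-spatial core of this approach, plain EM for a Gaussian mixture, can be sketched as follows. The article's contributions (position-dependent mixing proportions and penalized selection of K) are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Plain EM for a 1-D two-component Gaussian mixture -- the classical model
# that the article's spatially varying mixing proportions generalise.
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

w  = np.array([0.5, 0.5])          # mixing proportions
mu = np.array([-1.0, 1.0])         # deliberately poor initial means
sd = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates of proportions, means, and standard deviations
    nk = resp.sum(axis=0)
    w  = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

mu_sorted = np.sort(mu)
```

The spatial extension replaces the global weights `w` with piecewise-constant functions of position, estimated jointly with the partition.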
Modeling Macro- and Micro-Scale Turbulent Mixing and Chemistry in Engine Exhaust Plumes
Menon, Suresh
1998-01-01
Simulation of turbulent mixing and chemical processes in the near-field plume and plume-vortex regimes has recently been carried out successfully using a reduced gas-phase kinetics mechanism, which substantially decreased the computational cost. A detailed mechanism including gas-phase HOx, NOx, and SOx chemistry between the aircraft exhaust and the ambient air in near-field aircraft plumes is compiled. A reduced mechanism capturing the major chemical pathways is developed. Predictions by the reduced mechanism are found to be in good agreement with those by the detailed mechanism. With the reduced chemistry, CPU time is reduced by a factor of more than 3.5 for the near-field plume modeling. Distributions of major chemical species are obtained and analyzed. The computed sensitivities of major species with respect to each reaction step are deduced to identify the dominant gas-phase kinetic reaction pathways in the jet plume. Both the near-field plume and the plume-vortex regimes were investigated using advanced mixing models. In the near field, a stand-alone mixing model was used to investigate the impact of turbulent mixing on the micro- and macro-scale mixing processes using a reduced reaction kinetics model. The plume-vortex regime was simulated using a large-eddy simulation model. The vortex plumes behind Boeing 737 and 747 aircraft were simulated along with the relevant kinetics. Many features of the computed flow field show reasonable agreement with data. The entrainment of the engine plumes into the wing-tip vortices and also the partial detrainment of the plume were numerically captured. The impact of fluid mechanics on the chemical processes was also studied. Results show that there are significant differences between spatial and temporal simulations, especially in the predicted SO3 concentrations. This has important implications for the prediction of sulfuric acid aerosols in the wake and may partly explain the discrepancy between past numerical studies
Kim, Y.; Seigneur, C.; Duclaux, O.
2014-04-01
Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosol (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. The investigation of the criteria for injecting
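The subgrid-scale Gaussian puff that PinG models embed in the Eulerian host can be sketched as below. The dispersion parameters and release values are illustrative placeholders, not the ones used in the study:

```python
import numpy as np

# Gaussian puff sketch: concentration field from an instantaneous release of
# mass Q, advected by wind u, with ground reflection via an image source.
# Sigma values and source parameters are illustrative assumptions.
def puff_concentration(x, y, z, t, Q=1.0, u=5.0, H=50.0,
                       sx=20.0, sy=20.0, sz=10.0):
    norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    horiz = np.exp(-((x - u * t) ** 2) / (2 * sx**2) - y**2 / (2 * sy**2))
    # image source below ground enforces zero flux through z = 0
    vert = (np.exp(-((z - H) ** 2) / (2 * sz**2))
            + np.exp(-((z + H) ** 2) / (2 * sz**2)))
    return norm * horiz * vert

# the puff centre after 10 s sits at x = u*t = 50 m; concentration peaks there
c_centre = puff_concentration(50.0, 0.0, 50.0, 10.0)
c_offset = puff_concentration(150.0, 0.0, 50.0, 10.0)
```

In a full PinG model, each puff grows (sigmas increase with travel time), carries its own chemistry, and is handed over to the Eulerian grid once it reaches grid scale.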
Information Geometric Complexity of a Trivariate Gaussian Statistical Model
Directory of Open Access Journals (Sweden)
Domenico Felice
2014-05-01
Full Text Available We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics.
EigenGP: Sparse Gaussian process models with data-dependent eigenfunctions
Qi, Yuan; Dai, Bo; Zhu, Yao
2012-01-01
Gaussian processes (GPs) provide a nonparametric representation of functions. However, classical GP inference suffers from high computational cost and it is difficult to design nonstationary GP priors in practice. In this paper, we propose a sparse Gaussian process model, EigenGP, based on the Karhunen-Loeve (KL) expansion of a GP prior. We use the Nystrom approximation to obtain data dependent eigenfunctions and select these eigenfunctions by evidence maximization. This selection reduces the...
Socolofsky, S. A.; Rezvani, M.
2010-12-01
The accidental blowout plume of the Deepwater Horizon (DH) oil well is an unprecedented event that will have far-reaching environmental, economic, and societal impact. The subsurface structure of the blowout plume, including its layered system of intrusions, conforms qualitatively to that predicted in the literature; however, new modeling tools are currently needed to produce highly-resolved predictions of such a complex plume in the stratified and flowing ocean. We present laboratory experiments of multiphase plumes in stratification and crossflow to understand the physical mechanisms that lead to separation among the buoyant dispersed phases (oil and gas) and the entrained and dissolved constituents in the continuous phase. Scale analysis indicates that the DH plume is stratification dominated, and observed locations of hydrocarbon intrusion layers agree well with the experimentally derived empirical scaling laws. New flow visualization measurements in gas plumes in stratification demonstrate that unsteady plume oscillation and detrainment events result from regular shedding of coherent structures on the order of the plume width and are not directly related to the stratification frequency. Similar particle image velocimetry (PIV) measurements in weak crossflows characterize the transport mechanisms in the plume wake. The results of these experiments will be used in the context of a National Science Foundation RAPID grant to validate a nested large eddy simulations (LES) / Reynolds averaged Navier-Stokes (RANS) model of the DH plume, and early results from this model demonstrate its feasibility to capture the unsteady and complex structure of the plume evolution.
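The stratification-dominated trap-height scaling invoked above has the generic Morton-Turner form h_T ~ C (B/N^3)^(1/4); the constant and the buoyancy values below are illustrative assumptions, not the study's numbers:

```python
# Trap-height scaling sketch for a buoyant plume in linear stratification:
# h_T = C * (B / N**3) ** 0.25, with B the kinematic buoyancy flux and N the
# buoyancy frequency. C, B, and N below are assumed values for illustration.
C = 2.8          # empirical constant (assumed)
B = 0.1          # kinematic buoyancy flux, m^4/s^3 (assumed)
N = 2.0e-3       # buoyancy frequency, 1/s (typical deep-ocean order, assumed)
h_T = C * (B / N**3) ** 0.25   # height above the source where fluid intrudes
```

Comparing heights of this form against observed intrusion layers is how the scale analysis classifies a plume as stratification dominated.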
Mena, Francisco; Bond, Tami C.; Riemer, Nicole
2017-08-01
Residential biofuel combustion is an important source of aerosols and gases in the atmosphere. The change in cloud characteristics due to biofuel burning aerosols is uncertain, in part, due to the uncertainty in the added number of cloud condensation nuclei (CCN) from biofuel burning. We provide estimates of the CCN activity of biofuel burning aerosols by explicitly modeling plume dynamics (coagulation, condensation, chemical reactions, and dilution) in a young biofuel burning plume from emission until plume exit, defined here as the condition when the plume reaches ambient temperature and specific humidity through entrainment. We found that aerosol-scale dynamics affect CCN activity only during the first few seconds of evolution, after which the CCN efficiency reaches a constant value. Homogenizing factors in a plume are co-emission of semi-volatile organic compounds (SVOCs) or emission at small particle sizes; SVOC co-emission can be the main factor determining plume-exit CCN for hydrophobic or small particles. Coagulation limits emission of CCN to about 10^16 per kilogram of fuel. Depending on emission factor, particle size, and composition, some of these particles may not activate at low supersaturation (ssat). Hygroscopic Aitken-mode particles can contribute to CCN through self-coagulation but have a small effect on the CCN activity of accumulation-mode particles, regardless of composition differences. Simple models (monodisperse coagulation and average hygroscopicity) can be used to estimate plume-exit CCN within about 20 % if particles are unimodal and have homogeneous composition, or when particles are emitted in the Aitken mode even if they are not homogeneous. On the other hand, if externally mixed particles are emitted in the accumulation mode without SVOCs, an average hygroscopicity overestimates emitted CCN by up to a factor of 2. This work has identified conditions under which particle populations become more homogeneous during plume processes. This
A gaussian model for simulated geomagnetic field reversals
Wicht, Johannes; Meduri, Domenico G.
2016-10-01
Field reversals are the most spectacular events in the geomagnetic history but remain little understood. Here we explore the dipole behaviour in particularly long numerical dynamo simulations to reveal statistically significant conditions required for reversals and excursions to happen. We find that changes in the axial dipole moment behaviour are crucial, while the equatorial dipole moment plays a negligible role. For small Rayleigh numbers, the axial dipole always remains strong and stable and obeys a clearly Gaussian probability distribution. Only when the Rayleigh number is increased sufficiently can the axial dipole reverse, and its distribution then becomes decisively non-Gaussian. Increased likelihoods around zero indicate a pronounced lingering in a new low dipole moment state. Reversals and excursions can only happen when axial dipole fluctuations are large enough to drive the system from the high dipole moment state assumed during stable polarity epochs into the low dipole moment state. Since it is just a matter of chance which polarity is amplified during dipole recovery, reversals and grand excursions, i.e. excursions during which the dipole assumes reverse polarity, are equally likely. While the overall reversal behaviour seems Earth-like, a closer comparison to palaeomagnetic findings suggests that the simulated events last too long and that grand excursions are too rare. For a particularly large Ekman number we find a second but less Earth-like type of reversals where the total field decays and recovers after a certain time.
Models of the SL9 Impacts II. Radiative-hydrodynamic Modeling of the Plume Splashback
Deming, Drake; Harrington, Joseph
2001-01-01
We model the plume "splashback" phase of the SL9 collisions with Jupiter using the ZEUS-3D hydrodynamic code. We modified the ZEUS code to include gray radiative transport, and we present validation tests. We couple the infalling mass and momentum fluxes of SL9 plume material (from paper I) to a jovian atmospheric model. A strong and complex shock structure results. The modeled shock temperatures agree well with observations, and the structure and evolution of the modeled shocks account for the appearance of high-excitation molecular line emission after the peak of the continuum light curve. The splashback region cools by radial expansion as well as by radiation. The morphology of our synthetic continuum light curves agrees with observations over a broad wavelength range (0.9 to 12 microns). A feature of our ballistic plume is a shell of mass at the highest velocities, which we term the "vanguard". Portions of the vanguard ejected on shallow trajectories produce a lateral shock front, whose initial expansion a...
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
Energy Technology Data Exchange (ETDEWEB)
Stein, Michael [Univ. of Chicago, IL (United States)
2017-03-13
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the
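A minimal Gaussian process prediction of the kind discussed, with a squared-exponential covariance and hand-set hyperparameters (no estimation), might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal Gaussian process regression sketch with a squared-exponential
# covariance. Hyperparameters are fixed by hand for illustration rather
# than estimated as in the research described above.
def sqexp(a, b, ell=1.0, sf=1.0):
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

xtr = np.linspace(0, 6, 30)                  # training inputs
ytr = np.sin(xtr) + 0.1 * rng.normal(size=30)
xte = np.array([1.5, 3.0, 4.5])              # test inputs

K = sqexp(xtr, xtr) + 0.1**2 * np.eye(30)    # noise variance on the diagonal
Ks = sqexp(xte, xtr)
mean = Ks @ np.linalg.solve(K, ytr)          # posterior predictive mean
```

The choice of covariance function encodes the smoothness assumptions; the cited work's contributions concern exactly such covariance classes and their large-sample behaviour.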
American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution
DEFF Research Database (Denmark)
Stentoft, Lars Peter
In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret...... the effect of the risk neutralization, and we derive approximation procedures which allow for a computationally efficient implementation of the model. When the model is estimated on financial returns data the results indicate that compared to the Gaussian case the extension is important. A study of the model...
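A sketch of the GARCH(1,1) recursion underlying such models. For brevity it uses Gaussian innovations where the paper uses the Normal Inverse Gaussian distribution, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# GARCH(1,1) return simulation sketch. Gaussian innovations stand in for the
# paper's Normal Inverse Gaussian ones; omega/alpha/beta are illustrative.
omega, alpha, beta = 1e-6, 0.08, 0.90
n = 10_000
h = np.empty(n)   # conditional variances
r = np.empty(n)   # returns
h[0] = omega / (1 - alpha - beta)   # start at the unconditional variance
for t in range(n):
    if t > 0:
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.normal()

sample_var = r.var()
uncond_var = omega / (1 - alpha - beta)
```

Pricing an American option on top of this requires the risk-neutral version of the dynamics and an early-exercise algorithm, which is where the paper's approximation procedures come in.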
Space-time clutter model for airborne bistatic radar with non-Gaussian statistics
Institute of Scientific and Technical Information of China (English)
Duan Rui; Wang Xuegang; Yiang Chaoshu; Chen Zhuming
2009-01-01
To validate potential space-time adaptive processing (STAP) algorithms for airborne bistatic radar clutter suppression under nonstationary and non-Gaussian clutter environments, a statistically non-Gaussian space-time clutter model for varying bistatic geometrical scenarios is presented. The effects included in the model are the range dependency of the bistatic clutter spectrum and the clutter power variation across range-angle cells. To capture these effects, a new approach to coordinate-system conversion is introduced to formulate the bistatic geometrical model, and a bistatic non-Gaussian amplitude clutter representation method based on a compound model is introduced. The veracity of the geometrical model is validated using the bistatic configuration parameters of the multi-channel airborne radar measurement (MCARM) experiment. Simulation results demonstrate that the proposed model can accurately shape the space-time clutter spectrum tied to a specific airborne bistatic radar scenario and can characterize the heterogeneity of the clutter amplitude distribution in practical clutter environments.
Anomalous scaling in a non-Gaussian random shell model for passive scalars
Institute of Scientific and Technical Information of China (English)
2007-01-01
In this paper, we introduce a shell model of Kraichnan's passive scalar problem. Different from the original problem, the prescribed random velocity field is non-Gaussian and δ-correlated in time, and its introduction is inspired by She and Lévêque (Phys. Rev. Lett. 72, 336 (1994)). For comparison, we also give the passive scalar advected by a Gaussian random velocity field. The anomalous scaling exponents H(p) of the passive scalar advected by these two kinds of random velocities are determined for structure functions with values of p up to 15 by Monte Carlo simulations of the random shell model, with Gear methods used to solve the stochastic differential equations. We find that the H(p) advected by the non-Gaussian random velocity is no more anomalous than that advected by the Gaussian random velocity. Whether the advecting velocity is non-Gaussian or Gaussian, similar scaling exponents of the passive scalar are obtained with the same molecular diffusivity.
Chemistry in plumes of high-flying aircraft with H2 combustion engines: a modelling study
Directory of Open Access Journals (Sweden)
G. Weibring
Full Text Available Recent discussions on high-speed civil transport (HSCT systems have renewed the interest in the chemistry of supersonic-aircraft plumes. The engines of these aircraft emit large concentrations of radicals like O, H, OH, and NO. In order to study the effect of these species on the composition of the atmosphere, the detailed chemistry of an expanding and cooling plume is examined for different expansion models.
For a representative flight at 26 km the computed trace gas concentrations do not differ significantly for different models of the expansion behaviour. However, it is shown that the distributions predicted by all these models differ significantly from those adopted in conventional meso-scale and global models, in which the plume chemistry is not treated in detail. This applies in particular to the reservoir species HONO and H2O2.
A comparison of Gamma and Gaussian dynamic convolution models of the fMRI BOLD response.
Chen, Huafu; Yao, Dezhong; Liu, Zuxiang
2005-01-01
Blood oxygenation level-dependent (BOLD) contrast-based functional magnetic resonance imaging (fMRI) has been widely utilized to detect brain neural activities, and great effort is now focused on the hemodynamic processes of different brain regions activated by a stimulus. The focus of this paper is the comparison of Gamma and Gaussian dynamic convolution models of the fMRI BOLD response. The convolutions are between the perfusion function of the neural response to a stimulus and a Gaussian or Gamma function. The parameters of the two models are estimated by a nonlinear least-squares optimization algorithm for the fMRI data of eight subjects collected in a visual stimulus experiment. The results show that the Gaussian model is better than the Gamma model in fitting the data. The model parameters differ between the left and right occipital regions, indicating that the dynamic processes seem to differ across cerebral functional regions.
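The two convolution models being compared can be sketched directly: a stimulus train convolved with a Gamma or a Gaussian kernel. The kernel parameters below are illustrative, not the fitted values from the eight subjects:

```python
import numpy as np

# Sketch of the two hemodynamic response kernels compared in the paper: the
# BOLD response is the stimulus convolved with a Gamma or Gaussian function.
# All kernel parameters are illustrative assumptions.
t = np.arange(0, 30, 0.5)                      # seconds

def gamma_hrf(t, k=4.0, theta=1.2):
    h = t ** (k - 1) * np.exp(-t / theta)
    return h / h.sum()

def gaussian_hrf(t, mu=6.0, sigma=2.0):
    h = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return h / h.sum()

stimulus = np.zeros_like(t)
stimulus[0] = 1.0                              # brief stimulus at t = 0

bold_gamma = np.convolve(stimulus, gamma_hrf(t))[: len(t)]
bold_gauss = np.convolve(stimulus, gaussian_hrf(t))[: len(t)]

peak_gamma = t[np.argmax(bold_gamma)]          # Gamma mode at (k-1)*theta
peak_gauss = t[np.argmax(bold_gauss)]          # Gaussian mode at mu
```

In the paper's setting, the kernel parameters (and hence peak time and width) are what the nonlinear least-squares fit estimates per region.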
Infrared signature modelling of a rocket jet plume - comparison with flight measurements
Rialland, V.; Guy, A.; Gueyffier, D.; Perez, P.; Roblin, A.; Smithson, T.
2016-01-01
The infrared signature modelling of rocket plumes is a challenging problem involving rocket geometry, propellant composition, combustion modelling, trajectory calculations, fluid mechanics, atmosphere modelling, calculation of gas and particles radiative properties and of radiative transfer through the atmosphere. This paper presents ONERA simulation tools chained together to achieve infrared signature prediction, and the comparison of the estimated and measured signatures of an in-flight rocket plume. We consider the case of a solid rocket motor with aluminized propellant, the Black Brant sounding rocket. The calculation case reproduces the conditions of an experimental rocket launch, performed at White Sands in 1997, for which we obtained high quality infrared signature data sets from DRDC Valcartier. The jet plume is calculated using an in-house CFD software called CEDRE. The plume infrared signature is then computed on the spectral interval 1900-5000 cm-1 with a step of 5 cm-1. The models and their hypotheses are presented and discussed. Then the resulting plume properties, radiance and spectra are detailed. Finally, the estimated infrared signature is compared with the spectral imaging measurements. The discrepancies are analyzed and discussed.
Transient Plume Model Testing Using LADEE Spacecraft Attitude Control System Operations
Woronowicz, Michael
2011-01-01
We have learned it is conceivable that the Neutral Mass Spectrometer (NMS) on board the Lunar Atmosphere and Dust Environment Explorer (LADEE) could measure gases from surface-reflected Attitude Control System (ACS) thruster plumes. At minimum altitude the measurement would be maximized and the gravitational influence minimized (the "short" time-of-flight (TOF) situation). This could be used to verify aspects of thruster plume modelling: we model the transient disturbance to NMS measurements due to ACS gases reflected from the lunar surface and observe the evolution of various model characteristics as measured by the NMS, including species magnitudes, TOF measurements, angular distribution, and species-separation effects.
de'Michieli Vitturi, M.; Engwell, S. L.; Neri, A.; Barsotti, S.
2016-10-01
The behavior of plumes associated with explosive volcanic eruptions is complex and dependent on eruptive source parameters (e.g. exit velocity, gas fraction, temperature and grain-size distribution). It is also well known that the atmospheric environment interacts with volcanic plumes produced by explosive eruptions in a number of ways. The wind field can bend the plume but also affect atmospheric air entrainment into the column, enhancing its buoyancy and in some cases, preventing column collapse. In recent years, several numerical simulation tools and observational systems have investigated the action of eruption parameters and wind field on volcanic column height and column trajectory, revealing an important influence of these variables on plume behavior. In this study, we assess these dependencies using the integral model PLUME-MoM, whereby the continuous polydispersity of pyroclastic particles is described using a quadrature-based moment method, an innovative approach in volcanology well-suited for the description of the multiphase nature of magmatic mixtures. Application of formalized uncertainty quantification and sensitivity analysis techniques enables statistical exploration of the model, providing information on the extent to which uncertainty in the input or model parameters propagates to model output uncertainty. In particular, in the framework of the IAVCEI Commission on tephra hazard modeling inter-comparison study, PLUME-MoM is used to investigate the parameters exerting a major control on plume height, applying it to a weak plume scenario based on 26 January 2011 Shinmoe-dake eruptive conditions and a strong plume scenario based on the climatic phase of the 15 June 1991 Pinatubo eruption.
Directory of Open Access Journals (Sweden)
Shih-Sian Cheng
2004-12-01
Full Text Available We propose a self-splitting Gaussian mixture learning (SGML algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM based on a self-splitting validity measure, Bayesian information criterion (BIC. It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning the GMM with a given component number. In our experiments on clustering of a synthetic data set and the text-independent speaker identification task, we have observed the ability of the SGML for model-based clustering and automatically determining the model complexity of the speaker GMMs for speaker identification.
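The BIC comparison at the heart of the self-splitting criterion can be illustrated as follows. To keep the sketch short, the two-component fit splits the data at zero instead of running full EM:

```python
import numpy as np

rng = np.random.default_rng(5)

# BIC sketch in the spirit of SGML: compare a 1-component against a
# 2-component Gaussian fit on clearly bimodal data. The 2-component
# parameters come from a crude split at zero, not full EM, for brevity.
x = np.concatenate([rng.normal(-4, 1, 400), rng.normal(4, 1, 400)])
n = len(x)

def gauss_loglik(x, mu, sd):
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2))

# one component: 2 free parameters (mean, std)
ll1 = gauss_loglik(x, x.mean(), x.std())
bic1 = -2 * ll1 + 2 * np.log(n)

# two components: 5 free parameters (2 means, 2 stds, 1 weight)
lo, hi = x[x < 0], x[x >= 0]
w = len(lo) / n
dens = (w * np.exp(-0.5 * ((x - lo.mean()) / lo.std()) ** 2) / (lo.std() * np.sqrt(2 * np.pi))
        + (1 - w) * np.exp(-0.5 * ((x - hi.mean()) / hi.std()) ** 2) / (hi.std() * np.sqrt(2 * np.pi)))
ll2 = np.sum(np.log(dens))
bic2 = -2 * ll2 + 5 * np.log(n)
```

SGML keeps splitting a component as long as the penalized likelihood (lower BIC) improves, which is exactly the comparison `bic2 < bic1` here.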
3D Numeric modeling of slab-plume interaction in Kamchatka
Constantin Manea, Vlad; Portnyagin, Maxim; Manea, Marina
2010-05-01
Volcanic rocks located in the central segment of the Eastern Volcanic Belt of Kamchatka show a high variability, both in age and in geochemical composition. Three principal groups have been identified: an older group (7-12 My) represented by rich alkaline and transitional basalts, a 7-8 My group exemplified by alkaline basalts of extreme plume type, and a younger group (3-8 My) characterized by calc-alkaline andesite and dacite rocks. Moreover, the younger group shows an adakitic signature. The magmas are assumed to originate from two principal sources: a subduction-modified Pacific MORB-type mantle and a plume-type mantle. In this paper we study the interaction of a cold subducting slab and a hot plume by means of 3D numeric modeling integrated 30 My back in time. Our preliminary modeling results show a short episode of plume material inflowing into the mantle wedge at ~10 My, consistent with the second rock group (plume-like). Our models also predict slab-edge melting consistent with the youngest group.
Energy Technology Data Exchange (ETDEWEB)
Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)
2016-07-05
Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
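The classic continuous-source Gaussian plume formula that such Gaussian-MLA hybrids start from can be sketched as below. The sigma correlations and source values are assumed for illustration, not taken from the study:

```python
import numpy as np

# Classic Gaussian plume sketch for a continuous point source of strength Q
# at effective height H in a steady wind u, with ground reflection. The
# sigma(x) fits below are rough, assumed correlations, not a stability-class
# parameterisation from the paper.
def plume_concentration(x, y, z, Q=1.0, u=3.0, H=20.0):
    sy = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # lateral spread (assumed fit)
    sz = 0.06 * x / np.sqrt(1 + 0.0015 * x)   # vertical spread (assumed fit)
    norm = Q / (2 * np.pi * u * sy * sz)
    lateral = np.exp(-y**2 / (2 * sy**2))
    # image source below ground reflects the plume at z = 0
    vert = np.exp(-(z - H) ** 2 / (2 * sz**2)) + np.exp(-(z + H) ** 2 / (2 * sz**2))
    return norm * lateral * vert

# ground-level centreline concentration first rises, then falls with distance
xs = np.array([50.0, 200.0, 2000.0])
c = plume_concentration(xs, 0.0, 0.0)
```

In the hybrid approach, outputs of this physical model (rather than raw monitoring parameters) become inputs or features for the SVM/RBF/BP networks, which is what improves the prediction.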
de Michele, Marcello; Raucoules, Daniel; Arason, Þórður; Spinetti, Claudia; Corradini, Stefano; Merucci, Luca
2016-04-01
The retrieval of both height and velocity of a volcanic plume is an important issue in volcanology. As an example, it is known that large volcanic eruptions can temporarily alter the climate, causing global cooling and shifting precipitation patterns; the ash/gas dispersion in the atmosphere, their impact and lifetime around the globe, greatly depends on the injection altitude. Plume height information is critical for ash dispersion modelling and air traffic security. Furthermore, plume height during explosive volcanism is the primary parameter for estimating mass eruption rate. Knowing the plume altitude is also important to get the correct amount of SO2 concentration from dedicated spaceborne spectrometers. Moreover, the distribution of ash deposits on ground greatly depends on the ash cloud altitude, which has an impact on risk assessment and crisis management. Furthermore, a spatially detailed plume height measure could be used as a hint for gas emission rate estimation and for ash plume volume research, which both have an impact on climate research, air quality assessment for aviation and finally for the understanding of the volcanic system itself, as ash/gas emission rates are related to the state of pressurization of the magmatic chamber. Today, the community mainly relies on ground based measurements but often they can be difficult to collect as by definition volcanic areas are dangerous areas (presence of toxic gases) and can be remotely situated and difficult to access. Satellite remote sensing offers a comprehensive and safe way to estimate plume height. Conventional photogrammetric restitution based on satellite imagery fails in precisely retrieving a plume elevation model as the plume's own velocity induces an apparent parallax that adds up to the standard parallax given by the stereoscopic view. Therefore, measurements based on standard satellite photogrammetric restitution do not apply as there is an ambiguity in the measurement of the plume position
Critical behavior of the Gaussian model on fractal lattices in external magnetic field
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
For inhomogeneous lattices we generalize the classical Gaussian model: it is proposed that the Gaussian-type distribution constant and the external magnetic field at site i depend on the coordination number qi of site i, and that the relation bqi/bqj = qi/qj holds among the bqi, where bqi is the Gaussian-type distribution constant of site i. Using the decimation real-space renormalization group following the spin-rescaling method, the critical points and critical exponents of the Gaussian model are calculated on some Koch-type curves and a family of diamond-type hierarchical (DH) lattices. At the critical points, it is found that the nearest-neighbor interaction and the magnetic field of site i can be expressed in the forms K* = bqi/qi and h*qi = 0, respectively. It is also found that most critical exponents depend on the fractal dimensionality of the fractal system. For the family of DH lattices, the results are identical to the exact results on translationally symmetric lattices, and if the fractal dimensionality is df = 4, the Gaussian model and mean-field theories give the same results.
Meris, Ronald G; Barbera, Joseph A
2014-01-01
In a large-scale outdoor, airborne, hazardous materials (HAZMAT) incident, such as ruptured chlorine rail cars during a train derailment, the local Incident Commanders and HAZMAT emergency responders must obtain accurate information quickly to assess the situation and act promptly and appropriately. HAZMAT responders must have a clear understanding of key information and how to integrate it into timely and effective decisions for action planning. This study examined the use of HAZMAT plume modeling as a decision support tool during incident action planning in this type of extreme HAZMAT incident. The concept of situation awareness as presented by Endsley's dynamic situation awareness model contains three levels: perception, comprehension, and projection. It was used to examine the actions of incident managers related to adequate data acquisition, current situational understanding, and accurate situation projection. Scientists and engineers have created software to simulate and predict HAZMAT plume behavior, the projected hazard impact areas, and the associated health effects. Incorporating the use of HAZMAT plume projection modeling into an incident action plan may be a complex process. The present analysis used a mixed qualitative and quantitative methodological approach and examined the use and limitations of a "HAZMAT Plume Modeling Cycle" process that can be integrated into the incident action planning cycle. HAZMAT response experts were interviewed using a computer-based simulation. One of the research conclusions indicated the "HAZMAT Plume Modeling Cycle" is a critical function so that an individual/team can be tasked with continually updating the hazard plume model with evolving data, promoting more accurate situation awareness.
Blind source separation of ship-radiated noise based on generalized Gaussian model
Institute of Scientific and Technical Information of China (English)
Kong Wei; Yang Bin
2006-01-01
When the distributions of the sources cannot be estimated accurately, ICA algorithms fail to separate the mixtures blindly. The generalized Gaussian model (GGM) is introduced into the ICA algorithm because it can easily model the non-Gaussian statistical structure of different source signals: by inferring only one parameter, a wide class of statistical distributions can be characterized. Using a maximum likelihood (ML) approach and natural gradient descent, the learning rules for blind source separation (BSS) based on the GGM are derived. An experiment on ship-radiated noise demonstrates that the GGM can model the distributions of ship-radiated noise and sea noise efficiently, and that the GGM-based learning rules give more successful separation results than several conventional methods, such as high-order cumulants and Gaussian mixture density functions.
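The natural-gradient ML rule this abstract describes can be sketched in a few lines. The toy demo below (function names, step size, iteration count, and the Laplacian test sources are illustrative assumptions, not the paper's setup) uses the generalized Gaussian score function phi(y) = beta * sign(y) * |y|^(beta-1) inside the standard update W <- W + lr * (I - E[phi(y) y^T]) W:

```python
import numpy as np

def gg_score(y, beta=1.0):
    """Score function -d/dy log p(y) for a generalized Gaussian
    p(y) ~ exp(-|y|^beta): beta < 2 gives super-Gaussian sources."""
    return beta * np.sign(y) * np.abs(y) ** (beta - 1.0)

def natural_gradient_ica(X, beta=1.0, lr=0.05, n_iter=1000, seed=0):
    """Separate mixtures X (n_sources x n_samples) with the
    natural-gradient ML rule W <- W + lr * (I - E[phi(y) y^T]) W."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        Y = W @ X
        phi = gg_score(Y, beta)
        W += lr * (np.eye(n) - (phi @ Y.T) / T) @ W
    return W

# toy demo: two super-Gaussian (Laplacian) sources, fixed mixing matrix
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = natural_gradient_ica(A @ S, beta=1.0)
P = W @ A   # should approach a scaled permutation matrix
```

If separation succeeds, each row of P has a single dominant entry and the dominant entries sit in different columns.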
Multiple Human Tracking Using Particle Filter with Gaussian Process Dynamical Model
Directory of Open Access Journals (Sweden)
Wang Jing
2008-01-01
We present a particle filter-based multitarget tracking method incorporating a Gaussian process dynamical model (GPDM) to improve robustness in multitarget tracking. With the particle filter Gaussian process dynamical model (PFGPDM), a high-dimensional target trajectory dataset of the observation space is projected to a low-dimensional latent space in a nonlinear probabilistic manner, which is then used to classify object trajectories, predict the next motion state, and provide Gaussian process dynamical samples for the particle filter. In addition, the Histogram-Bhattacharyya, GMM Kullback-Leibler, and rotation-invariant appearance models are employed and compared in the particle filter as complementary features to the coordinate data used in the GPDM. The simulation results demonstrate that the approach can track more than four targets with reasonable runtime overhead and performance. In addition, it can successfully deal with occasional missing frames and temporary occlusion.
Interaction of pollution plumes and discontinuous fields in atmospheric chemistry models
Santillana, Mauricio; Brenner, Michael P.; Rastigeyev, Yevgeniy; Jacob, Daniel J.
2010-11-01
Atmospheric pollutants originate from concentrated sources such as cities, power plants, and biomass fires. They are injected into the troposphere, where eddies and convective motions of various scales act to shear and dilute the pollution plumes as they are advected downwind. Despite this shear and dilution, observations from aircraft, sondes, and satellites show that pollution plumes in the remote free troposphere can preserve their identity as well-defined layers for a week or more as they are transported on intercontinental scales. This structure cannot be reproduced in the standard Eulerian chemical transport models used for global modeling of tropospheric composition; instead, the plumes dissipate far too quickly. In this work, we study how the structure of plumes is modified when they cross discontinuities arising, for example, from moving day-night boundaries or from abrupt unresolved horizontal temperature changes (e.g. at ocean-land or ocean-ice transitions). Chemical reactions within the plumes depend strongly on photon availability and temperature, and thus discontinuities in these variables lead to discontinuous changes in reaction rate constants.
Column Testing and 1D Reactive Transport Modeling to Evaluate Uranium Plume Persistence Processes
Energy Technology Data Exchange (ETDEWEB)
Johnson, Raymond H. [Navarro Research and Engineering, Inc.; Morrison, Stan [Navarro Research and Engineering, Inc.; Morris, Sarah [Navarro Research and Engineering, Inc.; Tigar, Aaron [Navarro Research and Engineering, Inc.; Dam, William [U.S. Department of Energy, Office of Legacy Management; Dayvault, Jalena [U.S. Department of Energy, Office of Legacy Management
2016-04-26
Motivation for study: natural flushing of contaminants at various U.S. Department of Energy Office of Legacy Management sites is not proceeding as quickly as predicted (plume persistence). Objectives: help determine natural flushing rates using column tests; use 1D reactive transport modeling to better understand the major processes that are creating plume persistence. Approach: core samples from under a former mill tailings area (the tailings have been removed); column leaching using lab-prepared water similar to nearby Gunnison River water; 1D reactive transport modeling to evaluate processes.
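As an illustration of why natural flushing can lag predictions, a minimal 1D advection model with linear-equilibrium sorption (a retardation factor R) can be solved with an explicit upwind scheme: the clean-water front moves at v/R, so a sorbing contaminant flushes R times slower than a conservative tracer. This is only a sketch under assumed parameter values, not the study's reactive transport model, which would include the full geochemistry:

```python
import numpy as np

def flush_1d(c0, v=0.1, R=5.0, L=1.0, nx=100, t_end=25.0):
    """Explicit upwind solve of R*dc/dt + v*dc/dx = 0 on [0, L]:
    advective flushing of a sorbing solute. Retardation R > 1
    slows cleanup; the inflow boundary carries clean water (c = 0)."""
    dx = L / nx
    dt = 0.9 * R * dx / v                   # CFL-stable time step
    c = np.full(nx, c0, dtype=float)
    t = 0.0
    while t < t_end:
        c[1:] -= (v * dt / (R * dx)) * (c[1:] - c[:-1])
        c[0] -= (v * dt / (R * dx)) * c[0]  # clean inflow at x = 0
        t += dt
    return c

# after t_end = 25: the tracer front (v*t = 2.5 L) has long left the
# column, while the sorbing front (v*t/R = 0.5 L) is only halfway through
c_sorbing = flush_1d(c0=1.0, R=5.0)
c_tracer = flush_1d(c0=1.0, R=1.0)
```

The sorbing column still holds roughly half its initial inventory, while the tracer column is effectively clean, which is the qualitative signature of plume persistence under sorption.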
Realmuto, Vincent J.; Berk, Alexander
2016-11-01
We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300 × improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ± 20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ± 50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.
Spectroscopic modeling and characterization of a collisionally confined laser-ablated plasma plume.
Sherrill, M E; Mancini, R C; Bailey, J; Filuk, A; Clark, B; Lake, P; Abdallah, J
2007-11-01
Plasma plumes produced by laser ablation are an established method for manufacturing the high quality stoichiometrically complex thin films used for a variety of optical, photoelectric, and superconducting applications. The state and reproducibility of the plasma close to the surface of the irradiated target plays a critical role in producing high quality thin films. Unfortunately, this dense plasma has historically eluded quantifiable characterization. The difficulty in modeling the plume formation arises in the accounting for the small amount of energy deposited into the target when physical properties of these exotic target materials are not known. In this work we obtain the high density state of the plasma plume through the use of an experimental spectroscopic technique and a custom spectroscopic model. In addition to obtaining detailed temperature and density profiles, issues regarding line broadening and opacity for spectroscopic characterization will be addressed for this unique environment.
Evaluation of smoke dispersion from forest fire plumes using lidar experiments and modelling
Energy Technology Data Exchange (ETDEWEB)
Lavrov, Alexander; Utkin, Andrei B. [INOV-INESC-Inovacao, Rua Alves Redol, 9, 1000-029, Lisbon (Portugal); Vilar, Rui; Fernandes, Armando [Departamento de Engenharia de Materiais, Instituto Superior Tecnico, Av. Rovisco Pais, 1, 1049-001, Lisbon (Portugal)
2006-09-15
The dispersion of forest fire smoke was studied using direct-detection lidar measurements and a Reynolds-averaged Navier-Stokes fluid dynamics model. Comparison between experimental and theoretical results showed that the model adequately describes the influence of the main factors affecting the dispersion of a hot smoke plume in the presence of wind, taking into consideration turbulent mixing, the influence of wind, and the action of buoyancy. The comparison also proved that lidar measurements are an appropriate tool for the semi-qualitative analysis of forest fire smoke plume evolution and for predicting lidar sensitivity and range for reliable smoke detection. It was also demonstrated that analysis of lidar signals using Klett's inversion method allows the internal three-dimensional structure of the smoke plumes to be determined semi-quantitatively and the absolute value of the smoke-particle concentration to be estimated. (author)
Study of the Tagus estuarine plume using coupled hydro and biogeochemical models
Vaz, Nuno; Leitão, Paulo C.; Juliano, Manuela; Mateus, Marcos; Dias, João. Miguel; Neves, Ramiro
2010-05-01
Plumes of buoyant water produced by inflow from rivers and estuaries are common on the continental shelf. Buoyancy associated with estuarine waters is a key mediating factor in the transport and transformation of dissolved and particulate materials in coastal margins. The offshore displacement of the plume is influenced greatly by the local alongshore wind, which tends to advect the plume either offshore or onshore, consistent with Ekman transport. Another factor affecting the propagation of an estuarine plume is the freshwater inflow at the landward boundary. In this paper, a coupled three-dimensional ocean circulation and biogeochemical model with realistic high- and low-frequency forcing is used to gain insight into how the Tagus River plume responds to wind and freshwater discharge during winter and spring. A nesting approach based on the MOHID numerical system was implemented for the Tagus estuary near shelf. Realistic hindcast simulations were performed covering the period from January to June 2007. Model results were evaluated using in-situ data and satellite imagery. The numerical model was implemented using three nesting levels. The model domain includes the whole Portuguese coast, the Tagus estuary near shelf and the Tagus River estuary, using a realistic coastline and bottom topography. River discharge and wind forcing are considered as landward and surface boundary conditions, respectively. The initial ocean stratification is taken from the MERCATOR solution. Ambient shelf conditions include tidal motion. As a preliminary validation, model outputs of salinity and water temperature were compared to available data (January 30th and May 30th, 2007), and only minor differences between model outputs and data were found. On January 30th, outside the estuary, the model results reveal a stratified water column, with salinity stratification of the order of 3-4. The model also reproduces the hydrography for the May 30th observations. In May, near the Tagus mouth
Energy Technology Data Exchange (ETDEWEB)
Hoejstrup, J. [NEG Micon Project Development A/S, Randers (Denmark); Hansen, K.S. [Denmarks Technical Univ., Dept. of Energy Engineering, Lyngby (Denmark); Pedersen, B.J. [VESTAS Wind Systems A/S, Lem (Denmark); Nielsen, M. [Risoe National Lab., Wind Energy and Atmospheric Physics, Roskilde (Denmark)
1999-03-01
The PDFs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour is investigated using data from a large WEB database in order to quantify the amount of non-Gaussianity. Models for non-Gaussian turbulence have been developed, with which artificial turbulence can be generated with specified distributions, spectra and cross-correlations. The artificial time series will then be used in load models, and the resulting loads in the Gaussian and non-Gaussian cases will be compared. (au)
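One generic way to generate a series with a specified spectrum and a heavier-than-Gaussian marginal, in the spirit of the models mentioned, is spectral synthesis followed by rank remapping onto the target distribution (a translation-process construction). The sketch below illustrates that generic idea only; it is not the authors' model, and the spectral shape and the Laplace marginal are assumptions:

```python
import numpy as np

def synth_nongaussian(n=8192, seed=0):
    """Build a series with an assumed low-pass spectral shape and a
    heavy-tailed (Laplace) marginal: spectral synthesis of a near-Gaussian
    series, then rank remapping onto the target marginal."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    # assumed spectral shape: flat below a knee, ~f^(-5/3) roll-off above
    amp[1:] = 1.0 / (1.0 + (f[1:] / 0.05) ** 2) ** (5.0 / 6.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, len(f))
    g = np.fft.irfft(amp * np.exp(1j * phase), n)   # near-Gaussian series
    target = np.sort(rng.laplace(size=n))           # heavy-tailed marginal
    x = np.empty(n)
    x[np.argsort(g)] = target                       # rank remap keeps ordering
    return g, x

def exkurt(y):
    """Excess kurtosis: ~0 for Gaussian, ~3 for a Laplace marginal."""
    y = (y - y.mean()) / y.std()
    return float((y ** 4).mean() - 3.0)

g, x = synth_nongaussian()
```

The remapped series x keeps (approximately) the spectral content of g but acquires the wide tails the abstract describes for accelerations.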
American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution
DEFF Research Database (Denmark)
Stentoft, Lars Peter
In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpre....... In particular, improvements are found when considering the smile in implied standard deviations....
Solid-liquid phase equilibria of the Gaussian core model fluid.
Mausbach, Peter; Ahmed, Alauddin; Sadus, Richard J
2009-11-14
The solid-liquid phase equilibria of the Gaussian core model are determined using the GWTS [J. Ge, G.-W. Wu, B. D. Todd, and R. J. Sadus, J. Chem. Phys. 119, 11017 (2003)] algorithm, which combines equilibrium and nonequilibrium molecular dynamics simulations. This is the first reported use of the GWTS algorithm for a fluid system displaying a reentrant melting scenario. Using the GWTS algorithm, the phase envelope of the Gaussian core model can be calculated more precisely than previously possible. The results for the low-density and the high-density (reentrant melting) sides of the solid state are in good agreement with those obtained by Monte Carlo simulations in conjunction with calculations of the solid free energies. The common point on the Gaussian core envelope, where equal-density solid and liquid phases are in coexistence, could be determined with high precision.
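For reference, the pair potential defining the Gaussian core model is simply phi(r) = eps * exp(-(r/sigma)^2); it is bounded at full overlap, which is what permits the reentrant melting discussed above. A minimal sketch (function names and default parameters are illustrative):

```python
import numpy as np

def gcm_potential(r, eps=1.0, sigma=1.0):
    """Gaussian core model pair potential: finite at r = 0
    (phi(0) = eps), so particles can fully interpenetrate."""
    r = np.asarray(r, dtype=float)
    return eps * np.exp(-(r / sigma) ** 2)

def gcm_force(r, eps=1.0, sigma=1.0):
    """Radial force -dphi/dr; it vanishes at r = 0 (no hard core)."""
    r = np.asarray(r, dtype=float)
    return 2.0 * eps * r / sigma ** 2 * np.exp(-(r / sigma) ** 2)
```

The absence of a hard core means the potential energy cost of overlap saturates at eps, so at high density the system can remelt rather than stay frozen.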
An efficient approach for shadow detection based on Gaussian mixture model
Institute of Scientific and Technical Information of China (English)
韩延祥; 张志胜; 陈芳; 陈恺
2014-01-01
An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, the initial classification of moving shadow pixels and foreground object pixels was performed using color-invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the values of the chromaticity difference or brightness difference can be modeled as a mixture of Gaussians consisting of two density functions. The Gaussian parameter estimation was performed using the EM algorithm, and the estimates were used to obtain a shadow mask according to two constraints. Finally, experiments were carried out. The visual experiment results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (maximum values of 85.79% and 97.56%, respectively) show that the proposed approach achieves a satisfying result with a post-processing step.
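The two-component mixture fit described here is standard EM; a minimal 1D sketch follows. The toy "shadow"/"object" data, the initialization at the sample extremes, and the fixed iteration count are assumptions for illustration, not the paper's exact learning procedure:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component 1D Gaussian mixture by EM; returns
    (weights, means, stds, responsibility of component 1)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])   # deterministic, well-separated init
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per value
        pdf = (w / (sd * np.sqrt(2.0 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted moments
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd, r[:, 1]

# toy chromaticity differences: "shadow" values near -0.5, "object" near 0.8
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-0.5, 0.1, 400), rng.normal(0.8, 0.2, 600)])
w, mu, sd, resp = em_gmm_1d(x)
```

A shadow mask would then come from thresholding the responsibilities (e.g. resp < 0.5), subject to the two constraints the abstract mentions.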
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed with the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome its drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
MacKenzie, Donald; Spears, Taylor
2014-06-01
Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.
Simulating the Black Saturday 2009 UTLS Smoke Plume with an Interactive Composition-Climate Model
Field, R. D.; Luo, M.; Fromm, M. D.; Voulgarakis, A.; Mangeon, S.; Worden, J. R.
2015-12-01
Pyroconvective smoke plumes from large fires can be injected directly into the geostrophic flow and dry air at high altitudes. As a result, they are usually longer-lived, can be transported thousands of kilometers, and can cross the tropopause into the lower stratosphere. Because the emissions pulses are so abrupt relative to other non-volcanic sources, their evolution and decay can be easily separated from background levels of aerosols and trace gases. This makes them interesting natural experiments against which to evaluate models, and understand the fate and effects of surface emissions pulses. We have simulated the well-observed February 2009 Black Saturday smoke plume from southeast Australia using the NASA GISS Earth System Model. To the best of our knowledge, this represents the first simulation of a high altitude smoke plume with a full-complexity composition-climate model. We compared simulated CO to a joint retrieval from the Aura Tropospheric Emission Spectrometer and Microwave Limb Sounder instruments. Using an upper tropospheric injection height, we were able to simulate the plume's eastward transport and ascent over New Zealand, anticyclonic circulation and ascent over the Coral Sea, westward transport in the lower tropical stratosphere, and arrival over Africa at the end of February. Simulations were improved by taking into account hourly variability in emissions associated with extreme fire behavior observed by fire management agencies. We considered a range of emissions amounts, based on different assumptions about which of the Black Saturday fires were explosive enough to inject smoke to high altitudes, and accounting for emissions factor uncertainty. The best agreement between plume concentrations at the end of February was found for the highest emissions scenario. Three days after the fire, there was a linear relationship between emissions amount and plume concentration. Three weeks after the fire, the relationship was non-linear; we discuss
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurement of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate the probability distribution functions of μ as multivariate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. The 1-D likelihood contours of μ were modeled with good success, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package are also discussed. NSF International Research Experiences for Students
Observation and modeling of the evolution of Texas power plant plumes
Directory of Open Access Journals (Sweden)
W. Zhou
2012-01-01
During the second Texas Air Quality Study 2006 (TexAQS II), a full range of pollutants was measured by aircraft in eastern Texas during successive transects of power plant plumes (PPPs). A regional photochemical model is applied to simulate the physical and chemical evolution of the plumes. The observations reveal that SO2 and NOy were rapidly removed from PPPs on a cloudy day but not on the cloud-free days, indicating efficient aqueous processing of these compounds in clouds. The model reasonably represents the observed NOx oxidation and PAN formation in the plumes, but fails to capture the rapid loss of SO2 (0.37 h^-1) and NOy (0.24 h^-1) in some plumes on the cloudy day. Adjustments to the cloud liquid water content (QC) and the default metal concentrations in the cloud module could explain some of the SO2 loss. However, NOy in the model was insensitive to QC. These findings highlight cloud processing as a major challenge for atmospheric models. Model-based estimates of the ozone production efficiency (OPE) in PPPs are 20-50% lower than observation-based estimates for the cloudy day.
Institute of Scientific and Technical Information of China (English)
LIN ZhenQuan; KONG XiangMu; JIN JinShuang; YANG ZhanRu
2001-01-01
The Gaussian spin model with periodic interactions on the diamond-type hierarchical lattices is constructed by generalizing that with uniform interactions on translationally invariant lattices according to a class of substitution sequences. The Gaussian distribution constants and imposed external magnetic fields are also periodic depending on the periodic characteristic of the interaction bonds. The critical behaviors of this generalized Gaussian model in external magnetic fields are studied by the exact renormalization-group approach and spin rescaling method. The critical points and all the critical exponents are obtained. The critical behaviors are found to be determined by the Gaussian distribution constants and the fractal dimensions of the lattices. When all the Gaussian distribution constants are the same, the dependence of the critical exponents on the dimensions of the lattices is the same as that of the Gaussian model with uniform interactions on translationally invariant lattices.
Tests for, origins of, and corrections to non-Gaussian statistics. The dipole-flip model.
Schile, Addison J; Thompson, Ward H
2017-04-21
Linear response approximations are central to our understanding and simulations of nonequilibrium statistical mechanics. Despite the success of these approaches in predicting nonequilibrium dynamics, open questions remain. Laird and Thompson [J. Chem. Phys. 126, 211104 (2007)] previously formalized, in the context of solvation dynamics, the connection between the static linear-response approximation and the assumption of Gaussian statistics. The Gaussian statistics perspective is useful in understanding why linear response approximations are still accurate for perturbations much larger than thermal energies. In this paper, we use this approach to address three outstanding issues in the context of the "dipole-flip" model, which is known to exhibit nonlinear response. First, we demonstrate how non-Gaussian statistics can be predicted from purely equilibrium molecular dynamics (MD) simulations (i.e., without resort to a full nonequilibrium MD as is the current practice). Second, we show that the Gaussian statistics approximation may also be used to identify the physical origins of nonlinear response residing in a small number of coordinates. Third, we explore an approach for correcting the Gaussian statistics approximation for nonlinear response effects using the same equilibrium simulation. The results are discussed in the context of several other examples of nonlinear responses throughout the literature.
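Under the Gaussian-statistics view described here, the linear-response prediction for a nonequilibrium relaxation S(t) is simply the equilibrium time-correlation function C(t) of the relevant coordinate, which can be estimated from a single equilibrium trajectory. The sketch below uses an Ornstein-Uhlenbeck process, an exactly Gaussian process assumed purely as a stand-in for equilibrium MD data, for which C(t) = exp(-t/tau) is known:

```python
import numpy as np

def ou_trajectory(n=200000, dt=0.01, tau=1.0, seed=0):
    """Equilibrium Ornstein-Uhlenbeck trajectory (a Gaussian process),
    standing in for an equilibrium MD time series of the energy gap."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    noise = rng.standard_normal(n - 1) * np.sqrt(1.0 - a * a)
    for i in range(1, n):
        x[i] = a * x[i - 1] + noise[i - 1]
    return x

def autocorr(x, max_lag):
    """Normalized equilibrium time-correlation C(t); under Gaussian
    statistics this is also the linear-response prediction for the
    nonequilibrium relaxation S(t)."""
    x = x - x.mean()
    c = np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) - k)
                  for k in range(max_lag)])
    return c / c[0]

x = ou_trajectory()
C = autocorr(x, 300)   # lags 0 .. 2.99 time units (dt = 0.01)
```

For this exactly Gaussian process the equilibrium estimate C(t) reproduces exp(-t/tau); a real system showing nonlinear response would deviate from its equilibrium C(t), which is the diagnostic the paper exploits.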
Gaskinetic Modeling on Dilute Gaseous Plume Impingement Flows
Directory of Open Access Journals (Sweden)
Chunpei Cai
2016-12-01
This paper briefly reviews recent work on gaseous plume impingement flows. As the major part of this paper, new comprehensive studies are also included on high-speed, collisionless, gaseous, circular jets impinging on a three-dimensional, inclined, diffuse or specular flat plate. Gaskinetic theories are adopted to study the problems, and several crucial geometry-location and velocity-direction relations are used. The final complete results include impingement surface properties such as pressure, shear stress, and heat flux. From these surface properties, averaged coefficients of pressure, friction, heat flux, and moment over the entire flat plate, as well as the distance from the moment center to the flat plate center, are obtained. The final results include accurate integrations involving the geometry and the specific speed ratios, inclination angle, and temperature ratio. Several numerical simulations with the direct simulation Monte Carlo method validate these analytical results, and the results are essentially identical. The gaskinetic method and processes are heuristic and can be used to investigate other external high-Knudsen-number (Kn) impingement flow problems, including the flow field and surface properties for a high-Knudsen-number jet from an exit and a flat plate of arbitrary shape. The results are expected to find many engineering applications, especially in aerospace and space engineering.
Propagation of Gaussian Schell-model Array beams in free space and atmospheric turbulence
Mao, Yonghua; Mei, Zhangrong; Gu, Juguan
2016-12-01
Based on the extended Huygens-Fresnel principle, the evolution of the spectral density and the spectral degree of coherence of the beam produced by a recently introduced class of Gaussian Schell-model Array (GSMA) sources in free space and in atmospheric turbulence is explored and comparatively analyzed. The influence of the fractal constant of the atmospheric power spectrum and of the refractive-index structure constant on the spectral density and the spectral degree of coherence is also analyzed. It is shown that the optical lattice profile is stable when the beams propagate in free space, but the spectral density is eventually suppressed and transformed into a Gaussian profile when the beam passes through the turbulent atmosphere over sufficiently large distances. The far-field distribution of the spectral degree of coherence is eventually transformed into a narrower Gaussian profile than in free space, which means that the degree of spatial coherence degrades.
Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling
Yeom, Seokwon
2017-05-01
Infrared (IR) imaging has the capability to detect thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with Expectation Maximization (EM) method assuming the image histogram follows the Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of the k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to the IR image capturing several humans at night.
A program for computing the exact Fisher information matrix of a Gaussian VARMA model
Klein, A.; Mélard, G.; Niemczyk, J.; Zahaf, T.
2004-01-01
A program in the MATLAB environment is described for computing the exact Fisher information matrix of a Gaussian vector autoregressive moving average (VARMA) model. A computationally efficient procedure is used on the basis of a state space representation. It relies heavily
Gaussian Schell Source as Model for Slit-Collimated Atomic and Molecular Beams
McMorran, Ben
2008-01-01
We show how to make a Gaussian Schell-model (GSM) beam. Then we compare the intensity profile, the transverse coherence width and the divergence angle of a GSM beam with those same properties of a beam that is collimated with two hard-edged slits. This work offers an intuitive way to understand various interferometer designs, and we compare our results with data.
Scintillation reduction in pseudo Multi-Gaussian Schell Model beams in the maritime environment
Nelson, C.; Avramov-Zamurovic, S.; Korotkova, O.; Guth, S.; Malek-Madani, R.
2016-04-01
Irradiance fluctuations of a pseudo Multi-Gaussian Schell Model beam propagating in the maritime environment are explored as a function of spatial light modulator cycling rate and estimated atmospheric turnover rate. Analysis of the data demonstrates a strong negative correlation between the scintillation index of received optical intensity and cycling speed for the estimated atmospheric turnover rate.
Gaussian wave packet dynamics and the Landau-Zener model for nonadiabatic transitions
DEFF Research Database (Denmark)
Henriksen, Niels Engholm
1992-01-01
The Landau-Zener model for transitions between two linear diabatic potentials is examined. We derive, in the weak-coupling limit, an expression for the transition probability where the classical trajectory and the constant velocity approximations are abandoned and replaced by quantum dynamics...... described by a Gaussian wavepacket. A remarkable agreement with the results from the simple Landau-Zener formula is observed....
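For comparison, the "simple Landau-Zener formula" referenced above can be evaluated directly. The sketch below uses illustrative values in arbitrary units (hbar = 1) and is not tied to the paper's parameters.

```python
import math

HBAR = 1.0  # arbitrary units; all numbers below are illustrative

def landau_zener_transition(v12, slope1, slope2, velocity, hbar=HBAR):
    """Probability of a transition between the two diabatic states for a
    single passage through the crossing (standard Landau-Zener formula):
    P = 1 - exp(-2*pi*Gamma), Gamma = V12^2 / (hbar * v * |F1 - F2|)."""
    gamma = v12 ** 2 / (hbar * velocity * abs(slope1 - slope2))
    return 1.0 - math.exp(-2.0 * math.pi * gamma)

# weak-coupling limit probed in the paper: P ~ 2*pi*Gamma for small Gamma
p = landau_zener_transition(v12=0.001, slope1=0.5, slope2=-0.5, velocity=1.0)
```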
Infinitely many states and stochastic symmetry in a Gaussian Potts-Hopfield model
van Enter, ACD; Schaap, HG
2002-01-01
We study a Gaussian Potts-Hopfield model. Whereas for Ising spins and two disorder variables per site the chaotic pair scenario is realized, we find that for q-state Potts spins q (q - 1)-tuples occur. Beyond the breaking of a continuous stochastic symmetry, we study the fluctuations and obtain the
Modeling non-Gaussian time-varying vector autoregressive process
National Aeronautics and Space Administration — We present a novel and general methodology for modeling time-varying vector autoregressive processes which are widely used in many areas such as modeling of chemical...
Models of the SL9 Impacts I. Ballistic Monte-Carlo Plume
Harrington, J; Harrington, Joseph; Deming, Drake
2001-01-01
We model the Comet Shoemaker-Levy 9 - Jupiter impact plumes to calculate synthetic plume views, atmospheric infall fluxes, and debris patterns. Our plume is a swarm of ballistic particles with one of several mass-velocity distributions (MVDs). The swarm is ejected instantaneously and uniformly into a cone from its apex. Upon falling back to the ejection altitude, particles slide with horizontal deceleration following one of several schemes. The model ignores hydrodynamic and Coriolis effects. We adjust plume tilt, opening angle, and minimum velocity, and choose MVD and sliding schemes, to create impact patterns that match observations. Our best match uses the power-law MVD from the numerical impact model of Zahnle and Mac Low, with velocity cutoffs at 4.5 and 11.8 km/sec, a cone opening angle of 75 degrees, a cone tilt of 30 degrees from vertical, and a constant sliding deceleration of 1.74 m/sec^2. A mathematically derived feature of Zahnle and Mac Low's published cumulative MVD is a thin shell of mass at the maximum ...
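The ballistic part of such a plume model reduces to projectile kinematics. A sketch using the abstract's velocity cutoffs and half the 75-degree opening angle; the flat-planet, drag-free treatment mirrors the paper's stated simplifications, while the Jovian gravity value is our own approximate assumption.

```python
import math

G_JUP = 24.79  # approximate Jovian gravitational acceleration, m/s^2

def ballistic_range(v, zenith_deg, g=G_JUP):
    """Horizontal distance travelled by a particle launched at speed v (m/s)
    at the given angle from vertical, landing back at the launch altitude;
    flat planet, no drag, as in the ballistic-swarm idealisation."""
    th = math.radians(zenith_deg)
    vz, vr = v * math.cos(th), v * math.sin(th)
    t_flight = 2.0 * vz / g
    return vr * t_flight

# velocity cutoffs quoted in the abstract (4.5 and 11.8 km/s)
r_min = ballistic_range(4500.0, 37.5)
r_max = ballistic_range(11800.0, 37.5)
```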
Strutzenberg, L. L.; Dougherty, N. S.; Liever, P. A.; West, J. S.; Smith, S. D.
2007-01-01
This paper details advances being made in the development of Reynolds-Averaged Navier-Stokes numerical simulation tools, models, and methods for the integrated Space Shuttle Vehicle at launch. The conceptual model and modeling approach described includes the development of multiple computational models to appropriately analyze the potential debris transport for critical debris sources at Lift-Off. The conceptual model described herein involves the integration of propulsion analysis for the nozzle/plume flow with the overall 3D vehicle flowfield at Lift-Off. Debris Transport Analyses are being performed using the Shuttle Lift-Off models to assess the risk to the vehicle from Lift-Off debris and appropriately prioritized mitigation of potential debris sources to continue to reduce vehicle risk. These integrated simulations are being used to evaluate plume-induced debris environments where the multi-plume interactions with the launch facility can potentially accelerate debris particles toward the vehicle.
Ma, Denglong; Zhang, Zaoxiao
2016-07-05
Gas dispersion models are important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF) networks, back propagation (BP) neural networks and support vector machine (SVM) models can be used for gas dispersion prediction. However, the predictions from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
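The "classic Gaussian model" these hybrid Gaussian-MLA models build on is the standard Gaussian plume equation with a ground-reflection image source. A minimal sketch with Briggs-type neutral-stability dispersion fits; the coefficient values and scenario numbers are illustrative assumptions, not the paper's configuration.

```python
import math

def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.0001, c=0.06, d=0.0015):
    """Classic Gaussian plume concentration (g/m^3) at (x, y, z) metres
    downwind of a source of effective height H (m), emission rate Q (g/s),
    wind speed u (m/s). Dispersion fits of Briggs form (illustrative
    neutral-stability coefficients):
        sigma_y = a*x/sqrt(1 + b*x),  sigma_z = c*x/sqrt(1 + d*x)"""
    sy = a * x / math.sqrt(1.0 + b * x)
    sz = c * x / math.sqrt(1.0 + d * x)
    cross = math.exp(-y * y / (2.0 * sy * sy))
    # ground reflection: image source at height -H
    vert = (math.exp(-(z - H) ** 2 / (2.0 * sz * sz))
            + math.exp(-(z + H) ** 2 / (2.0 * sz * sz)))
    return Q / (2.0 * math.pi * u * sy * sz) * cross * vert

# ground-level centreline concentration 500 m downwind of a 30 m source
c_ground = gaussian_plume(x=500.0, y=0.0, z=0.0, Q=10.0, u=4.0, H=30.0)
```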
Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery
Zhang, Kun; Janzing, Dominik
2012-01-01
In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, generally in such models the observation noise is assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With the Gaussian process prior put on a particular transformation of the latent nonlinear functions, instead of the original ones, the algorithm for IGPLVM involves almost the same computational loads as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimat...
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime.
Krems, Roman; Cui, Jie; Li, Zhiying
2016-05-01
We show how statistical learning techniques based on kriging (Gaussian Process regression) can be used for improving the predictions of classical and/or quantum scattering theory. In particular, we show how Gaussian Process models can be used for: (i) efficient non-parametric fitting of multi-dimensional potential energy surfaces without the need to fit ab initio data with analytical functions; (ii) obtaining scattering observables as functions of individual PES parameters; (iii) using classical trajectories to interpolate quantum results; (iv) extrapolation of scattering observables from one molecule to another; (v) obtaining scattering observables with error bars reflecting the inherent inaccuracy of the underlying potential energy surfaces. We argue that the application of Gaussian Process models to quantum scattering calculations may potentially elevate the theoretical predictions to the same level of certainty as the experimental measurements and can be used to identify the role of individual atoms in determining the outcome of collisions of complex molecules. We will show examples and discuss the applications of Gaussian Process models to improving the predictions of scattering theory relevant for the cold molecules research field. Work supported by NSERC of Canada.
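Point (i) — kriging a potential energy surface without an analytic functional form — reduces in one dimension to standard Gaussian process interpolation. The Morse-like toy curve, RBF kernel and hyperparameters below are our own illustrative assumptions, not the authors' systems.

```python
import numpy as np

def rbf(a, b, amp=1.0, ell=0.5):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    return amp ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# toy "ab initio" points: a Morse-like 1-D potential curve (arbitrary units)
r_train = np.linspace(0.8, 4.0, 12)
V = lambda r: (1.0 - np.exp(-1.5 * (r - 1.2))) ** 2 - 1.0
y = V(r_train)

# GP (kriging) interpolation: no analytic functional form is assumed
K = rbf(r_train, r_train) + 1e-10 * np.eye(len(r_train))  # jitter for stability
alpha = np.linalg.solve(K, y)
r_test = np.linspace(0.9, 3.9, 200)
V_pred = rbf(r_test, r_train) @ alpha
```

The same conditional-mean formula generalises directly to multi-dimensional surfaces, which is the setting the abstract describes.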
Physical conditions for sources radiating a cosh-Gaussian model beam
Institute of Scientific and Technical Information of China (English)
LI Jia
2011-01-01
Based on the coherence theory of diffracted optical fields and the model for partially coherent beams, analytical expressions for the cross-spectral density and the irradiance spectral density in the far zone are derived. Using the theoretical model of radiation from secondary planar sources, the physical conditions for sources generating a cosh-Gaussian (CHG) beam are investigated. Analytical results demonstrate that the parametric conditions depend strongly on the coherence properties of the source. When the source plane is almost fully coherent, the conditions are the same as those for fundamental Gaussian beams; when the source plane is partially coherent or almost incoherent, the conditions are the same as those for Gaussian Schell-model beams. The results also indicate that varying the cosine parameters has no influence on the conditions. Our results may find application in, for example, the modulation of cosh-Gaussian beams and the design of source beam parameters.
Chaudhari, M I; Paulaitis, M E
2014-01-01
Parallel-tempering MD results for a CH$_3$(CH$_2$-O-CH$_2$)$_m$CH$_3$ chain in water are exploited as a database for analysis of collective structural characteristics of the PEO globule, with the goal of defining models permitting statistical thermodynamic analysis of dispersants of Corexit type. The chain structure factor, relevant to neutron scattering from a deuterated chain in neutral water, is considered specifically. The traditional continuum-Gaussian structure factor is inconsistent with the simple $k \rightarrow \infty$ behavior, but we consider a discrete-Gaussian model that does achieve that consistency. Shifting-and-scaling the discrete-Gaussian model helps to identify the low-$k$ to high-$k$ transition near $k \approx 2\pi/0.6 \mathrm{nm}$ when an empirically matched number of Gaussian links is about one-third of the total number of effective-atom sites. This short distance-scale boundary of 0.6 nm is directly verified with the $r$-space distributions, and this distance is thus identified with a nat...
Critical behavior of the Gaussian model on fractal lattices in external magnetic field
Institute of Scientific and Technical Information of China (English)
孔祥木; 林振权; 朱建阳
2000-01-01
For inhomogeneous lattices we generalize the classical Gaussian model: it is proposed that the Gaussian-type distribution constant and the external magnetic field of site i in this model depend on the coordination number q_i of site i, and that the relation b_{q_i}/b_{q_j} = q_i/q_j holds among the b_{q_i}, where b_{q_i} is the Gaussian-type distribution constant of site i. Using the decimation real-space renormalization group following the spin-rescaling method, the critical points and critical exponents of the Gaussian model are calculated on some Koch-type curves and a family of diamond-type hierarchical (or DH) lattices. At the critical points, it is found that the nearest-neighbour interaction and the magnetic field of site i can be expressed in the form K' = b_{q_i}/q_i and h_{q_i} = 0, respectively. It is also found that most critical exponents depend on the fractal dimensionality of the fractal system. For the family of DH lattices, the results are identical with the exact results on translation-symmetric lattices,
Discussion: the design and analysis of the Gaussian process model
Energy Technology Data Exchange (ETDEWEB)
Williams, Brian J [Los Alamos National Laboratory; Loeppky, Jason L [UNIV OF BC-OKANAGAN
2008-01-01
The investigation of complex physical systems utilizing sophisticated computer models has become commonplace with the advent of modern computational facilities. In many applications, experimental data on the physical systems of interest is extremely expensive to obtain and hence is available in limited quantities. The mathematical systems implemented by the computer models often include parameters having uncertain values. This article provides an overview of statistical methodology for calibrating uncertain parameters to experimental data. This approach assumes that prior knowledge about such parameters is represented as a probability distribution, and the experimental data is used to refine our knowledge about these parameters, expressed as a posterior distribution. Uncertainty quantification for computer model predictions of the physical system are based fundamentally on this posterior distribution. Computer models are generally not perfect representations of reality for a variety of reasons, such as inadequacies in the physical modeling of some processes in the dynamic system. The statistical model includes components that identify and adjust for such discrepancies. A standard approach to statistical modeling of computer model output for unsampled inputs is introduced for the common situation where limited computer model runs are available. Extensions of the statistical methods to functional outputs are available and discussed briefly.
Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model
Vinken, G.C.M.; Boersma, K.F.; Jacob, D.J.; Meijer, E.W.
2011-01-01
We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being released ...
Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model
Meijer, E.W.; Vinken, G.C.M.; Boersma, K.F.; Jacob, D.J.
2011-01-01
We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being released ...
Modeling of Laser Vaporization and Plume Chemistry in a Boron Nitride Nanotube Production Rig
Gnoffo, Peter A.; Fay, Catharine C.
2012-01-01
Flow in a pressurized vapor condensation (PVC) boron nitride nanotube (BNNT) production rig is modeled. A laser provides a thermal energy source at the tip of a boron fiber bundle in a high-pressure nitrogen chamber, causing a plume of boron-rich gas to rise. The buoyancy-driven flow is modeled as a mixture of thermally perfect gases (B, B2, N, N2, BN) in either thermochemical equilibrium or chemical nonequilibrium, assuming steady-state melt and vaporization from a 1 mm radius spot at the axis of an axisymmetric chamber. The simulation is intended to define the macroscopic thermochemical environment from which boron-rich species, including nanotubes, condense out of the plume. Simulations indicate a high-temperature environment (T > 4400 K) at elevated pressures within 1 mm of the surface, sufficient to dissociate molecular nitrogen and form BN at the base of the plume. Modifications to Program LAURA, a finite-volume based solver for hypersonic flows including coupled radiation and ablation, are described to enable this simulation. Simulations indicate that high-pressure synthesis conditions enable formation of BN vapor in the plume, which may serve to enhance formation of exceptionally long nanotubes in the PVC process.
Numerical Modeling of Water Thermal Plumes Emitted by Thermal Power Plants
Directory of Open Access Journals (Sweden)
Azucena Durán-Colmenares
2016-10-01
This work focuses on the study of the thermal dispersion of plumes emitted by power plants into the sea. Wastewater discharge from power stations causes impacts that require investigation or monitoring. A study characterizing the physical effects of thermal plumes in the sea is carried out here through numerical modeling and field measurements. The case study is the thermal discharges of the Presidente Adolfo López Mateos Power Plant, located in Veracruz, on the coast of the Gulf of Mexico. This plant is managed by the Federal Electricity Commission of Mexico. The physical effects of such plumes are related to the increase in seawater temperature caused by the plant's hot water discharge. We focus on the implementation, calibration, and validation of the Delft3D-FLOW model, which solves the shallow-water equations. The numerical simulations consider a critical scenario in which meteorological and oceanographic parameters are taken into account to reproduce the proper physical conditions of the environment. The results show a local physical effect of the thermal plumes within the study zone, given the predominantly strong wind conditions of the scenario under study.
Early Earth plume-lid tectonics: A high-resolution 3D numerical modelling approach
Fischer, R.; Gerya, T.
2016-10-01
Geological-geochemical evidence points towards a higher mantle potential temperature and a different type of tectonics (global plume-lid tectonics) in the early Earth (>3.2 Ga) compared to the present day (global plate tectonics). In order to investigate the tectono-magmatic processes associated with plume-lid tectonics and crustal growth under hotter mantle temperature conditions, we conduct a series of 3D high-resolution magmatic-thermomechanical models with the finite-difference code I3ELVIS. No external plate tectonic forces are applied, to isolate 3D effects of the various plume-lithosphere and crust-mantle interactions. Results of the numerical experiments show two distinct phases in coupled crust-mantle evolution: (1) a longer (80-100 Myr) and relatively quiet 'growth phase', marked by growth of crust and lithosphere, followed by (2) a short (∼20 Myr) and catastrophic 'removal phase', in which unstable parts of the crust and mantle lithosphere are removed by eclogitic dripping and later delamination. This modelling suggests that the early Earth plume-lid tectonic regime followed a pattern of episodic growth and removal, also called episodic overturn, with a periodicity of ∼100 Myr.
Graphical Gaussian models with edge and vertex symmetries
DEFF Research Database (Denmark)
Højsgaard, Søren; Lauritzen, Steffen L
2008-01-01
study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions for restrictions on the concentration and correlation matrices being equivalent. This is for example the case when symmetries are generated by permutation...... of variable labels. For such models a particularly simple maximization of the likelihood function is available...
DEFF Research Database (Denmark)
Højstrup, Jørgen; Hansen, Kurt S.; Pedersen, Bo Juul
1999-01-01
The pdfs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour has been investigated using data from a large WEB database in order to quantify the amount of non-Gaussianity. Models for non-...
Bayes factor between Student t and Gaussian mixed models within an animal breeding context
Directory of Open Access Journals (Sweden)
García-Cortés Luis
2008-07-01
The implementation of Student t mixed models in animal breeding has been suggested as a useful statistical tool to effectively mute the impact of preferential treatment or other sources of outliers in field data. Nevertheless, these additional sources of variation are undeclared, and we do not know whether a Student t mixed model is required or whether a standard, less parameterized, Gaussian mixed model would be sufficient to serve the intended purpose. Within this context, our aim was to develop the Bayes factor between two nested models that differ only in a bounded variable, in order to easily compare a Student t and a Gaussian mixed model. It is important to highlight that the Student t density converges to a Gaussian process when the degrees of freedom tend to infinity. The two models can then be viewed as nested models that differ in terms of degrees of freedom. The Bayes factor can be easily calculated from the output of a Markov chain Monte Carlo sampling of the complex model (the Student t mixed model). The performance of this Bayes factor was tested under simulation and on a real dataset, using the deviance information criterion (DIC) as the standard reference criterion. The two statistical tools showed similar trends over the parameter space, although the Bayes factor appeared to be the more conservative. There was considerable evidence favoring the Student t mixed model for datasets simulated under Student t processes with limited degrees of freedom, and moderate advantages associated with using the Gaussian mixed model when working with datasets simulated with 50 or more degrees of freedom. For the analysis of real data (weight of Pietrain pigs at six months), both the Bayes factor and DIC slightly favored the Student t mixed model, with a reduced incidence of outlier individuals in this population.
Non-Gaussianity in single field models without slow-roll
Noller, Johannes
2011-01-01
We investigate non-Gaussianity in general single field models without assuming slow-roll conditions or exact scale-invariance of the scalar power spectrum. The models considered include general single field inflation (e.g. DBI and canonical inflation) as well as bimetric models. We compute the full non-Gaussian amplitude, its size f_NL, its shape, and its running with scale n_NG. In doing so we show that observational constraints allow significant violations of slow-roll conditions, and we derive explicit bounds on slow-roll parameters for fast-roll single field scenarios. A variety of new observational signatures is found for models respecting these bounds. We also explicitly construct concrete model implementations giving rise to this new phenomenology.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all
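The core trick described above — placing the GP prior on the rate and modelling the record as its integral — becomes a linear-Gaussian update once time is discretised. A numpy sketch on synthetic data; the grid, kernel hyperparameters, noise level and linearly increasing "true" rate are all invented for illustration.

```python
import numpy as np

def rbf(t1, t2, amp=2.0, ell=20.0):
    """Squared-exponential kernel for the GP prior on the rate."""
    return amp ** 2 * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / ell ** 2)

t = np.linspace(0.0, 100.0, 101)                 # years
dt = t[1] - t[0]
true_rate = 0.5 + 0.02 * t                       # mm/yr, synthetic
A = np.tril(np.ones((len(t), len(t)))) * dt      # discrete integration operator
rng = np.random.default_rng(0)
sea = A @ true_rate + rng.normal(0.0, 0.5, len(t))  # noisy "tide-gauge" record

# GP prior on the rate r; sea = A r + noise is linear-Gaussian, so the
# posterior mean of the rate is a standard conditional expectation
K = rbf(t, t)
S = A @ K @ A.T + 0.5 ** 2 * np.eye(len(t))
rate_post = K @ A.T @ np.linalg.solve(S, sea)
```

The posterior rate is recovered directly, without differentiating a fitted sea-level curve; the EIV temporal uncertainty and GIA covariance of the full model are omitted from this sketch.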
Waveform model of a laser altimeter for an elliptical Gaussian beam.
Yue, Ma; Mingwei, Wang; Guoyuan, Li; Xiushan, Lu; Fanlin, Yang
2016-03-10
The current waveform model of a laser altimeter is based on the fundamental-mode Gaussian laser beam, whose cross section is a circular spot, whereas some of the cross sections of the Geoscience Laser Altimeter System lasers are closer to elliptical spots. Based on the expression for the elliptical Gaussian beam and the waveform theory of laser altimeters, the primary parameters of an echo waveform were derived. In order to examine the deduced expressions, a laser altimetry waveform simulator and waveform processing software were programmed and improved for the case of an elliptical Gaussian beam. The results show that all the biases between the theoretical and simulated waveforms were less than 0.5%, and that the derived model for an elliptical spot is universal and can also be used for the conventional circular spot. The shape of the waveform is influenced by the ellipticity of the laser spot, the target slope, and the "azimuth angle" between the major axis and the slope direction. This article provides the theoretical waveform basis of a laser altimeter under an elliptical Gaussian beam.
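The reported dependence of waveform width on spot ellipticity, target slope, and the azimuth angle between the major axis and the slope direction can be captured by a small Gaussian-convolution sketch. The formula and numbers below are our own illustrative simplification, not the paper's derived model: the transmitted pulse width combines in quadrature with the slope-induced delay spread of the footprint projected onto the slope direction.

```python
import math

C = 299792458.0  # speed of light, m/s

def echo_rms_width(sigma_pulse, sig_major, sig_minor, slope_deg, azimuth_deg):
    """RMS width (s) of the return pulse for an elliptical Gaussian
    footprint on a uniformly sloped target: sigma_pulse is the transmitted
    pulse RMS width (s); sig_major/sig_minor are the footprint spreads (m)
    along the ellipse axes; azimuth_deg is the angle between the major
    axis and the slope direction."""
    a = math.radians(azimuth_deg)
    # footprint spread (m) projected onto the slope direction
    sig_proj2 = (sig_major * math.cos(a)) ** 2 + (sig_minor * math.sin(a)) ** 2
    spread_t = 2.0 * math.tan(math.radians(slope_deg)) / C * math.sqrt(sig_proj2)
    return math.sqrt(sigma_pulse ** 2 + spread_t ** 2)

w0 = echo_rms_width(3e-9, 35.0, 25.0, slope_deg=10.0, azimuth_deg=0.0)
w90 = echo_rms_width(3e-9, 35.0, 25.0, slope_deg=10.0, azimuth_deg=90.0)
# the waveform is widest when the major axis lies along the slope direction
```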
Perturbative corrections for approximate inference in gaussian latent variable models
DEFF Research Database (Denmark)
Opper, Manfred; Paquet, Ulrich; Winther, Ole
2013-01-01
but intractable correction, and can be applied to the model's partition function and other moments of interest. The correction is expressed over the higher-order cumulants which are neglected by EP's local matching of moments. Through the expansion, we see that EP is correct to first order. By considering higher...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
Modelling nanoparticles formation in the plasma plume induced by nanosecond pulsed lasers
Energy Technology Data Exchange (ETDEWEB)
Girault, M. [Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), UMR 6303 CNRS-Universite de Bourgogne, 9 Av. A. Savary, BP 47 870, F-21078 Dijon Cedex (France); Centre Lasers Intenses et Applications (CELIA), Universite de Bordeaux 1, 43 rue Pierre Noailles, Talence (France); Hallo, L., E-mail: hallo@celia.u-bordeaux1.fr [CEA CESTA, 15 Avenue des Sablieres CS 60001, 33116 Le Barp Cedex (France); Centre Lasers Intenses et Applications (CELIA), Universite de Bordeaux 1, 43 rue Pierre Noailles, Talence (France); Lavisse, L.; Lucas, M.C. Marco de [Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), UMR 6303 CNRS-Universite de Bourgogne, 9 Av. A. Savary, BP 47 870, F-21078 Dijon Cedex (France); Hebert, D. [CEA CESTA, 15 Avenue des Sablieres CS 60001, 33116 Le Barp Cedex (France); Potin, V.; Jouvard, J.-M. [Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), UMR 6303 CNRS-Universite de Bourgogne, 9 Av. A. Savary, BP 47 870, F-21078 Dijon Cedex (France)
2012-09-15
Highlights: • Nanoparticle spatial localization in the plume induced by a pulsed laser. • Plasma plume obtained by laser irradiation. • Particle and debris formation. • Powder generation. • Conditions of formation. - Abstract: Nanoparticle formation in a laser-induced plasma plume in ambient air has been investigated using numerical simulations and physical models. For high irradiances, or for ultrashort laser pulses, nanoparticles are formed by condensation, as fine powders, in the expanding plasma at very high temperature-pressure pairs. At lower irradiances, or with nanosecond laser pulses, other thermodynamic paths are possible, which cross the liquid-gas transition curve while the laser is still heating the target and the induced plasma. In this work, we explore the growth of nanoparticles in the plasma plume induced by nanosecond pulsed lasers as a function of the laser irradiance. The influence of the ambient gas has also been investigated.
A Framework for Non-Gaussian Signal Modeling and Estimation
1999-06-01
Energy Technology Data Exchange (ETDEWEB)
Dunn, W.E.; Policastro, A.J.; Paddock, R.A.
1975-08-01
This report evaluates mathematical models that may be used to predict the flow and temperature distributions resulting from heated surface discharges from power-plant outfalls. Part One discusses the basic physics of surface-plume dispersion and provides a critical review of 11 of the most popular and promising plume models developed to predict the near- and complete-field plume. Part Two compares predictions from the models to prototype data, laboratory data, or both. Part Two also provides a generic discussion of the issues surrounding near- and complete-field modeling. The principal conclusion of the report is that the available models, in their present stage of development, may be used to give only general estimates of plume characteristics; precise predictions are not currently possible. The Shirazi-Davis and Pritchard (No. 1) models appear superior to the others tested and are capable of correctly predicting general plume characteristics. (The predictions show roughly factor-of-two accuracy in centerline distance to a given isotherm, factor-of-two accuracy in plume width, and factor-of-five accuracy in isotherm areas.) The state of the art can best be improved by pursuing basic laboratory studies of plume dispersion along with further development of numerical-modeling techniques.
Modelling Inverse Gaussian Data with Censored Response Values: EM versus MCMC
Directory of Open Access Journals (Sweden)
R. S. Sparks
2011-01-01
Full Text Available Low detection limits are common when measuring environmental variables. Building models from data containing low or high detection limits without adjusting for the censoring produces biased models. This paper offers approaches to estimating an inverse Gaussian distribution when some of the data are censored because of low or high detection limits. Adjustments for the censoring can be made, for between 2% and 20% censoring, using either the EM algorithm or MCMC. This paper compares these approaches.
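The censored-likelihood adjustment described above can be sketched directly. This is a minimal, illustrative fit: the parameter values, the ~10% censoring fraction, and the use of a direct Nelder-Mead maximum-likelihood fit in place of the paper's EM/MCMC schemes are all assumptions of the sketch. The key idea is that a censored observation contributes P(X < LOD) to the likelihood instead of a density value.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
mu_true, lam_true = 2.0, 3.0             # IG(mean, shape) parameters (assumed)
# scipy's invgauss(mu=m/lam, scale=lam) is the classical IG(mean=m, shape=lam)
x = stats.invgauss.rvs(mu=mu_true / lam_true, scale=lam_true, size=500,
                       random_state=rng)
lod = np.quantile(x, 0.10)               # detection limit giving ~10% censoring
censored = x < lod                       # values below LOD are only known as "< LOD"

def neg_loglik(log_params):
    m, lam = np.exp(log_params)          # log-parameterization keeps both positive
    d = stats.invgauss(mu=m / lam, scale=lam)
    ll = d.logpdf(x[~censored]).sum()            # fully observed values
    ll += censored.sum() * np.log(d.cdf(lod))    # censored values: P(X < LOD)
    return -ll

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
mu_hat, lam_hat = np.exp(res.x)
```

The same likelihood decomposition is what an EM algorithm exploits via expected sufficient statistics; an MCMC approach would instead sample the censored values as latent variables.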
Versatility and robustness of Gaussian random fields for modelling random media
Quintanilla, John A.; Chen, Jordan T.; Reidy, Richard F.; Allen, Andrew J.
2007-06-01
One of the authors (JAQ) has recently introduced a method of modelling random materials using excursion sets of Gaussian random fields. This method uses convex quadratic programming to find the optimal admissible field autocorrelation function, providing both theoretical and computational advantages over other techniques such as simulated annealing. In this paper, we discuss the application of this algorithm to model various aerogel systems given small-angle neutron scattering data. We also present new results concerning the robustness of this method.
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
Bridges, James
2015-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces, it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve two or more co-annular streams, for which standard models of the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to those of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV (Particle Image Velocimetry) data from a number of multi-stream jets are analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a 'best' approximation determined and the shortcomings of the model highlighted.
Directory of Open Access Journals (Sweden)
Yli-Harja Olli
2009-05-01
Full Text Available Abstract Background Cluster analysis has become a standard computational method for gene function discovery as well as for more general exploratory data analysis. A number of different approaches have been proposed for this purpose, among which various mixture models provide a principled probabilistic framework. Cluster analysis is increasingly often supplemented with multiple data sources, and these heterogeneous information sources should be used as efficiently as possible. Results This paper presents a novel beta-Gaussian mixture model (BGMM) for clustering genes based on Gaussian-distributed and beta-distributed data. The proposed BGMM can be viewed as a natural extension of the beta mixture model (BMM) and the Gaussian mixture model (GMM). The proposed BGMM method differs from other mixture-model-based methods in its integration of two different data types into a single, unified probabilistic modeling framework, which makes more efficient use of multiple data sources than methods that analyze each data source separately. Moreover, BGMM provides an exceedingly flexible modeling framework, since many data sources can be modeled as Gaussian or beta distributed random variables, and it can also be extended to integrate data with other parametric distributions, which adds even more flexibility to this model-based clustering framework. We developed three types of estimation algorithms for BGMM: the standard expectation maximization (EM) algorithm, an approximated EM, and a hybrid EM. We propose to tackle the model selection problem with well-known model selection criteria, testing the Akaike information criterion (AIC), a modified AIC (AIC3), the Bayesian information criterion (BIC), and the integrated classification likelihood-BIC (ICL-BIC). Conclusion Performance tests with simulated data show that combining two different data sources into a single mixture joint model greatly improves the clustering
Crevillén-García, D.; Wilkinson, R. D.; Shah, A. A.; Power, H.
2017-01-01
Numerical groundwater flow and dissolution models of physico-chemical processes in deep aquifers are usually subject to uncertainty in one or more of the model input parameters. This uncertainty is propagated through the equations and needs to be quantified and characterised in order to rely on the model outputs. In this paper we present a Gaussian process emulation method as a tool for performing uncertainty quantification in mathematical models for convection and dissolution processes in porous media. One of the advantages of this method is its ability to significantly reduce the computational cost of an uncertainty analysis, while yielding accurate results, compared to classical Monte Carlo methods. We apply the methodology to a model of convectively-enhanced dissolution processes occurring during carbon capture and storage. In this model, the Gaussian process methodology fails due to the presence of multiple branches of solutions emanating from a bifurcation point, i.e., two equilibrium states exist rather than one. To overcome this issue we use a classifier as a precursor to the Gaussian process emulation, after which we are able to successfully perform a full uncertainty analysis in the vicinity of the bifurcation point.
In situ measurements and modeling of reactive trace gases in a small biomass burning plume
Müller, Markus; Anderson, Bruce E.; Beyersdorf, Andreas J.; Crawford, James H.; Diskin, Glenn S.; Eichler, Philipp; Fried, Alan; Keutsch, Frank N.; Mikoviny, Tomas; Thornhill, Kenneth L.; Walega, James G.; Weinheimer, Andrew J.; Yang, Melissa; Yokelson, Robert J.; Wisthaler, Armin
2016-03-01
An instrumented NASA P-3B aircraft was used for airborne sampling of trace gases in a plume that had emanated from a small forest understory fire in Georgia, USA. The plume was sampled at its origin to derive emission factors and followed ~13.6 km downwind to observe chemical changes during the first hour of atmospheric aging. The P-3B payload included a proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS), which measured non-methane organic gases (NMOGs) at unprecedented spatiotemporal resolution (10 m spatial/0.1 s temporal). Quantitative emission data are reported for CO2, CO, NO, NO2, HONO, NH3, and 16 NMOGs (formaldehyde, methanol, acetonitrile, propene, acetaldehyde, formic acid, acetone plus its isomer propanal, acetic acid plus its isomer glycolaldehyde, furan, isoprene plus isomeric pentadienes and cyclopentene, methyl vinyl ketone plus its isomers crotonaldehyde and methacrolein, methylglyoxal, hydroxy acetone plus its isomers methyl acetate and propionic acid, benzene, 2,3-butanedione, and 2-furfural) with molar emission ratios relative to CO larger than 1 ppbV ppmV-1. Formaldehyde, acetaldehyde, 2-furfural, and methanol dominated NMOG emissions. No NMOGs with more than 10 carbon atoms were observed at mixing ratios larger than 50 pptV ppmV-1 CO. Downwind plume chemistry was investigated using the observations and a 0-D photochemical box model simulation. The model was run on a nearly explicit chemical mechanism (MCM v3.3) and initialized with measured emission data. Ozone formation during the first hour of atmospheric aging was well captured by the model, with carbonyls (formaldehyde, acetaldehyde, 2,3-butanedione, methylglyoxal, 2-furfural) in addition to CO and CH4 being the main drivers of peroxy radical chemistry. The model also accurately reproduced the sequestration of NOx into peroxyacetyl nitrate (PAN) and the OH-initiated degradation of furan and 2-furfural at an average OH concentration of 7.45 ± 1.07 × 10^6 cm-3 in the
General model selection estimation of a periodic regression with a Gaussian noise
Konev, Victor; 10.1007/s10463-008-0193-1
2010-01-01
This paper considers the problem of estimating a periodic function in a continuous-time regression model with an additive stationary Gaussian noise having an unknown correlation function. A general model selection procedure based on arbitrary projective estimates, which does not require knowledge of the noise correlation function, is proposed. A non-asymptotic upper bound for the quadratic risk (an oracle inequality) is derived under mild conditions on the noise. For Ornstein-Uhlenbeck noise, the risk upper bound is shown to be uniform in the nuisance parameter. In the case of Gaussian white noise, the constructed procedure has some advantages over the procedure based on least squares estimates (LSE). The asymptotic minimaxity of the estimates is proved. The proposed model selection scheme is also extended to estimation from discrete data, applicable to situations where high-frequency sampling cannot be provided.
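The flavor of a penalized projective (model selection) estimator can be illustrated with a trigonometric basis and a Mallows-type penalty. This is a simplified sketch, not the paper's procedure: the signal, the noise level (assumed known here, unlike in the paper), and the penalty constant are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
t = np.linspace(0.0, 1.0, n, endpoint=False)
f = 1.5 * np.sin(2 * np.pi * t) + 0.8 * np.cos(6 * np.pi * t)  # period-1 signal (assumed)
sigma = 0.5                                                    # noise level (assumed known)
y = f + rng.normal(0.0, sigma, n)

def design(d):
    """Trigonometric (projective) basis of dimension 2d+1."""
    cols = [np.ones(n)]
    for k in range(1, d + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.column_stack(cols)

best = None
for d in range(1, 15):
    X = design(d)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    crit = rss + 2.0 * sigma**2 * (2 * d + 1)   # Mallows' C_p-style penalized risk
    if best is None or crit < best[0]:
        best = (crit, d, X @ beta)
crit, d_hat, fit = best
```

The oracle-inequality viewpoint is that the selected estimator's risk is close to the best risk over all candidate projection dimensions, without knowing the true smoothness in advance.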
Novel Methods for Surface EMG Analysis and Exploration Based on Multi-Modal Gaussian Mixture Models.
Directory of Open Access Journals (Sweden)
Anna Magdalena Vögele
Full Text Available This paper introduces a new method for analyzing animal muscle activation data during locomotion. It is based on fitting Gaussian mixture models (GMMs) to surface EMG (sEMG) data. This approach enables researchers to isolate parts of the overall muscle activation within locomotion EMG data. Furthermore, it provides new opportunities for the analysis and exploration of sEMG data by using the resulting Gaussian modes as atomic building blocks for hierarchical clustering. In our experiments, composite peak models representing the general activation pattern per sensor location (one sensor on the long back muscle and three sensors on the gluteus muscle on each body side) were identified for each of the 14 horses during walk and trot. We thereby show the applicability of the method for identifying composite peak models, which describe the activation of different muscles throughout cycles of locomotion.
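A minimal from-scratch EM fit of a 1-D two-component GMM illustrates how Gaussian modes could serve as "atomic building blocks" for an activation curve. The synthetic burst centers, component count, and initialization below are assumptions of this sketch, not the paper's sEMG data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic 1-D "activation time" sample: two bursts at different phase centers
x = np.concatenate([rng.normal(0.3, 0.05, 300), rng.normal(0.7, 0.08, 200)])

K = 2
w = np.full(K, 1.0 / K)                  # mixture weights
mu = np.array([0.2, 0.8])                # initial means (assumed)
var = np.full(K, 0.05)                   # initial variances

def normal_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(200):                     # EM iterations
    # E-step: responsibilities of each component for each sample
    r = w * normal_pdf(x[:, None], mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: reweighted moment updates
    nk = r.sum(axis=0)
    w = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

Each fitted (w, mu, var) triple is one Gaussian mode; a hierarchical clustering of such modes across sensors and strides is the kind of downstream analysis the paper describes.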
Frequency domain wave equation forward modeling using Gaussian elimination with static pivoting
Jian-Yong, Song; Xiao-Dong, Zheng; Yan, Zhang; Ji-Xiang, Xu; Zhen, Qin; Xue-Juan, Song
2011-03-01
Frequency-domain wave equation forward modeling is a problem of solving large-scale sparse linear systems, which is often subject to limits on computational efficiency and memory storage. Conventional Gaussian elimination cannot handle the parallel computation of huge data volumes. Therefore, we use the Gaussian elimination with static pivoting (GESP) method for sparse matrix decomposition and multi-source finite-difference modeling. The GESP method not only improves computational efficiency but also benefits the distributed parallel computation of the matrix decomposition within a single frequency point. We test the proposed method using the classic Marmousi model. Both the single-frequency wave field and the time-domain seismic section show that the proposed method improves simulation accuracy and computational efficiency and makes full use of memory. This method can lay the basis for waveform inversion.
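The factor-once/solve-many workflow behind multi-source frequency-domain modeling can be sketched with SciPy's SuperLU interface. One hedge: `scipy.sparse.linalg.splu` uses threshold partial pivoting rather than true static pivoting (GESP proper lives in SuperLU_DIST), and the 2-D Helmholtz system, wavenumber, and source positions below are invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 50                        # grid points per side (assumed)
h = 1.0 / (n + 1)
k = 20.0                      # wavenumber for a constant-velocity model (assumed)

# 2-D Helmholtz operator (Laplacian + k^2 I) by Kronecker assembly
I = sp.identity(n)
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)) / h**2 + (k**2) * sp.identity(n * n)
A = A.tocsc()

lu = splu(A)                  # factor ONCE for this frequency

# Three point sources ("shots") -> three right-hand sides reusing the factors
sources = np.zeros((n * n, 3))
for j, idx in enumerate([n * n // 4, n * n // 2, 3 * n * n // 4]):
    sources[idx, j] = 1.0 / h**2
fields = np.column_stack([lu.solve(sources[:, j]) for j in range(3)])
```

The economics match the abstract: the expensive step is the single factorization per frequency, after which each additional source costs only a cheap triangular solve.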
Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples
Melchior, Peter
2016-01-01
We extend the common mixtures-of-Gaussians density estimation approach to account for a known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing expectation-maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at https://github.com/pmelchior/pyGMMis
ITAC volume assessment through a Gaussian hidden Markov random field model-based algorithm.
Passera, Katia M; Potepan, Paolo; Brambilla, Luca; Mainardi, Luca T
2008-01-01
In this paper, a semi-automatic segmentation method for volume assessment of intestinal-type adenocarcinoma (ITAC) is presented and validated. The method is based on a Gaussian hidden Markov random field (GHMRF) model, an advanced version of a finite Gaussian mixture (FGM) model in that it encodes spatial information through the mutual influences of neighboring sites. To fit the GHMRF model, an expectation maximization (EM) algorithm is used. We applied the method to magnetic resonance data sets (each composed of T1-weighted, contrast-enhanced T1-weighted, and T2-weighted images) for a total of 49 tumor-containing slices. We tested GHMRF performance with respect to FGM by both a numerical and a clinical evaluation. Results show that the proposed method has a higher accuracy in quantifying lesion area than FGM and can be applied in the evaluation of tumor response to therapy.
Dropka, Natasha; Holena, Martin
2017-08-01
In directional solidification of silicon, the shape of the solid-liquid interface plays a crucial role in the quality of crystals. The interface shape can be influenced by forced convection using travelling magnetic fields. Up to now, there has been no general and explicit methodology to identify the relation and the optimum combination of magnetic and growth parameters (e.g., frequency, phase shift, current magnitude) and interface deflection in a buoyancy regime. In the present study, 2D CFD modeling was used to generate data for the design and training of artificial neural networks and for Gaussian process modeling. The aim was to quickly assess the complex nonlinear dependences among the parameters and to optimize them for interface flattening. The first encouraging results are presented, and the pros and cons of artificial neural networks and Gaussian process modeling are discussed.
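A bare-bones Gaussian process regression sketch shows the kind of surrogate used here. The squared-exponential kernel, length scale, noise level, and the toy function standing in for a CFD quantity of interest are all assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, ell=0.3, amp=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return amp**2 * np.exp(-0.5 * (d / ell) ** 2)

# Toy stand-in for a CFD response, e.g. interface deflection vs. a control parameter
X = np.linspace(0.0, 1.0, 20)
noise = 0.05
y = np.sin(4.0 * X) + rng.normal(0.0, noise, X.size)

K = rbf(X, X) + noise**2 * np.eye(X.size)       # training covariance + noise
Xs = np.linspace(0.0, 1.0, 100)                 # prediction grid
Ks = rbf(Xs, X)
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha                               # posterior predictive mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None)) # predictive uncertainty
```

The predictive standard deviation is what distinguishes a GP surrogate from a plain neural network fit: it flags regions of parameter space where more CFD runs would be informative.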
DEFF Research Database (Denmark)
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
2002-01-01
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals...... are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability...... reliability method; Model correction factor method; Nataf field integration; Non-Gaussian random field; Random field integration; Structural reliability; Pile foundation reliability...
Gaussian estimation for discretely observed Cox-Ingersoll-Ross model
Wei, Chao; Shu, Huisheng; Liu, Yurong
2016-07-01
This paper is concerned with the parameter estimation problem for the Cox-Ingersoll-Ross model based on discrete observations. First, a new discretized process is built based on the Euler-Maruyama scheme. Then, the parameter estimators are obtained by employing the maximum likelihood method, and explicit expressions for the estimation error are given. Subsequently, the consistency of all parameter estimators is proved by applying the law of large numbers for martingales together with the Hölder, Burkholder-Davis-Gundy, and Cauchy-Schwarz inequalities. Finally, a numerical simulation example for the estimators, and for the absolute error between the estimators and the true values, is presented to demonstrate the effectiveness of the estimation approach used in this paper.
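The Euler-Maruyama discretization of the CIR process dx = κ(θ − x)dt + σ√x dW, and a simple estimator built from the discretized increments, can be sketched as follows. The parameter values are assumed, and conditional least squares on the increments stands in here for the paper's maximum-likelihood estimators.

```python
import numpy as np

rng = np.random.default_rng(7)
kappa, theta, sigma = 2.0, 0.5, 0.2      # true parameters (assumed for the demo)
dt, n = 1e-3, 200_000
x = np.empty(n)
x[0] = theta
dW = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):                   # Euler-Maruyama discretization
    drift = kappa * (theta - x[i]) * dt
    x[i + 1] = x[i] + drift + sigma * np.sqrt(max(x[i], 0.0)) * dW[i]

# Conditional least squares on the increments:
#   dx_i ≈ kappa*theta*dt - kappa*dt*x_i + sigma*sqrt(x_i)*dW_i
dx = np.diff(x)
b, a = np.polyfit(x[:-1], dx, 1)         # slope b = -kappa*dt, intercept a = kappa*theta*dt
kappa_hat = -b / dt
theta_hat = -a / b
resid = dx - (a + b * x[:-1])
sigma_hat = np.sqrt(np.mean(resid**2 / (x[:-1] * dt)))   # Var(resid_i) = sigma^2 * x_i * dt
```

With 2κθ > σ² (the Feller condition, satisfied here) the simulated path stays positive, and all three parameters are recovered from a single discretely observed trajectory.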
Modeling basin- and plume-scale processes of CO2 storage for full-scale deployment
Energy Technology Data Exchange (ETDEWEB)
Zhou, Q.; Birkholzer, J.T.; Mehnert, E.; Lin, Y.-F.; Zhang, K.
2009-08-15
Integrated modeling of basin- and plume-scale processes induced by full-scale deployment of CO{sub 2} storage was applied to the Mt. Simon Aquifer in the Illinois Basin. A three-dimensional mesh was generated with local refinement around 20 injection sites, with approximately 30 km spacing. A total annual injection rate of 100 Mt CO{sub 2} over 50 years was used. The CO{sub 2}-brine flow at the plume scale and the single-phase flow at the basin scale were simulated. Simulation results show the overall shape of a CO{sub 2} plume consisting of a typical gravity-override subplume in the bottom injection zone of high injectivity and a pyramid-shaped subplume in the overlying multilayered Mt. Simon, indicating the important role of a secondary seal with relatively low-permeability and high-entry capillary pressure. The secondary-seal effect is manifested by retarded upward CO{sub 2} migration as a result of multiple secondary seals, coupled with lateral preferential CO{sub 2} viscous fingering through high-permeability layers. The plume width varies from 9.0 to 13.5 km at 200 years, indicating the slow CO{sub 2} migration and no plume interference between storage sites. On the basin scale, pressure perturbations propagate quickly away from injection centers, interfere after less than 1 year, and eventually reach basin margins. The simulated pressure buildup of 35 bar in the injection area is not expected to affect caprock geomechanical integrity. Moderate pressure buildup is observed in Mt. Simon in northern Illinois. However, its impact on groundwater resources is less than the hydraulic drawdown induced by long-term extensive pumping from overlying freshwater aquifers.
The influence of model resolution on ozone in industrial volatile organic compound plumes.
Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G
2010-09-01
Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than under typical urban conditions. Modeling studies found that simulating high O3 production requires a superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-km), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) at both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration by 3% but decreases the maximum Ox concentration by only 0.2%. The two grid sizes produce approximately equal Ox mass, but by different reaction pathways. Derived sensitivities suggest resolution-specific responses to oxides of nitrogen (NOx) and VOC emissions. Different sensitivity to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind). Sensitivity of O3 to emission changes also results in different simulated O3 responses to the same control strategies. The sensitivity of O3 to NOx and VOC emission changes is attributed to the finer-resolved Eulerian grid and finer-resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with plume-in-grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. On the basis of these results, the
Observation and modeling of the evolution of Texas power plant plumes
Directory of Open Access Journals (Sweden)
W. Zhou
2011-07-01
Full Text Available During the second Texas Air Quality Study 2006 (TexAQS II), a full range of pollutants was measured by aircraft in eastern Texas during successive transects of power plant plumes (PPPs). A regional photochemical model is applied to simulate the physical and chemical evolution of the plumes. The observations reveal that SO_{2} and NO_{y} were rapidly removed from PPPs on a cloudy day but not on the cloud-free days, indicating efficient aqueous processing of these compounds in clouds. The model reasonably represents observed NO_{x} oxidation and PAN formation in the plumes, but fails to capture the rapid loss of SO_{2} (0.37 h^{−1}) and NO_{y} (0.24 h^{−1}) in some plumes on the cloudy day. Adjustments to the cloud liquid water content (QC) and the default metal concentrations in the cloud module could explain some of the SO_{2} loss. However, NO_{y} in the model was insensitive to QC. These findings highlight cloud processing as a major challenge to atmospheric models. Model-based estimates of ozone production efficiency (OPE) in PPPs are 20–50 % lower than observation-based estimates. Possible explanations for this discrepancy include the observed rapid NO_{y} loss, which biases high some observation-based OPE estimates, and the model's under-prediction of isoprene emissions.
Energy Technology Data Exchange (ETDEWEB)
Dunn, W E; Policastro, A J; Paddock, R A
1975-05-01
This report evaluates mathematical models that may be used to predict the flow and temperature distributions resulting from heated surface discharges from power-plant outfalls. Part One discusses the basic physics of surface-plume dispersion and provides a critical review of 11 of the most popular and promising plume models developed to predict the near- and complete-field plume. The principal conclusion of the report is that the available models, in their present stage of development, may be used to give only general estimates of plume characteristics; precise predictions are not currently possible. The Shirazi-Davis and Pritchard (No. 1) models appear superior to the others tested and are capable of correctly predicting general plume characteristics. (The predictions show roughly factor-of-two accuracy in centerline distance to a given isotherm, factor-of-two accuracy in plume width, and factor-of-five accuracy in isotherm areas.) The state of the art can best be improved by pursuing basic laboratory studies of plume dispersion along with further development of numerical-modeling techniques.
A Hybrid DSMC/Free-Molecular Model of the Enceladus South Polar Plume
Keat Yeoh, Seng; Chapman, T. A.; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.
2012-10-01
Cassini first detected a gas-particle plume over the south pole of Enceladus in 2005. Since then, the plume has been a very active area of research because unlocking its mystery may help answer many lingering questions and open doors to new possibilities, such as the existence of extra-terrestrial life. Here, we present a hybrid model of the Enceladus gas-particle plume. Our model places eight sources on the surface of Enceladus based on the locations and jet orientations determined by Spitale and Porco (2007). We simulate the expansion of water vapor into vacuum, in the presence of dust particles from each source. The expansion is divided into two regions: the dense, collisional region near the source is simulated using the direct simulation Monte Carlo method, and the rarefied, collisionless region farther out is simulated using a free-molecular model. We also incorporate the effects of a sublimation atmosphere, a sputtered atmosphere and the background E-ring. Our model results are matched with the Cassini in-situ data, especially the Ion and Neutral Mass Spectrometer (INMS) water density data collected during the E2, E3, E5 and E7 flybys and the Ultraviolet Imaging Spectrograph (UVIS) stellar occultation observation made in 2005. Furthermore, we explore the time-variability of the plume by adjusting the individual source strengths to obtain a best curve-fit to the water density data in each flyby. We also analyze the effects of grains on the gas through a parametric study. We attempt to constrain the source conditions and gain insight on the nature of the source via our detailed models.
CSIR Research Space (South Africa)
Miya, WS
2008-10-01
Full Text Available In this paper, a comparison between Extension Neural Network (ENN), Gaussian Mixture Model (GMM) and Hidden Markov model (HMM) is conducted for bushing condition monitoring. The monitoring process is a two-stage implementation of a classification...
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
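The core of a polynomial chaos surrogate—projecting a model output onto orthogonal polynomials of a standardized random input—can be sketched in a few lines. The 1-D toy output function, the Gaussian input, and the regression-based (non-intrusive) fit are assumptions of this sketch, not the oil-gas plume model itself.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(2)

def model(q):
    """Stand-in for an expensive plume-model output, e.g. trap height."""
    return np.sin(q) + 0.3 * q**2

xi = rng.normal(size=400)          # standardized uncertain input samples
y = model(xi)                      # "ensemble calculations"

deg = 6
# Design matrix of probabilists' Hermite polynomials He_0..He_deg at the samples
V = np.column_stack([He.hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# Orthogonality under N(0,1): E[He_j He_k] = k! * delta_jk, so output statistics
# follow directly from the expansion coefficients (no further model runs needed).
mean_pc = coef[0]
var_pc = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, deg + 1))
```

In the multi-parameter setting of the abstract, the same coefficient algebra yields the analysis of variance: each input's contribution to `var_pc` comes from the coefficients of the polynomials involving that input.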
van Breukelen, Boris M; Griffioen, Jasper; Röling, Wilfred F M; van Verseveld, Henk W
2004-06-01
The biogeochemical processes governing leachate attenuation inside a landfill leachate plume (Banisveld, the Netherlands) were revealed and quantified using the 1D reactive transport model PHREEQC-2. Biodegradation of dissolved organic carbon (DOC) was simulated assuming first-order oxidation of two DOC fractions with different reactivity, and was coupled to reductive dissolution of iron oxide. The following secondary geochemical processes were required in the model to match observations: kinetic precipitation of calcite and siderite, cation exchange, proton buffering and degassing. Rate constants for DOC oxidation and carbonate mineral precipitation were determined, and other model parameters were optimized using the nonlinear optimization program PEST by means of matching hydrochemical observations closely (pH, DIC, DOC, Na, K, Ca, Mg, NH4, Fe(II), SO4, Cl, CH4, saturation index of calcite and siderite). The modelling demonstrated the relevance and impact of various secondary geochemical processes on leachate plume evolution. Concomitant precipitation of siderite masked the act of iron reduction. Cation exchange resulted in release of Fe(II) from the pristine anaerobic aquifer to the leachate. Degassing, triggered by elevated CO2 pressures caused by carbonate precipitation and proton buffering at the front of the plume, explained the observed downstream decrease in methane concentration. Simulation of the carbon isotope geochemistry independently supported the proposed reaction network.
Modeling of Heat Transfer and Ablation of Refractory Material Due to Rocket Plume Impingement
Harris, Michael F.; Vu, Bruce T.
2012-01-01
CR Tech's Thermal Desktop-SINDA/FLUINT software was used in the thermal analysis of a flame deflector design for Launch Complex 39B at Kennedy Space Center, Florida. The analysis of the flame deflector takes into account heat transfer due to plume impingement from expected vehicles to be launched at KSC. The heat flux from the plume was computed using computational fluid dynamics provided by Ames Research Center in Moffett Field, California. The results from the CFD solutions were mapped onto a 3-D Thermal Desktop model of the flame deflector using the boundary condition mapping capabilities in Thermal Desktop. The ablation subroutine in SINDA/FLUINT was then used to model the ablation of the refractory material.
Kasprzyk, I; Walanus, A
2014-01-01
The characteristics of a pollen season, such as timing and magnitude, depend on a number of factors, including the biology of the plant and environmental conditions. The main aim of this study was to develop mathematical models that explain the dynamics of atmospheric concentrations of pollen and fungal spores recorded in Rzeszów (SE Poland) in 2000-2002. Plant taxa with different characteristics in the timing, duration and curve of their pollen seasons, as well as several fungal taxa, were selected for this analysis. Gaussian, gamma and logistic distribution models were examined, and their effectiveness in describing the occurrence of airborne pollen and fungal spores was compared. The Gaussian and differential logistic models were very good at describing pollen seasons with just one peak. This is typical for pollen types with just one dominant species in the flora and when the weather, in particular temperature, is stable during the pollination period. Based on the s parameter of the Gaussian function, the dates of the main pollen season can be defined. Although seasonal curves are often characterised by positive skewness, the model based on the gamma distribution proved not to be very effective.
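Fitting a Gaussian curve to a pollen-season concentration record, and reading season limits off its s parameter, can be sketched as follows. The daily counts, noise level, and the mu ± 2s season definition are assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, mu, s):
    """Gaussian season curve: amplitude a, peak day mu, spread s."""
    return a * np.exp(-0.5 * ((t - mu) / s) ** 2)

days = np.arange(60, 121)                 # hypothetical spring season, day-of-year
true = gauss(days, 120.0, 90.0, 7.0)      # assumed underlying season
rng = np.random.default_rng(3)
counts = np.clip(true + rng.normal(0.0, 5.0, days.size), 0.0, None)

(a, mu, s), _ = curve_fit(gauss, days, counts, p0=[100.0, 85.0, 10.0])
season_start, season_end = mu - 2 * s, mu + 2 * s   # ~95% of pollen within mu ± 2s
```

A gamma-density curve with an extra shape parameter could be fitted with the same `curve_fit` call to capture the positive skewness the abstract mentions.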
The Gaussian approximation for multi-color generalized Friedman’s urn model
Institute of Scientific and Technical Information of China (English)
2009-01-01
The generalized Friedman's urn model is a popular urn model which is widely used in many disciplines. In particular, it is extensively used in treatment allocation schemes in clinical trials. In this paper, we show that both the urn composition process and the allocation proportion process can be approximated by a multi-dimensional Gaussian process almost surely for a multi-color generalized Friedman's urn model with both homogeneous and non-homogeneous generating matrices. The Gaussian process is a solution of a stochastic differential equation. This Gaussian approximation is important for understanding the behavior of the urn process and is also useful for statistical inference. As an application, we obtain asymptotic properties, including asymptotic normality and the law of the iterated logarithm, for a multi-color generalized Friedman's urn model, as well as for the randomized-play-the-winner rule as a special case.
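The randomized play-the-winner rule mentioned as a special case is easy to simulate, and the allocation proportion can be checked against its almost-sure limit q2/(q1+q2), where q_i = 1 − p_i are the failure probabilities. The success probabilities below are assumed for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
p = np.array([0.7, 0.4])          # treatment success probabilities (assumed)
urn = np.array([1.0, 1.0])        # start with one ball of each color
n = 50_000
assigned = np.zeros(2)
for _ in range(n):
    # Draw a ball with probability proportional to urn composition
    color = 0 if rng.random() < urn[0] / urn.sum() else 1
    assigned[color] += 1
    if rng.random() < p[color]:
        urn[color] += 1           # success: reinforce the same color
    else:
        urn[1 - color] += 1       # failure: add a ball of the other color
prop = assigned[0] / n

q = 1.0 - p
limit = q[1] / (q[0] + q[1])      # a.s. limit of the allocation proportion
```

The fluctuation of `prop` around `limit` at scale 1/sqrt(n) is exactly what the paper's Gaussian approximation describes (the CLT regime requires q1 + q2 > 1/2, which holds here).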
2.5-D/3-D resistivity modelling in anisotropic media using Gaussian quadrature grids
Zhou, Bing; Greenhalgh, Mark; Greenhalgh, S. A.
2009-01-01
We present a new numerical scheme for 2.5-D/3-D direct current resistivity modelling in heterogeneous, anisotropic media. This method, named the 'Gaussian quadrature grid' (GQG) method, cooperatively combines the solution of the variational principle of the partial differential equation, Gaussian quadrature abscissae, and local cardinal functions, so that it has the main advantages of the spectral element method. The formulation shows that the GQG method is a modification of the spectral element method but does not employ constant elements or require the mesh generator to match the Earth's surface. This makes it much easier to deal with geological models having 2-D/3-D complex topography than with traditional numerical methods. The GQG technique can achieve a convergence rate similar to that of the spectral element method. We show that it transforms the 2.5-D/3-D resistivity modelling problem into a sparse, symmetric linear equation system that can be solved by an iterative or matrix inversion method. Comparison with analytic solutions for homogeneous isotropic and anisotropic models shows that the error depends on the Gaussian quadrature order (abscissa number) and the subdomain size: the higher the order or the smaller the subdomain size employed, the more accurate the results obtained. Several other synthetic examples, both homogeneous and inhomogeneous, incorporating sloping, undulating and severe topography, are presented and found to yield results comparable to finite element solutions involving a dense mesh.
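The Gaussian quadrature abscissae and weights at the heart of the GQG method are standard objects; a minimal check of their accuracy on smooth integrands (unrelated to the resistivity equations themselves) illustrates why few abscissae per subdomain suffice.

```python
import numpy as np

# 5-point Gauss-Legendre abscissae and weights on [-1, 1]
x, w = np.polynomial.legendre.leggauss(5)

# Integrate exp(x) over [-1, 1]: exact for polynomials up to degree 9,
# and nearly exact for smooth functions like exp
approx = np.sum(w * np.exp(x))
exact = np.e - np.exp(-1.0)

# Affine map of the same rule to an arbitrary interval [a, b]
a, b = 0.0, 2.0
xm = 0.5 * (b - a) * x + 0.5 * (b + a)
approx2 = 0.5 * (b - a) * np.sum(w * np.sin(xm))   # ∫_0^2 sin = 1 - cos(2)
exact2 = 1.0 - np.cos(2.0)
```

An n-point rule integrates polynomials of degree 2n−1 exactly, which is why raising the quadrature order (or shrinking the subdomains) drives down the error in the abstract's convergence tests.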
Institute of Scientific and Technical Information of China (English)
Li Ya-Qing; Wu Zhen-Sen
2012-01-01
On the basis of the extended Huygens-Fresnel principle and the model of the refractive-index structure constant in atmospheric turbulence proposed by the International Telecommunication Union Radiocommunication Sector, the characteristics of partially coherent Gaussian Schell-model (GSM) beams propagating in slanted atmospheric turbulence are studied. Using the cross-spectral density function (CSDF), we derive the expressions for the effective beam radius, the spreading angle, and the average intensity. The variance of the angle-of-arrival fluctuation and the wander effect of the GSM beam in the turbulence are calculated numerically. The influences of the coherence degree, the propagation distance, the propagation height, and the waist radius on the propagation characteristics of the partially coherent beams are discussed and compared with those of fully coherent Gaussian beams.
Automated sleep spindle detection using IIR filters and a Gaussian Mixture Model.
Patti, Chanakya Reddy; Penzel, Thomas; Cvetkovic, Dean
2015-08-01
Sleep spindle detection using modern signal processing techniques, such as the Short-Time Fourier Transform and wavelet analysis, is a common research approach. These methods are computationally intensive, especially when analysing data from overnight sleep recordings. The authors of this paper propose an alternative using pre-designed IIR filters and a multivariate Gaussian Mixture Model. Features extracted with IIR filters are clustered using a Gaussian Mixture Model without the use of any subject-independent thresholds. The algorithm was tested on a database consisting of overnight sleep PSG of 5 subjects and an online public spindle database consisting of six 30-minute sleep excerpts. An overall sensitivity of 57% and a specificity of 98.24% were achieved in the overnight database group, and a sensitivity of 65.19% at a 16.9% false positive proportion for the 6 sleep excerpts.
Mean-field dynamic criticality and geometric transition in the Gaussian core model
Coslovich, Daniele; Ikeda, Atsushi; Miyazaki, Kunimasa
2016-04-01
We use molecular dynamics simulations to investigate dynamic heterogeneities and the potential energy landscape of the Gaussian core model (GCM). Despite the nearly Gaussian statistics of particles' displacements, the GCM exhibits giant dynamic heterogeneities close to the dynamic transition temperature. The divergence of the four-point susceptibility is quantitatively well described by the inhomogeneous version of the mode-coupling theory. Furthermore, the potential energy landscape of the GCM is characterized by large energy barriers, as expected from the lack of activated, hopping dynamics, and displays features compatible with a geometric transition. These observations demonstrate that all major features of mean-field dynamic criticality can be observed in a physically sound, three-dimensional model.
Non-gaussian Test Models for Prediction and State Estimation with Model Errors
Institute of Scientific and Technical Information of China (English)
Michal BRANICKI; Nan CHEN; Andrew J.MAJDA
2013-01-01
Turbulent dynamical systems involve dynamics with both a large-dimensional phase space and a large number of positive Lyapunov exponents. Such systems are ubiquitous in applications in contemporary science and engineering, where statistical ensemble prediction and real-time filtering/state estimation are needed despite the underlying complexity of the system. Statistically exactly solvable test models play a crucial role in providing firm mathematical underpinning and new algorithms for vastly more complex scientific phenomena. Here, a class of statistically exactly solvable non-Gaussian test models is introduced, where a generalized Feynman-Kac formulation reduces the exact behavior of conditional statistical moments to the solution of inhomogeneous Fokker-Planck equations modified by linear lower-order coupling and source terms. This procedure is applied to a test model with hidden instabilities and is combined with information theory to address two important issues in the contemporary statistical prediction of turbulent dynamical systems: coarse-grained ensemble prediction in a perfect model and improving long-range forecasting in imperfect models. The models discussed here should be useful for many other applications and algorithms for real-time prediction and state estimation.
Simulation of Mexico City plumes during the MIRAGE-Mex field campaign using the WRF-Chem model
Directory of Open Access Journals (Sweden)
X. Tie
2009-07-01
Full Text Available The quantification of tropospheric O_{3} production downwind of the Mexico City plume is a major objective of the MIRAGE-Mex field campaign. We used a regional chemistry-transport model (WRF-Chem) to predict the distribution of O_{3} and its precursors in Mexico City and the surrounding region during March 2006, and compared the model with in-situ aircraft measurements of O_{3}, CO, VOCs, NO_{x}, and NO_{y} concentrations. The comparison shows that the model is capable of capturing the timing and location of the measured city plumes, and the calculated variability along the flights is generally consistent with the measured results, showing a rapid increase in O_{3} and its precursors when city plumes are detected. However, there are some notable differences between the calculated and measured values, suggesting that, during transport from the surface of the city to the outflow plume, ozone mixing ratios are underestimated by about 0–25% during different flights. The calculated O_{3}-NO_{x}, O_{3}-CO, and O_{3}-NO_{z} correlations generally agree with the measured values, and the analyses of these correlations suggest that photochemical O_{3} production continues in the plume downwind of the city (aged plume), adding to the O_{3} already produced in the city and exported with the plume. The model is also used to quantify the contributions to OH reactivity from various compounds in the aged plume. This analysis suggests that oxygenated organics (OVOCs) have the highest OH reactivity and play important roles in the O_{3} production in the aging plume. Furthermore, O_{3} production per NO_{x} molecule consumed (O_{3} production efficiency) is more efficient in the aged plume than in the young plume near the city. The major contributor to the high O_{3} production efficiency in the aged plume is the
Institute of Scientific and Technical Information of China (English)
Ge Di; Cai Yang-Jian; Lin Qiang
2005-01-01
By use of a tensor method, the transform formulae for the beam coherence-polarization matrix of the partially polarized Gaussian Schell-model (GSM) beams through aligned and misaligned optical systems are derived. As an example, the propagation properties of the partially polarized GSM beam passing through a misaligned thin lens are illustrated numerically and discussed in detail. The derived formulae provide a convenient way to study the propagation properties of the partially polarized GSM beams through aligned and misaligned optical systems.
Latent Gaussian modeling and INLA: A review with focus on space-time applications
Opitz, Thomas
2016-01-01
Bayesian hierarchical models with latent Gaussian layers have proven very flexible in capturing complex stochastic behavior and hierarchical structures in high-dimensional spatial and spatio-temporal data. Whereas simulation-based Bayesian inference through Markov Chain Monte Carlo may be hampered by slow convergence and numerical instabilities, the inferential framework of Integrated Nested Laplace Approximation (INLA) is capable of providing accurate and relatively fast analytical approxima...
Focusing properties of Gaussian Schell-model beams by an astigmatic aperture lens
Institute of Scientific and Technical Information of China (English)
Pan Liu-Zhan; Ding Chao-Liang
2007-01-01
This paper studies the focusing properties of Gaussian Schell-model (GSM) beams by an astigmatic aperture lens. It is shown that the axial irradiance distribution, the maximum axial irradiance and its position of focused GSM beams by an astigmatic aperture lens depend upon the astigmatism of the lens, the coherence of the partially coherent light, the truncation parameter of the aperture and the Fresnel number. Numerical calculation results are given to illustrate how these parameters affect the focusing properties.
Gaussian and Affine Approximation of Stochastic Diffusion Models for Interest and Mortality Rates
Directory of Open Access Journals (Sweden)
Marcus C. Christiansen
2013-10-01
Full Text Available In the actuarial literature, it has become common practice to model future capital returns and mortality rates stochastically in order to capture market risk and forecasting risk. Although interest rates often should be, and mortality rates always have to be, non-negative, many authors use stochastic diffusion models with an affine drift term and additive noise. As a result, the diffusion process is Gaussian and, thus, analytically tractable, but negative values occur with positive probability. The argument is that the class of Gaussian diffusions would be a good approximation of the real future development. We challenge that reasoning and compare the asymptotics of diffusion processes with affine drift and a general noise term with those of corresponding diffusion processes with an affine drift term and either an affine noise term or additive noise. Our study helps to quantify the error that is made by approximating diffusive interest and mortality rate models with Gaussian diffusions and affine diffusions. In particular, we discuss forward interest and forward mortality rates and the error that approximations cause in the valuation of life insurance claims.
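The negative-value problem of Gaussian diffusions raised above is concrete in a Vasicek-type short-rate model, dr = a(b - r)dt + σ dW, where r_t is exactly Gaussian. The sketch below computes P(r_t < 0) in closed form; all parameter values are illustrative assumptions, not calibrated rates:

```python
import math

def vasicek_neg_prob(r0, a, b, sigma, t):
    """P(r_t < 0) for the Vasicek model dr = a(b - r)dt + sigma dW.

    r_t is Gaussian with mean r0*exp(-a t) + b*(1 - exp(-a t)) and
    variance sigma^2 * (1 - exp(-2 a t)) / (2 a), so the probability
    follows directly from the normal CDF.
    """
    mean = r0 * math.exp(-a * t) + b * (1.0 - math.exp(-a * t))
    var = sigma ** 2 * (1.0 - math.exp(-2.0 * a * t)) / (2.0 * a)
    z = -mean / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Low rate level with sizable volatility: negative values occur with
# non-negligible probability, which is the approximation error the
# abstract sets out to quantify.
p_neg = vasicek_neg_prob(r0=0.01, a=0.1, b=0.02, sigma=0.01, t=10.0)
```

With a high rate level and low volatility the same formula gives a negligible probability, which is why the Gaussian approximation is often considered acceptable in practice.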
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood, owing to a computational problem), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
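The fit-then-test workflow described here can be sketched in a few lines. The version below uses SciPy's `norminvgauss` rather than the R packages used in the paper, and fits synthetic NIG-distributed data, not the quoted exchange-rate series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "closing price" series drawn from a known NIG law stands
# in for the FX data used in the paper (parameters are arbitrary).
sample = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=0.0, scale=1.0,
                                size=2000, random_state=rng)

# Fit by maximum likelihood, then check goodness of fit with the
# Kolmogorov-Smirnov test, mirroring the criterion in the abstract.
params = stats.norminvgauss.fit(sample)
ks = stats.kstest(sample, 'norminvgauss', args=params)
```

A small KS statistic (equivalently, a large p-value) corresponds to the "excellent fit" verdicts reported above; the numerical MLE can fail to converge for awkward samples, which is the kind of computational problem the abstract reports for JPY/CHF.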
A Rough Set Bounded Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation.
Ji, Zexuan; Huang, Yubo; Sun, Quansen; Cao, Guo; Zheng, Yuhui
2017-01-01
Accurate image segmentation is an important issue in image processing, where Gaussian mixture models play an important part and have been proven effective. However, most Gaussian mixture model (GMM) based methods suffer from one or more limitations, such as limited noise robustness, over-smoothness for segmentations, and lack of flexibility to fit data. In order to address these issues, in this paper, we propose a rough set bounded asymmetric Gaussian mixture model with spatial constraint for image segmentation. First, based on our previous work where each cluster is characterized by three automatically determined rough-fuzzy regions, we partition the target image into three rough regions with two adaptively computed thresholds. Second, a new bounded indicator function is proposed to determine the bounded support regions of the observed data. The bounded indicator and posterior probability of a pixel that belongs to each sub-region are estimated with respect to the rough region where the pixel lies. Third, to further reduce over-smoothness for segmentations, two novel prior factors are proposed that incorporate the spatial information among neighborhood pixels; they are constructed based on the prior and posterior probabilities of the within- and between-clusters and consider the spatial direction. We compare our algorithm to state-of-the-art segmentation approaches in both synthetic and real images to demonstrate the superior performance of the proposed algorithm.
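For orientation, the baseline that such methods extend is a plain GMM fitted to pixel intensities by EM. The sketch below implements a symmetric two-component 1-D mixture on synthetic pixel data; it is the unconstrained baseline, not the rough-set bounded asymmetric model proposed in the paper:

```python
import numpy as np

def gmm_em_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns the per-pixel posterior of component 1 (a soft
    segmentation) and the fitted component means.
    """
    # crude initialization: split the data at its quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pixel
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                  / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return resp[:, 1], mu

# Synthetic two-class "image": dark background, bright object, noise.
rng = np.random.default_rng(5)
pixels = np.concatenate([rng.normal(0.2, 0.05, 2000),
                         rng.normal(0.7, 0.05, 1000)])
posterior, means = gmm_em_1d(pixels)
labels = posterior > 0.5
```

This intensity-only baseline ignores spatial context entirely, which is exactly the limitation (noise sensitivity) that the spatial prior factors in the paper are designed to fix.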
Performance modeling and analysis of parallel Gaussian elimination on multi-core computers
Directory of Open Access Journals (Sweden)
Fadi N. Sibai
2014-01-01
Full Text Available Gaussian elimination is used in many applications, and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely the Original method and the new Meet in the Middle –MiM– algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for a low number of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and a challenge in the exascale computing age.
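The serial baseline that these parallel variants build on is ordinary Gaussian elimination with partial pivoting. A compact, unoptimized sketch (not the MiM algorithm from the paper) is:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination with partial pivoting
    followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # pivot: bring the largest |entry| of column k to the diagonal
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # eliminate entries below the pivot
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution on the resulting upper-triangular system
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = gaussian_elimination(A, b)
```

The O(n^3) triple-loop structure visible here is what the paper's parallel methods partition across cores and SIMD lanes.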
Song, C. H.; Kim, H. S.; von Glasow, R.; Brimblecombe, P.; Kim, J.; Park, R. J.; Woo, J. H.
2010-06-01
Elevated levels of formaldehyde (HCHO) along the ship corridors have been observed by satellite sensors, such as ESA/ERS-2 GOME (Global Ozone Monitoring Experiment), and were also predicted by global 3-D chemistry-transport models. In this study, three likely sources of the elevated HCHO levels were investigated to identify the detailed sources and examine the contributions of the sources (budget) of the elevated levels of HCHO in the ship corridors using a newly-developed ship-plume photochemical/dynamic model: (1) primary HCHO emission from ships; (2) secondary HCHO production via the atmospheric oxidation of non-methane volatile organic compounds (NMVOCs) emitted from ships; and (3) atmospheric oxidation of CH4 within the ship plumes. From multiple ship-plume model simulations, CH4 oxidation by elevated levels of in-plume OH radicals was found to be the main factor responsible for the elevated levels of HCHO in the ship corridors. More than ~91% of the HCHO for the base ship-plume case (ITCT 2K2 ship-plume case) is produced by this atmospheric chemical process, except in the areas close to the ship stacks, where the main source of the elevated HCHO levels would be primary HCHO from the ships (due to the deactivation of CH4 oxidation from the depletion of in-plume OH radicals). Because of active CH4 oxidation (chemical destruction of CH4) by OH radicals, the instantaneous chemical lifetime of CH4 (τ_CH4) decreased to ~0.45 yr inside the ship plume, in contrast to a τ_CH4 of ~1.1 yr in the background (up to a ~41% decrease). A variety of likely ship-plume situations at three locations at different latitudes within the global ship corridors was also studied to determine the extent of the enhancements in the HCHO levels in the marine boundary layer (MBL) influenced by ship emissions. It was found that the ship-plume HCHO levels could be 20.5-434.9 pptv higher than the background HCHO levels depending on the latitudinal locations of the ship plumes (i
Modeling of Homogeneous Condensation in High Density Thruster Plumes
2010-06-04
...the nucleation process starting from the dimer formation and up, using the elementary kinetic theory for cluster-cluster and cluster-monomer collisions...
An inverse problem approach to modelling coastal effluent plumes
Lam, D. C. L.; Murthy, C. R.; Miners, K. C.
Formulated as an inverse problem, the diffusion parameters associated with length-scale dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporated with observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced a better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model, K_y = 0.075 σ_y^1.2, described by Murthy and Kenney.
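The optimization step in such an inverse approach amounts to fitting the parameters of a power-law diffusivity K_y = a σ_y^b against observations. The sketch below recovers (a, b) from synthetic data generated around the semi-empirical law quoted in the abstract; the data, noise level and starting guess are assumptions for illustration, not the Bronte dye-plume measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def diffusivity(sigma_y, a, b):
    """Length-scale dependent eddy diffusivity K_y = a * sigma_y**b."""
    return a * sigma_y ** b

# Synthetic "observed" diffusivities scattered around
# K_y = 0.075 * sigma_y^1.2 with 5% multiplicative noise.
rng = np.random.default_rng(1)
sigma_y = np.linspace(10.0, 500.0, 40)
k_obs = diffusivity(sigma_y, 0.075, 1.2) \
        * (1.0 + 0.05 * rng.standard_normal(40))

# Inverse problem: minimize the squared error between model and data.
(a_fit, b_fit), _ = curve_fit(diffusivity, sigma_y, k_obs, p0=(0.1, 1.0))
```

With real concentration data the error function would compare modelled and observed concentrations through the mass conservation equation, but the least-squares structure is the same.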
Modeling tools to Account for Ethanol Impacts on BTEX Plumes
Widespread usage of ethanol in gasoline leads to impacts at leak sites which differ from those of non-ethanol gasolines. The presentation reviews current research results on the distribution of gasoline and ethanol, biodegradation, phase separation and cosolvancy. Model results f...
An integrated numerical model for the prediction of Gaussian and billet shapes
DEFF Research Database (Denmark)
Hattel, Jesper; Pryds, Nini; Pedersen, Trine Bjerre
2004-01-01
Separate models for the atomisation and the deposition stages were recently integrated by the authors to form a unified model describing the entire spray-forming process. In the present paper, the focus is on describing the shape of the deposited material during the spray-forming process, obtained... by this model. After a short review of the models and their coupling, the important factors which influence the resulting shape, i.e. Gaussian or billet, are addressed. The key parameters, which are utilized to predict the geometry and dimension of the deposited material, are the sticking efficiency...
Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals
Buchert, Thomas; France, Martin J.; Steiner, Frank
2017-05-01
Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v_0, the first Minkowski Functional (MF), and the two other MFs, v_1 and v_2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ_0 for P(δT) and v_0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground-corrected masked Planck 2015 maps.
Predicting effects of cold shock: modeling the decline of a thermal plume
Energy Technology Data Exchange (ETDEWEB)
Becker, C.D.; Trent, D.S.; Schneider, M.J.
1977-10-01
Predicting direct impact of cold shock on aquatic organisms after termination of power plant thermal discharges requires thermal tests that provide quantitative data on the resistance of acclimated species to lower temperatures. Selected examples from the literature on cold shock resistance of freshwater and marine fishes are illustrated to show predictive use. Abrupt cold shock data may be applied to field situations involving either abrupt or gradual temperature declines but yield conservative estimates under the latter conditions. Gradual cold shock data may be applied where heated plumes gradually dissipate because poikilotherms partially compensate for lowering temperature regimes. A simplified analytical model is presented for estimating thermal declines in terminated plumes originating from offshore, submerged discharges where shear current and boundary effects are minimal. When applied to site-specific conditions, the method provides time-temperature distributions for correlation with cold resistance data and, therefore, aids in assessing cold shock impact on aquatic biota.
Rakesh, P T; Venkatesan, R; Hedde, Thierry; Roubin, Pierre; Baskaran, R; Venkatraman, B
2015-07-01
FLEXPART-WRF is a versatile model for the simulation of plume dispersion over complex terrain in a mesoscale region. This study deals with its application to the dispersion of hypothetical airborne gaseous radioactivity over a topographically complex nuclear site in southeastern France. A computational method for calculating the plume gamma dose to a ground-level receptor is introduced in FLEXPART using the point kernel method. A comparison with another similar dose-computing code, SPEEDI, is carried out. In SPEEDI the dose is calculated for specific grid sizes, the lowest available being 250 m, whereas in FLEXPART it is grid-independent. The spatial distribution of dose by both models is analyzed. Due to the ability of FLEXPART to utilize the spatio-temporal variability of meteorological variables as input, particularly the height of the PBL, the simulated dose values were higher than the SPEEDI estimates. FLEXPART-WRF in combination with the point kernel dose module gives a more realistic picture of the plume gamma dose distribution in complex terrain, a situation likely under accidental release of radioactivity in a mesoscale range.
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna
2016-05-25
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
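The estimation difficulty described here can be reproduced with the simplest linear Gaussian SSM, a local-level model, by profiling the Kalman-filter likelihood over the two variance parameters. This is a generic sketch with assumed parameter values, not the authors' polar-bear analysis:

```python
import numpy as np

def kalman_loglik(y, q, r):
    """Log-likelihood of a local-level SSM:
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    m, p, ll = y[0], 1.0, 0.0
    for obs in y[1:]:
        p = p + q                       # predict state variance
        s = p + r                       # innovation variance
        ll += -0.5 * (np.log(2.0 * np.pi * s) + (obs - m) ** 2 / s)
        k = p / s                       # Kalman gain
        m, p = m + k * (obs - m), (1.0 - k) * p
    return ll

rng = np.random.default_rng(3)
n, q_true, r_true = 500, 0.1, 1.0       # measurement error >> process noise
x = np.cumsum(np.sqrt(q_true) * rng.standard_normal(n))
y = x + np.sqrt(r_true) * rng.standard_normal(n)

# Grid-profile the likelihood over (q, r): when r dominates q, the
# surface is nearly flat along trade-offs between the two variances,
# which is what makes the parameters hard to estimate.
grid = [(q, r) for q in np.linspace(0.01, 0.5, 25)
               for r in np.linspace(0.2, 2.0, 25)]
q_hat, r_hat = max(grid, key=lambda qr: kalman_loglik(y, *qr))
```

Plotting the likelihood over this grid (omitted) shows the shallow ridge responsible for the biased estimates the abstract warns about.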
Modeling of particulate plumes transportation in boundary layers with obstacles
Karelsky, K. V.; Petrosyan, A. S.
2012-04-01
This presentation is aimed at creating and implementing a new physical model of impurity transfer (solid particles and heavy gases) in areas with non-flat and/or nonstationary boundaries. The main idea of the suggested method is to use inviscid equations for modeling solid-particle transport in the vicinity of a complex boundary. In a viscous atmosphere with an arbitrarily small coefficient of molecular viscosity, the no-slip boundary condition on a solid surface must be observed: the velocity reduces to zero at the surface. In this case Prandtl's hypothesis applies: for a rather wide range of conditions, the energy dissipation of atmospheric flows in the near-surface layers is comparable in magnitude with the manifestation of inertia forces. Accordingly, in atmospheric motion characterized by a high Reynolds number, a boundary layer forms near the planet's surface, within which the required transition occurs from zero velocity at the surface to magnitudes at the external boundary of the layer that are close to those of the ideal atmospheric flow. In that layer, steep velocity gradients cause viscous effects to be comparable in magnitude with the influence of inertia forces. Under the conditions considered, essential changes of the hydrodynamic fields near a solid boundary are caused not only by the no-slip condition but also by the varied relief of the surface: mountains, street canyons, individual buildings. Transport of solid particles, their ascent and precipitation, also results in dramatic changes of the meteorological fields. Since the dynamic processes of solid-particle transfer accompanying wind flow past surfaces of complex relief are our main interest, we use the equations of inviscid hydrodynamics: we accept, on the one hand, the idea of high wind gradients in the boundary layer and, on the other hand, disregard molecular viscosity in the two-phase atmosphere equations. We deal with describing high
EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS.
Mukherjee, Gourab; Johnstone, Iain M
We consider estimating the predictive density under Kullback-Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates.
Propagation of Coherent Gaussian Schell-Model Beam Array in a Misaligned Optical System
Institute of Scientific and Technical Information of China (English)
ZHOU Pu; WANG Xiao-Lin; MA Yan-Xing; MA Hao-Tong; XU Xiao-Jun; LIU Ze-Jin
2011-01-01
Based on a generalized Collins formula, the analytical formula for the propagation property of a coherent Gaussian Schell-model (GSM) beam array through a misaligned optical system is derived. As numerical examples, the propagation of a coherent GSM beam array in a typical misaligned optical system with a thin lens is evaluated. The influence of different misalignment parameters is calculated and the normalized-intensity distribution is graphically illustrated.
Álvarez-Romero, Jorge G; Devlin, Michelle; Teixeira da Silva, Eduardo; Petus, Caroline; Ban, Natalie C; Pressey, Robert L; Kool, Johnathan; Roberts, Jason J; Cerdeira-Estrada, Sergio; Wenger, Amelia S; Brodie, Jon
2013-04-15
Increased loads of land-based pollutants are a major threat to coastal-marine ecosystems. Identifying the affected marine areas and the scale of influence on ecosystems is critical to assess the impacts of degraded water quality and to inform planning for catchment management and marine conservation. Studies using remotely-sensed data have contributed to our understanding of the occurrence and influence of river plumes, and to our ability to assess exposure of marine ecosystems to land-based pollutants. However, refinement of plume modeling techniques is required to improve risk assessments. We developed a novel, complementary, approach to model exposure of coastal-marine ecosystems to land-based pollutants. We used supervised classification of MODIS-Aqua true-color satellite imagery to map the extent of plumes and to qualitatively assess the dispersal of pollutants in plumes. We used the Great Barrier Reef (GBR), the world's largest coral reef system, to test our approach. We combined frequency of plume occurrence with spatially distributed loads (based on a cost-distance function) to create maps of exposure to suspended sediment and dissolved inorganic nitrogen. We then compared annual exposure maps (2007-2011) to assess inter-annual variability in the exposure of coral reefs and seagrass beds to these pollutants. We found this method useful to map plumes and qualitatively assess exposure to land-based pollutants. We observed inter-annual variation in exposure of ecosystems to pollutants in the GBR, stressing the need to incorporate a temporal component into plume exposure/risk models. Our study contributes to our understanding of plume spatial-temporal dynamics of the GBR and offers a method that can also be applied to monitor exposure of coastal-marine ecosystems to plumes and explore their ecological influences. Copyright © 2013 Elsevier Ltd. All rights reserved.
Influence of mass transfer on bubble plume hydrodynamics
Directory of Open Access Journals (Sweden)
IRAN E. LIMA NETO
2016-03-01
Full Text Available This paper presents an integral model to evaluate the impact of gas transfer on the hydrodynamics of bubble plumes. The model is based on Gaussian-type self-similarity and functional relationships for the entrainment coefficient and the factor of momentum amplification due to turbulence. The impact of mass transfer on bubble plume hydrodynamics is investigated considering different bubble sizes, gas flow rates and water depths. The results revealed a relevant impact when fine bubbles are considered, even for moderate water depths. Additionally, model simulations indicate that for weak bubble plumes (i.e., with relatively low flow rates and large depths and slip velocities), both dissolution and turbulence can affect plume hydrodynamics, which demonstrates the importance of taking the momentum amplification factor relationship into account. For deeper water conditions, simulations of bubble dissolution/decompression using the present model and classical models available in the literature resulted in very good agreement for both aeration and oxygenation processes. Sensitivity analysis showed that the water depth, followed by the bubble size and the flow rate, are the most important parameters that affect plume hydrodynamics. Lastly, dimensionless correlations are proposed to assess the impact of mass transfer on plume hydrodynamics, including both the aeration and oxygenation modes.
Aerosol size distribution in a coagulating plume: Analytical behavior and modeling applications
Turco, Richard P.; Yu, Fangqun
In a previous paper (Turco and Yu, 1997), a series of analytical solutions were derived for the problem of aerosol coagulation in an expanding plume, as from a jet engine. Those solutions were shown to depend on a single dimensionless time-dependent number, NT, which is related to the particle coagulation kernel and the plume volume. Here, we derive a new analytical expression that describes the particle size distribution in an expanding plume in terms of NT. We show how this solution can be extended to include the effects of soot particles on the evolving volatile sulfuric acid aerosols in an aircraft wake. Our solutions apply primarily to cases where changes in the size distribution—beyond an initial period encompassing emission and prompt nucleation/condensation—are controlled mainly by coagulation. The analytical size distributions allow most of the important properties of an evolving aerosol population—mean size, number greater than a minimum size, surface area density, size dependent reactivities, and optical properties—to be estimated objectively. We have applied our analytical solution to evaluate errors associated with numerical diffusion in a detailed microphysical code, and demonstrate that, if care is not exercised in solving the coagulation equation, substantial errors can result in the predictions at large particle sizes. This effect is particularly important when comparisons between models and field observations are carried out. The analytical expressions derived here can also be employed to initialize models that do not resolve individual aircraft plumes, by providing a simple means for parameterizing the initial aerosol properties after an appropriate mixing time.
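A minimal illustration of coagulation-driven decay of particle number is sketched below. This is a constant-kernel, non-expanding simplification, not the Turco-Yu plume solution, and the values of n0 and K are purely illustrative; it only shows the kind of analytical/numerical comparison the paper uses to diagnose errors in microphysical codes.

```python
def n_analytic(n0, K, t):
    """Monodisperse Smoluchowski coagulation with a constant kernel K:
    dn/dt = -(K/2) * n^2  =>  n(t) = n0 / (1 + 0.5 * K * n0 * t)."""
    return n0 / (1.0 + 0.5 * K * n0 * t)

def n_euler(n0, K, t_end, dt=1e-3):
    """Explicit Euler integration of the same rate equation, for comparison."""
    n = n0
    for _ in range(int(round(t_end / dt))):
        n -= 0.5 * K * n * n * dt
    return n

n0 = 1.0e12   # initial number density, 1/m^3 (illustrative)
K = 1.0e-15   # constant coagulation kernel, m^3/s (illustrative)
t_end = 10.0  # seconds of plume age
num = n_euler(n0, K, t_end)
exact = n_analytic(n0, K, t_end)
```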
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks.
Richter, Philipp; Toledano-Ayala, Manuel
2015-09-08
Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automatized. Gaussian process regression has been applied to overcome this issue, also with auspicious results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutinization. This paper studies the Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor/outdoor- and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated, as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate.
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Philipp Richter
2015-09-01
Full Text Available Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automatized. Gaussian process regression has been applied to overcome this issue, also with auspicious results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutinization. This paper studies the Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor/outdoor- and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated, as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate.
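The model-selection point made above can be sketched with a zero-mean GP and a squared-exponential kernel: comparing the log marginal likelihood (Rasmussen and Williams, eq. 2.30) for two length scales on synthetic path-loss-like data. All data and hyperparameter values here are hypothetical; the paper's candidate models and measurements differ.

```python
import numpy as np

def sq_exp_kernel(X1, X2, ell, sigma_f):
    """Squared-exponential covariance k(x, x') = sigma_f^2 * exp(-(x-x')^2 / (2*ell^2))."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def log_marginal_likelihood(X, y, ell, sigma_f, sigma_n):
    """Zero-mean GP log marginal likelihood via a Cholesky factorization."""
    K = sq_exp_kernel(X, X, ell, sigma_f) + sigma_n ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2.0 * np.pi))

# Synthetic "RSSI vs distance" data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 40)
y = -40.0 - 20.0 * np.log10(X + 1.0) + rng.normal(0.0, 1.0, X.size)
y = y - y.mean()  # crude zero-mean assumption, as criticized in the paper

lml_short = log_marginal_likelihood(X, y, ell=0.1, sigma_f=5.0, sigma_n=1.0)
lml_long = log_marginal_likelihood(X, y, ell=3.0, sigma_f=5.0, sigma_n=1.0)
```

Here the longer length scale matches the smooth path-loss trend and attains the higher marginal likelihood, which is the kind of direct model-fit assessment the paper argues for.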
Paugam, R.; Wooster, M.; Atherton, J.; Freitas, S. R.; Schultz, M. G.; Kaiser, J. W.
2015-03-01
Biomass burning is one of relatively few natural processes that can inject globally significant quantities of gases and aerosols into the atmosphere at altitudes well above the planetary boundary layer, in some cases at heights in excess of 10 km. The "injection height" of biomass burning emissions is therefore an important parameter to understand when considering the characteristics of the smoke plumes emanating from landscape scale fires, and in particular when attempting to model their atmospheric transport. Here we further extend the formulations used within a popular 1D plume rise model, widely used for the estimation of landscape scale fire smoke plume injection height, and develop and optimise the model so that it can run with an increased set of remotely sensed observations. The model is well suited for application in atmospheric Chemistry Transport Models (CTMs) aimed at understanding smoke plume downstream impacts, and whilst a number of wildfire emission inventories are available for use in such CTMs, few include information on plume injection height. Since CTM resolutions are typically too spatially coarse to capture the vertical transport induced by the heat released from landscape scale fires, approaches to estimate the emissions injection height are typically based on parametrizations. Our extensions of the existing 1D plume rise model take into account the impact of atmospheric stability and latent heat on the plume up-draft, driving it with new information on active fire area and fire radiative power (FRP) retrieved from MODIS satellite Earth Observation (EO) data, alongside ECMWF atmospheric profile information. We extend the model by adding an equation for mass conservation and a new entrainment scheme, and optimise the values of the newly added parameters based on comparison to injection heights derived from smoke plume height retrievals made using the MISR EO sensor. Our parameter optimisation procedure is based on a twofold approach
A Single Field Inflation Model with Large Local Non-Gaussianity
Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao
2013-01-01
A detection of large local-form non-Gaussianity is widely considered sufficient to rule out all single field inflation models. This statement is based on a single field consistency condition. Despite awareness of some implicit assumptions in the derivation of this condition, and the demonstration of corresponding examples that illustrate these caveats, to date there has been no explicit and self-consistent model that can serve as a counterexample to this statement. We present such a model in this Letter.
Precise comparison of the Gaussian expansion method and the Gamow shell model
Masui, Hiroshi; Michel, Nicola; Płoszajczak, Marek
2014-01-01
We perform a detailed comparison of results of the Gamow Shell Model (GSM) and the Gaussian Expansion Method (GEM) supplemented by the complex scaling (CS) method for the same translationally-invariant cluster-orbital shell model (COSM) Hamiltonian. As a benchmark test, we calculate the ground state $0^{+}$ and the first excited state $2^{+}$ of mirror nuclei $^{6}$He and $^{6}$Be in the model space consisting of two valence nucleons in $p$-shell outside of a $^{4}$He core. We find a good overall agreement of results obtained in these two different approaches, also for many-body resonances.
Critical Behavior of Gaussian Model on X Fractal Lattices in External Magnetic Fields
Institute of Scientific and Technical Information of China (English)
LI Ying; KONG Xiang-Mu; HUANG Jia-Yin
2003-01-01
Using the renormalization group method, the critical behavior of the Gaussian model is studied in external magnetic fields on X fractal lattices embedded in two-dimensional and d-dimensional (d > 2) Euclidean spaces, respectively. Critical points and exponents are calculated. It is found that there is long-range order at finite temperature for this model, and that the critical points do not change with the space dimensionality d (or the fractal dimensionality d_f). It is also found that the critical exponents are very different from the results of the Ising model on the same lattices, and that the exponents on X lattices differ from the exact results on translationally symmetric lattices.
Novel pseudo-divergence of Gaussian mixture models based speaker clustering method
Institute of Scientific and Technical Information of China (English)
Wang Bo; Xu Yiqiong; Li Bicheng
2006-01-01
Serial structure is applied to speaker recognition to reduce algorithm delay and computational complexity. The speech is first classified into a speaker class, and the most likely speaker is then searched for inside that class. The difference between Gaussian Mixture Models (GMMs) is widely applied in speaker classification. This paper proposes a novel pseudo-divergence measure, the ratio of inter-model dispersion to intra-model dispersion, to represent the difference between GMMs and to perform speaker clustering. The GMM components (weight, mean and variance) are all involved in the dispersion. Experiments indicate that the measure represents the difference between GMMs well and improves the performance of speaker clustering.
Computing arbitrage-free yields in multi-factor Gaussian shadow-rate term structure models
Marcel A. Priebsch
2013-01-01
This paper develops a method to approximate arbitrage-free bond yields within a term structure model in which the short rate follows a Gaussian process censored at zero (a "shadow-rate model" as proposed by Black, 1995). The censoring ensures that model-implied yields are constrained to be positive, but it also introduces non-linearity that renders standard bond pricing formulas inapplicable. In particular, yields are not linear functions of the underlying state vector as they are in affine t...
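The censoring non-linearity described above can be illustrated by brute-force Monte Carlo. This is only a sketch of the shadow-rate idea of Black (1995), not the paper's approximation method, and all parameter values are illustrative: the shadow rate follows a Gaussian (Vasicek) process, the observed short rate is max(r, 0), and the resulting zero-coupon yield is non-negative even when the shadow rate starts below zero.

```python
import numpy as np

def shadow_rate_yield(r0, kappa, theta, sigma, T, n_steps=200, n_paths=20000, seed=1):
    """Monte Carlo zero-coupon yield when the *shadow* short rate follows a
    Vasicek (Gaussian) process dr = kappa*(theta - r)*dt + sigma*dW and the
    observed short rate is censored at zero, max(r, 0)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += np.maximum(r, 0.0) * dt  # discount with the censored rate
        r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    price = np.exp(-integral).mean()         # P(0, T) = E[exp(-int max(r,0) dt)]
    return -np.log(price) / T

# Shadow rate starts negative; censoring keeps the model-implied yield >= 0
y5 = shadow_rate_yield(r0=-0.01, kappa=0.3, theta=0.02, sigma=0.01, T=5.0)
```

Because of the max(·, 0) inside the expectation, the yield is no longer an affine function of the state, which is exactly the non-linearity the paper's approximation addresses.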
Energy Technology Data Exchange (ETDEWEB)
Georgopoulos, P.G.; Seinfeld, J.H.
1986-01-01
Calculations performed with the Turbulent Reacting Plume Model (TRPM) developed in Part I are compared with the experimental data of Builtjes (1981, Netherlands Organization for Applied Scientific Research, Div. of Technology for Soc., Ref. No. 81-013563) for the reaction between NO in a point source plume and ambient O/sub 3/, taking place in a wind tunnel simulating a neutral atmospheric boundary layer. The comparison shows the TRPM capable of quantitatively predicting the retardation imposed on the evolution of nonlinear plume chemistry by incomplete mixing. (authors).
Tan, Yi; Dallmann, Timothy R.; Robinson, Allen L.; Presto, Albert A.
2016-06-01
Mobile monitoring of traffic-related air pollutants was conducted in Pittsburgh, PA. The data show substantial spatial variability of particle-bound polycyclic aromatic hydrocarbons (PB-PAH) and black carbon (BC). This variability is driven in large part by pollutant plumes from high emitting vehicles (HEVs). These plumes contribute a disproportionately large fraction of the near-road exposures of PB-PAH and BC. We developed novel statistical models to describe the spatial patterns of PB-PAH and BC exposures. The models consist of two layers: a plume layer to describe the contributions of high emitting vehicles using a near-roadway kernel, and an urban-background layer that predicts the spatial pattern of other sources using land use regression. This approach leverages unique information content of highly time resolved mobile monitoring data and provides insight into source contributions. The two-layer model describes 76% of observed PB-PAH variation and 61% of BC variation. On average, HEVs contribute at least 32% of outdoor PB-PAH and 14% of BC. The transferability of the models was examined using measurements from 36 hold-out validation sites. The plume layer performed well at validation sites, but the background layer showed little transferability due to the large difference in land use between the city and outer suburbs.
Energy Technology Data Exchange (ETDEWEB)
Moellhoff, M.; Hendricks, J.; Lippert, E.; Petry, H. [Koeln Univ. (Germany). Inst. fuer Geophysik und Meteorologie; Sausen, R. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Oberpfaffenhofen (Germany). Inst. fuer Physik der Atmosphaere
1997-12-31
A box model and two different one-dimensional models are used to investigate the chemical conversion of exhaust species in the dispersing plume of a subsonic aircraft flying at cruise altitude. The effect of varying the time of day of release as well as the impact of changing dispersion time is studied, with particular attention to the aircraft-induced O{sub 3} production. Effective emission amounts for consideration in mesoscale and global models are calculated. Simulations with modified photolysis rates are performed to show the sensitivity of the photochemistry to the occurrence of cirrus clouds. (author) 8 refs.
Modeling plasma plumes generated from laser solid interactions
Wilks, Scott C.; Higginson, D. P.; Link, A. J.; Park, H.-S.; Ping, Y.; Rinderknecht, H. G.; Ross, J. S.; Orban, C.; Hua, R.
2016-10-01
Laser pulses interacting with solid targets sitting in a vacuum form the basis for a large class of High Energy Density physics experiments. The resulting hydrodynamical evolution of the target during and after this interaction can be modeled using myriad techniques. These techniques range from pure particle-in-cell (PIC) to pure radiation-hydrodynamics, and include a large number of hybrid techniques in between. The particular method employed depends predominately on laser intensity. We compare and contrast several methods relevant for a large range of laser intensities (from Iλ^2 ≈ 1×10^12 W·μm^2/cm^2 to Iλ^2 ≈ 1×10^19 W·μm^2/cm^2) and energies (from E ≈ 100 mJ to E ≈ 100 kJ). Density, temperature, and velocity profiles are benchmarked against recent experimental data. These experimental data include proton radiographs, time resolved x-ray images, and neutron yield and spectra. Methods to self-consistently handle backscatter and detailed energy deposition will also be discussed. LLNL-ABS-697767. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Nannofossils in 2011 El Hierro eruptive products reinstate plume model for Canary Islands
Zaczek, Kirsten; Troll, Valentin R.; Cachao, Mario; Ferreira, Jorge; Deegan, Frances M.; Carracedo, Juan Carlos; Soler, Vicente; Meade, Fiona C.; Burchardt, Steffi
2015-01-01
The origin and life cycle of ocean islands have been debated since the early days of Geology. In the case of the Canary archipelago, its proximity to the Atlas orogen led to initial fracture-controlled models for island genesis, while later workers cited a Miocene-Quaternary east-west age-progression to support an underlying mantle-plume. The recent discovery of submarine Cretaceous volcanic rocks near the westernmost island of El Hierro now questions this systematic age-progression within the archipelago. If a mantle-plume is indeed responsible for the Canaries, the onshore volcanic age-progression should be complemented by progressively younger pre-island sedimentary strata towards the west; however, direct age constraints for the westernmost pre-island sediments are lacking. Here we report on new age data obtained from calcareous nannofossils in sedimentary xenoliths erupted during the 2011 El Hierro events, which date the sub-island sedimentary rocks to between late Cretaceous and Pliocene in age. This age-range includes substantially younger pre-volcanic sedimentary rocks than the Jurassic to Miocene strata known from the older eastern islands and now reinstates the mantle-plume hypothesis as the most plausible explanation for Canary volcanism. The recently discovered Cretaceous submarine volcanic rocks in the region are, in turn, part of an older, fracture-related tectonic episode.
Microwave interrogation of an air plasma plume as a model system for hot spots in explosives
Kane, Ronald J.; Tringe, Joseph W.; Klunder, Gregory L.; Baluyot, Emer V.; Densmore, John M.; Converse, Mark C.
2017-01-01
The evolution of hot spots within explosives is critical to understand for predicting how detonation waves form and propagate. However, it is challenging to observe hot spots directly because they are small (~micron diameter), form quickly (much less than a microsecond), and many explosives of interest are optically opaque. Microwaves are well-suited to characterize hot spots because they readily penetrate most explosives. They also have sufficient temporal and spatial resolution to measure the coalescence of an ensemble of hot spots inside explosives. Here we employ 94 GHz microwaves to characterize the evolution of individual plasma plumes formed by laser ionization of air. We use interferometry to obtain plume diameter as a function of time. Although the plasma plumes are larger than individual hot spots in explosives, they expand rapidly and predictably, and their structure can be optically imaged. They are therefore useful model systems to establish the spatial and temporal limits of microwave interferometry (MI) for understanding more complex hot spot behavior in solid explosives.
On the radiative forcing of volcanic plumes: modelling the impact of Mount Etna in the Mediterranean
Directory of Open Access Journals (Sweden)
Pasquale Sellitto
2015-12-01
Full Text Available The impact of small to moderate volcanic eruptions on the regional to global radiative forcing and climate is still largely unknown and thought to be presently underestimated. In this work, daily average shortwave radiative forcing efficiencies at the surface (RFEdSurf), at the top of the atmosphere (RFEdTOA) and their ratio (f), for upper tropospheric volcanic plumes with different optical characterizations, are derived using the radiative transfer model UVSPEC and the LibRadtran suite. The optical parameters of the simulated aerosol layer, i.e., the Ångström coefficient (alpha), the single scattering albedo (SSA) and the asymmetry factor (g), have been varied to mimic volcanic ash (bigger and more absorbing particles), sulphate aerosols (smaller and more reflective particles) and intermediate/mixed conditions. The characterization of the plume and its vertical distribution have been set up to simulate Mount Etna, based on previous studies. The radiative forcing, and in particular the f ratio, is strongly affected by the SSA and g, and to a smaller extent by alpha, especially for sulphate-dominated plumes. The impact of the altitude and thickness of the plume on the radiative forcing, for a fixed optical characterization of the aerosol layer, has been found negligible (less than 1% for RFEdSurf, RFEdTOA and f). The simultaneous presence of boundary layer/lower tropospheric marine or dust aerosols, as expected in the Mediterranean area, modulates only slightly (up to 12 and 14% for RFEdSurf and RFEdTOA, and 3 to 4% for the f ratio) the radiative effects of the upper tropospheric volcanic layer.
Simulation of Mexico City plumes during the MIRAGE-Mex field campaign using the WRF-Chem model
Directory of Open Access Journals (Sweden)
X. Tie
2009-04-01
Full Text Available The quantification of tropospheric O_{3} production in the Mexico City outflow is a major objective of the MIRAGE-Mex field campaign. We used a regional chemistry-transport model (WRF-Chem) to predict the distribution of O_{3} and its precursors in Mexico City and the surrounding region during March 2006, and compared with in-situ aircraft measurement of O_{3}, CO, VOCs, NO_{x}, and NO_{y} concentrations. The comparison shows that the model is capable of capturing the timing/location of the measured city plumes, and the calculated variability along the flights is generally consistent with the measured results, showing a rapid enhancement of O_{3} and its precursors when city plumes are detected. However, there are some notable differences between the calculated and measured values, suggesting that, during transport from the surface of the city to the outflow plume, pollution levels are underestimated by about 0–25% during different flights. The calculated O_{3}-NO_{x}, O_{3}-CO, and O_{3}-NO_{z} correlations generally agree with the measured values, and the analysis of these correlations suggests that photochemical O_{3} production continues in the plume downwind of the city (aged plume), adding to the O_{3} already produced in the city and exported with the plume. The model is also used to quantify the contributions to OH reactivity from various compounds in the aged plume. This analysis suggests that oxygenated organics (OVOCs) have the highest OH reactivity and play important roles for the O_{3} production in the aging plume. Furthermore, O_{3} production per NO_{x} molecule consumed (O_{3} production efficiency) is more efficient in the aged plume than in the young plume near the city. The major contributor to the high O_{3} production efficiency in the aged plume is the reaction RO_{2}+NO. By
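Ozone production efficiency, as used above, is commonly estimated as the slope of the O_{3}-NO_{z} correlation in a plume. A minimal sketch on synthetic data follows; the numbers are illustrative placeholders, not MIRAGE-Mex measurements.

```python
import numpy as np

# Synthetic plume transect: O3 grows roughly linearly with NOz (= NOy - NOx),
# the oxidation products of NOx, plus measurement scatter.
rng = np.random.default_rng(42)
noz = np.linspace(1.0, 10.0, 50)                     # ppb (illustrative)
o3 = 40.0 + 6.0 * noz + rng.normal(0.0, 2.0, 50)     # ppb: background + production

# Least-squares slope of the O3-NOz correlation = ozone production efficiency
slope, intercept = np.polyfit(noz, o3, 1)
```

A larger slope in the aged plume than in the fresh plume is the signature of the higher downwind O_{3} production efficiency discussed in the abstract.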
Directory of Open Access Journals (Sweden)
Yongqiang Liu
2012-07-01
Full Text Available Smoke plume rise is critically dependent on plume updraft structure. Smoke plumes from landscape burns (forest and agricultural burns) are typically structured into "sub-plumes" or multiple-core updrafts, with the number of updraft cores depending on characteristics of the landscape, fire, fuels, and weather. The number of updraft cores determines the efficiency of vertical transport of heat and particulate matter and therefore plume rise. Daysmoke, an empirical-stochastic plume rise model designed for simulating wildland fire plumes, requires updraft core number as an input. In this study, updraft core number was gained via a cellular automata fire model applied to an aerial ignition prescribed burn conducted at Eglin AFB on 6 February 2011. Typically four updraft cores were simulated, in agreement with a photo-image of the plume showing three/four distinct sub-plumes. Other Daysmoke input variables were calculated including maximum initial updraft core diameter, updraft core vertical velocity, and relative emissions production. Daysmoke simulated a vertical tower that mushroomed 1,000 m above the mixing height. Plume rise was validated by ceilometer. Simulations with two temperature profiles found 89–93 percent of the PM_{2.5} released during the flaming phase was transported into the free atmosphere above the mixing layer. The minimal ground-level smoke concentrations were verified by a small network of particulate samplers. Implications of these results for inclusion of wildland fire smoke in air quality models are discussed.
Institute of Scientific and Technical Information of China (English)
TIAN Yu; LI Wei; ZHANG Ai-qun
2013-01-01
This paper presents a computational model for simulating a deep-sea hydrothermal plume based on a Lagrangian particle random walk algorithm. This model achieves an efficient process to calculate a numerical plume developed in a fluid-advected environment with characteristics such as significant filament intermittency and significant plume meander due to flow variation with both time and location. In particular, this model addresses both non-buoyant and buoyant features of a deep-sea hydrothermal plume in three dimensions, which significantly challenge a strategy for tracing the deep-sea hydrothermal plume and localizing its source. This paper also systematically discusses stochastic initial and boundary conditions that are critical to generate a proper numerical plume. The developed model is a powerful tool to evaluate and optimize strategies for the tracking of a deep-sea hydrothermal plume via an autonomous underwater vehicle (AUV).
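The core Lagrangian random-walk idea can be sketched in a few lines: each particle is advected by the mean current and displaced by an uncorrelated random step whose variance is set by an eddy diffusivity. All parameters below are illustrative; the paper's model additionally handles buoyancy, intermittency and time-varying flow.

```python
import numpy as np

def random_walk_plume(n_particles=5000, n_steps=100, dt=1.0,
                      u=(0.05, 0.0, 0.0), diffusivity=1e-3, seed=7):
    """Minimal Lagrangian particle random walk: mean advection by current u
    plus a Gaussian random step with std sqrt(2*K*dt) per axis (K = eddy
    diffusivity). Returns final particle positions, shape (n_particles, 3)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 3))          # all particles released at the vent
    step_std = np.sqrt(2.0 * diffusivity * dt)
    for _ in range(n_steps):
        pos += np.asarray(u) * dt + rng.normal(0.0, step_std, pos.shape)
    return pos

pos = random_walk_plume()
mean_x = pos[:, 0].mean()   # centroid drifts with the current: ~ u_x * t = 5.0 m
spread = pos[:, 1].std()    # cross-stream spread grows as sqrt(2*K*t) ~ 0.45 m
```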
Piringer, Martin; Knauder, Werner; Petz, Erwin; Schauberger, Günther
2016-09-01
Direction-dependent separation distances to avoid odour annoyance, calculated with the Gaussian Austrian Odour Dispersion Model AODM and the Lagrangian particle diffusion model LASAT at two sites, are analysed and compared. The relevant short-term peak odour concentrations are calculated with a stability-dependent peak-to-mean algorithm. The same emission and meteorological data, but model-specific atmospheric stability classes are used. The estimate of atmospheric stability is obtained from three-axis ultrasonic anemometers using the standard deviations of the three wind components and the Obukhov stability parameter. The results are demonstrated for the Austrian villages Reidling and Weissbach with very different topographical surroundings and meteorological conditions. Both the differences in the wind and stability regimes as well as the decrease of the peak-to-mean factors with distance lead to deviations in the separation distances between the two sites. The Lagrangian model, due to its model physics, generally calculates larger separation distances. For worst-case calculations necessary with environmental impact assessment studies, the use of a Lagrangian model is therefore to be preferred over that of a Gaussian model. The study and findings relate to the Austrian odour impact criteria.
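The two ingredients of such separation-distance calculations, a mean-concentration dispersion model and a distance-dependent peak-to-mean factor, can be sketched as follows. All functional forms and numbers here are illustrative placeholders, not the AODM or LASAT parametrizations.

```python
import numpy as np

def gaussian_centerline(Q, u, x, a=0.08, b=0.06):
    """Ground-level centreline concentration of a ground-source Gaussian plume,
    C = Q / (pi * u * sigma_y * sigma_z), with crude power-law sigmas."""
    sigma_y = a * x ** 0.9
    sigma_z = b * x ** 0.85
    return Q / (np.pi * u * sigma_y * sigma_z)

def peak_to_mean(x, f0=4.0, x_ref=100.0):
    """Simple distance-decaying peak-to-mean factor f = 1 + (f0-1)*exp(-x/x_ref);
    AODM instead uses a stability-dependent parametrization."""
    return 1.0 + (f0 - 1.0) * np.exp(-x / x_ref)

def separation_distance(Q, u, threshold, x_max=5000.0):
    """Smallest downwind distance beyond which the short-term peak odour
    concentration stays below the threshold."""
    x = np.linspace(10.0, x_max, 5000)
    c_peak = gaussian_centerline(Q, u, x) * peak_to_mean(x)
    below = np.where(c_peak < threshold)[0]
    return x[below[0]] if below.size else np.inf

d = separation_distance(Q=5.0, u=2.0, threshold=0.25)  # ou/s, m/s, ou/m^3
```

The decrease of the peak-to-mean factor with distance, noted in the abstract, shortens the separation distance relative to applying a constant factor.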
Directory of Open Access Journals (Sweden)
Nicolas Hengartner
2006-12-01
Full Text Available The performance of weak gaseous plume-detection methods in hyperspectral long-wave infrared imagery depends on scene-specific conditions such as the ability to properly estimate atmospheric transmission, the accuracy of estimated chemical signatures, and background clutter. This paper reviews commonly-applied physical models in the context of weak plume identification and quantification, identifies inherent error sources as well as those introduced by making simplifying assumptions, and indicates research areas.
Modelling the impact of wind stress and river discharge on Danshuei River plume
Liu, W.-C.; Chen, W.-B.; Cheng, R.T.; Hsu, M.-H.
2008-01-01
A three-dimensional, time-dependent, baroclinic, hydrodynamic and salinity model, UnTRIM, was implemented and applied to the Danshuei River estuarine system and adjacent coastal sea in northern Taiwan. The model forcing functions consist of tidal elevations along the open boundaries and freshwater inflows from the main stream and major tributaries of the Danshuei River estuarine system. The bottom friction coefficient was adjusted to achieve model calibration and verification in simulations of barotropic and baroclinic flows. The turbulent diffusivities were ascertained through comparison of simulated salinity time series with observations. The model simulation results are in qualitative agreement with the available field data. The validated model was then used to investigate the influence of wind stress and freshwater discharge on the Danshuei River plume. In the absence of wind stress, anticyclonic circulation prevails along the north to west coast. The model results reveal that when winds are downwelling-favorable, the surface low-salinity waters are flushed out and move toward the southwest coast. Conversely, large amounts of low-salinity water are flushed out of the Danshuei River mouth during upwelling-favorable winds, as the buoyancy-driven circulation is reversed. Wind stress and freshwater discharge are shown to control the plume structure. © 2007 Elsevier Inc. All rights reserved.
A model for wet aggregation of ash particles in volcanic plumes and clouds: 2. Model application
Folch, A.; Costa, A.; Durant, A.; Macedonio, G.
2010-09-01
The occurrence of particle aggregation has a dramatic effect on the transport and sedimentation of volcanic ash. The aggregation process is complex and can occur under different conditions and in multiple regions of the plume and in the ash cloud. In the companion paper, Costa et al. develop an aggregation model based on a fractal relationship to describe the rate at which particles are incorporated into ash aggregates. The model includes the effects of both magmatic and atmospheric water present in the volcanic cloud and demonstrates that the rate of aggregation depends on the characteristics of the initial particle size distribution. The aggregation model includes two parameters, the fractal exponent Df, which describes the efficiency of the aggregation process, and the aggregate settling velocity correction factor ψe, which influences the distance at which distal mass deposition maxima form. Both parameters are adjusted using features of the observed deposits. Here this aggregation model is implemented in the FALL3D volcanic ash transport model and applied to the 18 May 1980 Mount St. Helens and the 17-18 September 1992 Crater Peak eruptions. For both eruptions, the optimized values for Df (2.96-3.00) and ψe (0.27-0.33) indicate that the ash aggregates had a bulk density of 700-800 kg m-3. The model provides a higher degree of agreement than previous fully empirical aggregation models and successfully reproduces the depositional characteristics of the deposits investigated over a large range of scales, including the position and thickness of the secondary maxima.
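The effect of the inferred low aggregate bulk density (700-800 kg/m³) on sedimentation can be illustrated with a simple Stokes settling estimate. This is a sketch only: FALL3D combines a settling law with the correction factor ψe, which is not reproduced here, and Stokes drag is strictly valid only at low particle Reynolds number.

```python
def stokes_settling_velocity(d, rho_p, rho_a=1.0, mu=1.8e-5, g=9.81):
    """Stokes terminal settling velocity of a small sphere in air:
    v = g * d^2 * (rho_p - rho_a) / (18 * mu). Valid for Re << 1."""
    return g * d ** 2 * (rho_p - rho_a) / (18.0 * mu)

# A 50-micron porous aggregate at the paper's bulk density settles far more
# slowly than a compact particle of the same size (rho ~ 2500 kg/m^3 assumed
# here for illustration).
v_aggregate = stokes_settling_velocity(d=50e-6, rho_p=750.0)
v_compact = stokes_settling_velocity(d=50e-6, rho_p=2500.0)
```

The roughly threefold reduction in settling speed is one mechanism by which aggregation shifts mass deposition maxima, although wet aggregates are also much larger than their constituent particles, which works in the opposite direction.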
Content-adaptive pentary steganography using the multivariate generalized Gaussian cover model
Sedighi, Vahid; Fridrich, Jessica; Cogranne, Rémi
2015-03-01
The vast majority of steganographic schemes for digital images stored in the raster format limit the amplitude of embedding changes to the smallest possible value. In this paper, we investigate the possibility to further improve the empirical security by allowing the embedding changes in highly textured areas to have a larger amplitude and thus embedding there a larger payload. Our approach is entirely model driven in the sense that the probabilities with which the cover pixels should be changed by a certain amount are derived from the cover model to minimize the power of an optimal statistical test. The embedding consists of two steps. First, the sender estimates the cover model parameters, the pixel variances, when modeling the pixels as a sequence of independent but not identically distributed generalized Gaussian random variables. Then, the embedding change probabilities for changing each pixel by 1 or 2, which can be transformed to costs for practical embedding using syndrome-trellis codes, are computed by solving a pair of non-linear algebraic equations. Using rich models and selection-channel-aware features, we compare the security of our scheme based on the generalized Gaussian model with pentary versions of two popular embedding algorithms: HILL and S-UNIWARD.
Recursive Gaussian Process Regression Model for Adaptive Quality Monitoring in Batch Processes
Directory of Open Access Journals (Sweden)
Le Zhou
2015-01-01
Full Text Available In chemical batch processes with slow responses and a long duration, it is time-consuming and expensive to obtain sufficient normal data for statistical analysis. With the persistent accumulation of the newly evolving data, the modelling becomes adequate gradually and the subsequent batches will change slightly owing to the slow time-varying behavior. To efficiently make use of the small amount of initial data and the newly evolving data sets, an adaptive monitoring scheme based on the recursive Gaussian process (RGP) model is designed in this paper. Based on the initial data, a Gaussian process model and the corresponding SPE statistic are constructed at first. When the new batches of data are included, a strategy based on the RGP model is used to choose the proper data for model updating. The performance of the proposed method is finally demonstrated by a penicillin fermentation batch process and the result indicates that the proposed monitoring scheme is effective for adaptive modelling and online monitoring.
Wind-Forced Baroclinic Beta-Plumes
Belmadani, A.; Maximenko, N. A.; Melnichenko, O.; Schneider, N.; Di Lorenzo, E.
2011-12-01
A planetary beta-plume is a classical example of oceanic circulation induced by a localized vorticity source or sink that allows an analytical description in simplistic cases. Its barotropic structure is a zonally-elongated, gyre-like cell governed by the Sverdrup circulation on the beta-plane. The dominant zonal currents, found west of the source/sink, are often referred to as zonal jets. This simple picture describes the depth-integrated flow. Previous studies have investigated beta-plumes in a reduced-gravity framework or using other simple models with a small number of vertical layers, thereby lacking representation of the vertical structure. In addition, most previous studies use a purely linear regime without considering the role of eddies. However, these jets are often associated with strong lateral shear that makes them unstable under increased forcing. The circulation in such a nonlinear regime may involve eddy-mean flow interactions, which modify the time-averaged circulation. Here, the baroclinic structures of linear and nonlinear wind-forced beta-plumes are studied using a continuously-stratified, primitive equation, eddy-permitting ocean model (ROMS). The model is configured in an idealized rectangular domain for the subtropical ocean with a flat bottom. The surface wind forcing is a steady anticyclonic Gaussian wind vortex, which provides a localized vorticity source in the center of the domain. The associated wind stress curl and Ekman pumping comprise downwelling in the vortex center surrounded by a ring of weaker upwelling. Under weak forcing, the simulated steady-state circulation corresponds well with a theoretical linear beta-plume. While its depth-integrated transport exhibits a set of zonal jets, consistent with Sverdrup theory, the baroclinic structure of the plume is remarkably complex. Relatively fast westward decay of the surface currents occurs simultaneously with the deepening of the lower boundary of the plume. This deepening suggests
Measuring the Mass of Kepler-78b Using a Gaussian Process Model
Grunblatt, Samuel Kai; Howard, Andrew; Haywood, Raphaëlle
2015-01-01
Kepler-78b is a transiting planet that is 1.2 times the size of Earth and orbits a young K dwarf every 8 hours. Howard et al. (2013) and Pepe et al. (2013) independently reported the mass of Kepler-78b based on radial velocity measurements using the HIRES and HARPS-N spectrographs, respectively. In this study, a nonparametric model of the stellar activity observed in radial velocity measurements is made using Gaussian process regression, a novel technique in the field of radial velocity analysis, allowing the planetary Doppler signal to be modeled more accurately. By fitting the stellar activity with various Gaussian process regression models, we find a more precise measurement of the planet's Doppler amplitude. We identify a superior Gaussian process model, and reanalyze both radial velocity datasets acquired by Howard et al. (2013) and Pepe et al. (2013) with this new technique. The Doppler amplitude of Kepler-78b is measured to be 1.92 ± 0.25 m s⁻¹, which corresponds to a mass of 1.93 ± 0.27 M⊕, a 2.5-sigma improvement on the measurement of Howard et al. (2013). This corresponds to a density of 6.1 (+1.9/−1.4) g cm⁻³, and an iron mass fraction of 0.32 ± 0.26, assuming a two-component rock-iron composition. This is consistent with an Earth-like composition, with uncertainties ranging from Moon-like to Mercury-like. Better understanding of the composition of Kepler-78b is an integral part of understanding rocky planet formation.
Directory of Open Access Journals (Sweden)
Erick J Canales-Rodríguez
Full Text Available Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.
Canales-Rodríguez, Erick J; Daducci, Alessandro; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond
2015-01-01
Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.
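Both dRL-SD and RUMBA-SD are built around the multiplicative Richardson-Lucy update. A generic sketch of that core iteration for a nonnegative linear model b ≈ Ax (the Rician/noncentral-Chi likelihood corrections, TV regularization, and the fiber-orientation dictionary of the actual methods are omitted):

```python
import numpy as np

def richardson_lucy(b, A, n_iter=300):
    """Multiplicative RL update x <- x * A^T(b / Ax) / A^T 1,
    which preserves nonnegativity of the estimate x."""
    x = np.full(A.shape[1], b.mean())      # positive flat start
    At_one = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        x = x * (A.T @ (b / (A @ x))) / At_one
    return x
```

For Gaussian noise this fixed point coincides with the dRL-SD rationale; the Rician/noncentral-Chi variants replace the ratio term with the corresponding likelihood gradient.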
Druken, K. A.; Kincaid, C. R.; Griffiths, R. W.
2009-12-01
We present results from laboratory modeling addressing the question of whether a plume is required to reconcile the existing data sets of the Cascade subduction system in the Northwest U.S. Three-dimensional analog models are used to map the spatial and temporal patterns of subduction-induced upwelling associated with decompression melting. A series of experiments with varied combinations of down-dip, rollback and steepening plate motions, as well as extension in the overriding plate, were run with particle tracking techniques to focus on vertical velocities (e.g. favorable to decompression melting) in the mantle wedge. An overriding plate with varied depth is also incorporated into the model in order to more accurately approximate the lithosphere structure of the Northwest U.S. Glucose syrup, with a temperature-dependent viscosity, and a phenolic plate were used to model the upper mantle and subducting plate, respectively. Hydraulic pistons control longitudinal, translational and steepening motions of the slab as a simplified kinematic approach to mimic dynamic experiments. Results show that the strongest vertical velocities occur in response to the onset of trench retreat and extension of the overriding plate, independent of the lithospheric “bottom topography”, with the largest occurring when there is an asymmetric style of extension. Spatial and temporal melt patterns mapped from these upwelling events, in addition to experiments with a buoyant plume source, are compared with Northwest U.S. volcanism over the last 20 Ma. Preliminary results show that non-plume melt patterns initially follow a trench-parallel (north/south) orientation, which is progressively distorted trench-normal (east/west) with continued rollback subduction.
On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization
Energy Technology Data Exchange (ETDEWEB)
Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang
2015-02-01
The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of the BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to the linear model of coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and to invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with existing multiple-BTGP and BTMGP approaches in test cases and in a multiphase flow computer experiment for a full-scale regenerator of a carbon capture unit. The BTMGP with the LMC cross-covariance predicted the computer experiments better than the existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. For computer experiments we also develop an adaptive sampling strategy for the BTMGP with the LMC cross-covariance function.
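The LMC cross-covariance has the generic form K = Σ_j B_j ⊗ k_j(X, X), a sum of Kronecker products of positive semi-definite coregionalization matrices with scalar kernels. A minimal sketch of the construction (squared-exponential kernels and hand-picked B_j for illustration; the treed partitioning and MCMC machinery are omitted):

```python
import numpy as np

def rbf(X, ell=1.0):
    # squared-exponential kernel on 1-D inputs
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def lmc_covariance(X, Bs, ells):
    """K = sum_j kron(B_j, k_j(X, X)); each B_j must be PSD so that
    the full cross-covariance K is PSD."""
    return sum(np.kron(B, rbf(X, ell)) for B, ell in zip(Bs, ells))
```

Each B_j controls how strongly the latent process j is shared across outputs; a rank-1 B_j = a aᵀ corresponds to one latent function loaded onto the outputs with weights a.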
A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait
Directory of Open Access Journals (Sweden)
Damgaard Lars
2005-12-01
Full Text Available With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density on the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.
A bivariate quantitative genetic model for a linear Gaussian trait and a survival trait.
Damgaard, Lars Holm; Korsgaard, Inge Riis
2006-01-01
With the increasing use of survival models in animal breeding to address the genetic aspects of mainly longevity of livestock but also disease traits, the need for methods to infer genetic correlations and to do multivariate evaluations of survival traits and other types of traits has become increasingly important. In this study we derived and implemented a bivariate quantitative genetic model for a linear Gaussian and a survival trait that are genetically and environmentally correlated. For the survival trait, we considered the Weibull log-normal animal frailty model. A Bayesian approach using Gibbs sampling was adopted. Model parameters were inferred from their marginal posterior distributions. The required fully conditional posterior distributions were derived and issues on implementation are discussed. The two Weibull baseline parameters were updated jointly using a Metropolis-Hastings step. The remaining model parameters with non-normalized fully conditional distributions were updated univariately using adaptive rejection sampling. Simulation results showed that the estimated marginal posterior distributions covered well and placed high density on the true parameter values used in the simulation of data. In conclusion, the proposed method allows inferring additive genetic and environmental correlations, and doing multivariate genetic evaluation of a linear Gaussian trait and a survival trait.
Germs, W. Chr.; van der Holst, J. J. M.; van Mensfoort, S. L. M.; Bobbert, P. A.; Coehoorn, R.
2011-10-01
The charge-carrier mobility in organic semiconductors is often studied using non-steady-state experiments. However, energetic disorder can severely hamper the analysis due to the occurrence of a strong time dependence of the mobility caused by carrier relaxation. The multiple-trapping model is known to provide an accurate description of this effect. However, the value of the conduction level energy and the hopping attempt rate, which enter the model as free parameters, are not a priori known for a given material. We show how for the case of a Gaussian density of states both parameters can be deduced from the parameter values used to describe the measured dc current-voltage characteristics within the framework of the extended Gaussian disorder model. The approach is validated using three-dimensional Monte Carlo modeling. In the analysis, the charge-density dependence of the time-dependent mobility is included. The model is shown to successfully predict the low-frequency differential capacitance of sandwich-type devices based on a polyfluorene copolymer.
Steinberger, B. M.; Gassmoeller, R.; Mulyukova, E.
2012-12-01
We present geodynamic models featuring mantle plumes that are almost exclusively created at the margins of large thermo-chemical piles in the lowermost mantle. The models are based on global plate reconstructions since 300 Ma. Sinking subducted slabs not only push a heavy chemical layer ahead, such that dome-shaped structures form, but also push the thermal boundary layer (TBL) toward the chemical domes. At the steep edges it is forced upwards and begins to rise — in the lower part of the mantle as sheets, which then split into individual plumes higher in the mantle. The models explain why Large Igneous Provinces - commonly assumed to be caused by plumes forming in the TBL above the core-mantle boundary (CMB) - and kimberlites during the last few hundred Myr erupted mostly above the margins of the African and Pacific Large Low Shear Velocity Provinces (LLSVPs) of the lowermost mantle, which are probably chemically distinct from and heavier than the overlying mantle. Computations are done with two different codes, one based on spherical harmonic expansion, and CITCOM-S. The latter is combined with a self-consistent thermodynamic material model for basalt, harzburgite, and peridotite, which is used to derive a temperature- and pressure-dependent database for parameters like density, thermal expansivity and specific heat. In terms of number and distribution of plumes, results are similar in both cases, but in the latter model, plume conduits are narrower, due to consideration of realistic lateral - in addition to radial - viscosity variations. For the latter case, we quantitatively compare the computed plume locations with actual hotspots and find that the good agreement is very unlikely (probability geometry, we also show results obtained with a 2-D finite element code. These results allow us to assess how much the computed long-term stability of the piles is affected by numerical diffusion. We have also conducted a systematic investigation, which configurations
Chen, Yunjie; Zhan, Tianming; Zhang, Ji; Wang, Hongyuan
2016-01-01
We propose a novel segmentation method based on regional and nonlocal information to overcome the impact of image intensity inhomogeneities and noise in human brain magnetic resonance images. With consideration of the spatial distribution of different tissues in brain images, our method does not need preestimation or precorrection procedures for intensity inhomogeneities and noise. A nonlocal information based Gaussian mixture model (NGMM) is proposed to reduce the effect of noise. To reduce the effect of intensity inhomogeneity, the multigrid nonlocal Gaussian mixture model (MNGMM) is proposed to segment brain MR images in each nonoverlapping multigrid generated by using a new multigrid generation method. Therefore, the proposed model can simultaneously overcome the impact of noise and intensity inhomogeneity and automatically classify 2D and 3D MR data into tissues of white matter, gray matter, and cerebral spinal fluid. To maintain the statistical reliability and spatial continuity of the segmentation, a fusion strategy is adopted to integrate the clustering results from different grids. Experiments on synthetic and clinical brain MR images demonstrate the superior performance of the proposed model in comparison with several state-of-the-art algorithms.
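Stripped of the nonlocal and multigrid machinery, the core of any GMM tissue model is a plain EM fit to voxel intensities. A 1-D, three-component sketch (component count and initialization are illustrative; NGMM/MNGMM add the noise- and inhomogeneity-handling terms on top of this):

```python
import numpy as np

def em_gmm_1d(x, K=3, n_iter=100):
    """Plain EM for a 1-D K-component GMM on voxel intensities
    (e.g. CSF / gray matter / white matter)."""
    mu = np.quantile(x, np.linspace(0.15, 0.85, K))   # spread-out init
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] (log-domain for stability)
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-9
    return pi, mu, var, r.argmax(axis=1)
```

The hard labels from `argmax` correspond to the maximum-responsibility tissue assignment before any spatial fusion step.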
Application of Gaussian moment method to a gene autoregulation model of rational vector field
Kang, Yan-Mei; Chen, Xi
2016-07-01
We take a lambda expression autoregulation model driven by multiplicative and additive noises as example to extend the Gaussian moment method from nonlinear stochastic systems of polynomial vector field to noisy biochemical systems of rational polynomial vector field. As a direct application of the extended method, we also disclose the phenomenon of stochastic resonance. It is found that the transcription rate can inhibit the stochastic resonant effect, but the degradation rate may enhance the phenomenon. These observations should be helpful in understanding the functional role of noise in gene autoregulation.
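For a scalar Langevin model dx = f(x)dt + √(2D)dW with rational drift f, the Gaussian moment method closes the mean/variance ODEs by taking expectations under N(μ, σ²). A sketch in which those expectations are evaluated by Gauss-Hermite quadrature rather than polynomial moment identities (the autoregulation model itself and its rates are not reproduced here; this is a generic closure):

```python
import numpy as np

# probabilists' Gauss-Hermite nodes/weights: E[g(Z)] for Z ~ N(0, 1)
gh_x, gh_w = np.polynomial.hermite_e.hermegauss(30)

def gauss_expect(g, mu, s2):
    """E[g(X)] for X ~ N(mu, s2) via Gauss-Hermite quadrature."""
    return np.sum(gh_w * g(mu + np.sqrt(s2) * gh_x)) / np.sqrt(2 * np.pi)

def moment_rhs(mu, s2, f, D):
    """Gaussian closure: dmu/dt = E[f(x)],
    ds2/dt = 2 E[(x - mu) f(x)] + 2 D."""
    dmu = gauss_expect(f, mu, s2)
    ds2 = 2 * gauss_expect(lambda x: (x - mu) * f(x), mu, s2) + 2 * D
    return dmu, ds2
```

For a rational f (as in the gene model), the quadrature handles the non-polynomial expectations that pure moment expansion cannot close exactly.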
Institute of Scientific and Technical Information of China (English)
Jixiong Pu
2006-01-01
The propagation of polychromatic electromagnetic Gaussian Schell-model (EGSM) beams in free space is investigated. It is shown that the spectral degree of polarization, spectral degree of coherence, and normalized spectrum generally change on propagation. The conditions for keeping the spectral invariance and the polarization invariance of polychromatic EGSM beams are derived respectively. The results indicate that the constraints on the parameters of an EGSM source to keep polarization invariance on propagation are more rigorous than those to keep invariance of the normalized spectrum.
Observation of nonspecular effects for Gaussian Schell-model light beams
Merano, Michele; Mistura, Giampaolo
2012-01-01
We investigate experimentally the role of spatial coherence on optical beam shifts. This topic has been the subject of recent theoretical debate. We consider Gaussian Schell-model beams, with different spatial degrees of coherence, reflected at an air-glass interface. We prove that the angular Goos-Hänchen and the angular Imbert-Fedorov effects are affected by the spatial degree of coherence of the incident beam, whereas the spatial Goos-Hänchen effect does not depend on incoherence. Our data unambiguously resolve the theoretical debate in favour of one specific theory.
Chen, Chunyi; Yang, Huamin; Lou, Yan; Tong, Shoufeng
2011-08-01
Novel analytical expressions for the cross-spectral density function of a Gaussian Schell-model pulsed (GSMP) beam propagating through atmospheric turbulence are derived. Based on the cross-spectral density function, the average spectral density and the spectral degree of coherence of a GSMP beam in atmospheric turbulence are in turn examined. The dependence of the spectral degree of coherence on the turbulence strength measured by the atmospheric spatial coherence length is calculated numerically and analyzed in depth. The results obtained are useful for applications involving spatially and spectrally partially coherent pulsed beams propagating through atmospheric turbulence.
Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery
Institute of Scientific and Technical Information of China (English)
(author not listed)
2010-01-01
Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show that the proposed method can achieve high classification accuracy with a small amount of labeled data. However, for the same accuracy, supervised classification methods such as Support Vector Machines and Object-Oriented Classification must be provided with much more labeled data.
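The essence of the approach — class-conditional Gaussian models whose parameters are refined by unlabeled data through EM, with the labeled points' responsibilities clamped — can be sketched as follows (one spherical Gaussian per class instead of a full GMM, purely for brevity; all names are illustrative):

```python
import numpy as np

def ssc_gaussian(Xl, yl, Xu, K, n_iter=50):
    """Semi-supervised EM: labeled responsibilities stay clamped to the
    known labels while unlabeled points contribute soft responsibilities."""
    X = np.vstack([Xl, Xu])
    n_l, d = Xl.shape
    mu = np.array([Xl[yl == k].mean(axis=0) for k in range(K)])
    var = np.array([Xl[yl == k].var() + 1e-6 for k in range(K)])
    pi = np.full(K, 1.0 / K)
    onehot = np.eye(K)[yl]
    for _ in range(n_iter):
        # E-step over all points
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        logp = -0.5 * d2 / var - 0.5 * d * np.log(var) + np.log(pi)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        r[:n_l] = onehot                      # clamp labeled points
        # M-step: weighted means/variances/mixing proportions
        Nk = r.sum(axis=0)
        pi = Nk / len(X)
        mu = (r[:, :, None] * X[:, None, :]).sum(axis=0) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (Nk * d) + 1e-6
    return r[n_l:].argmax(axis=1)             # labels for unlabeled data
```

Replacing the single Gaussian per class with a multi-component GMM recovers the method described in the abstract.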
Propagation of specular and anti-specular Gaussian Schell-model beams in oceanic turbulence
Zhou, Zhaotao; Guo, Mengwen; Zhao, Daomu
2017-01-01
On the basis of the extended Huygens-Fresnel principle and the unified theory of coherence and polarization of light, we investigate the propagation properties of the specular and anti-specular Gaussian Schell-model (GSM) beams through oceanic turbulence. It is shown that the specularity of specular GSM beams and the anti-specularity of anti-specular GSM beams are destroyed on propagation in oceanic turbulence. The spectral density and the spectral degree of coherence are also studied in detail. The results may be helpful for underwater communication.
Directory of Open Access Journals (Sweden)
Nsiri Benayad
2010-01-01
Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimization of the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.
Zhu; Yang
2000-06-01
A generalized formulation of dynamical real-space renormalization appropriate for arbitrary spin systems is suggested. The alternative version replaces single-spin-flip Glauber dynamics with single-spin transition dynamics. As an application, in this paper we mainly investigate the critical slowing down of the Gaussian spin model on three fractal lattices, including nonbranching, branching, and multibranching Koch curves. The dynamical critical exponent z is calculated for these lattices using an exact decimation renormalization transformation under the assumption of a magnetic-like perturbation, and the universal result z = 1/ν is found.
Mao, Haidan; Du, Xinyue; Chen, Linfei; Zhao, Daomu
2011-06-01
On the basis of the fact that a hard-edged aperture function can be expanded as a finite sum of complex Gaussian functions with different weighting coefficients, we obtain the analytical formula for the propagation of a broadband Gaussian Schell-model (BGSM) beam through an apertured fractional Fourier transformation (AFrFT) system. It is shown by numerical examples that the intensity distribution in the plane of a small fractional order is obviously influenced by the bandwidth when BGSM beams propagate through the AFrFT system. Further extensions are also pointed out.
Modelling of river plume dynamics in Öre estuary (Baltic Sea) with Telemac-3D hydrodynamic model
Sokolov, Alexander
2016-04-01
The main property of river plumes is their buoyancy: fresh water discharged by rivers is less dense than the receiving saline waters. To study the processes of plume formation when a river discharges into a brackish estuary where salinity is low (3.5 - 5 psu), a three-dimensional hydrodynamic model was applied to the Öre estuary in the Baltic Sea. This estuary is a small fjord-like bay in the northern part of the Baltic Sea. The size of the bay is about 8 by 8 km, with a maximum depth of 35 metres. The river Öre has a small average freshwater discharge of 35 m³/s, but in spring during snowmelt the discharge can be many times higher. For example, in April 2015 the discharge increased from 8 m³/s to 160 m³/s in 18 days. To study river plume dynamics, the finite element based three-dimensional baroclinic model TELEMAC-3D is used. The TELEMAC modelling suite is developed by the National Laboratory of Hydraulics and Environment (LNHE) of Electricité de France (EDF). The modelling domain was approximated by an unstructured mesh with element sizes varying from 50 to 500 m. In the vertical direction, sigma coordinates with 20 layers were used. Open sea boundary conditions were obtained from the Baltic Sea model HIROMB-BOOS using the COPERNICUS marine environment monitoring service. Comparison of modelling results with observations obtained by the BONUS COCOA project's field campaign in the Öre estuary in 2015 shows that the model plausibly simulates river plume dynamics. Modelling of the age of freshwater is also discussed. This work resulted from the BONUS COCOA project, supported by BONUS (Art 185), funded jointly by the EU and the Swedish Research Council Formas.
Directory of Open Access Journals (Sweden)
M. de Reus
2005-01-01
Full Text Available An intensive field measurement campaign was performed in July/August 2002 at the Global Atmospheric Watch station Izaña on Tenerife to study the interaction of mineral dust aerosol and tropospheric chemistry (MINATROC). A dense Saharan dust plume, with aerosol masses exceeding 500 µg m⁻³, persisted for three days. During this dust event strongly reduced mixing ratios of ROx (HO2, CH3O2 and higher organic peroxy radicals), H2O2, NOx (NO and NO2) and O3 were observed. A chemistry box model, constrained by the measurements, has been used to study gas-phase and heterogeneous chemistry. It appeared to be difficult to reproduce the observed HCHO mixing ratios with the model, possibly related to the representation of precursor gas concentrations or the absence of dry deposition. The model calculations indicate that the reduced H2O2 mixing ratios in the dust plume can be explained by including the heterogeneous removal reaction of HO2 with an uptake coefficient of 0.2, or by assuming heterogeneous removal of H2O2 with an accommodation coefficient of 5×10⁻⁴. However, these heterogeneous reactions cannot explain the low ROx mixing ratios observed during the dust event. Whereas a mean daytime net ozone production rate (NOP) of 1.06 ppbv/hr occurred throughout the campaign, the reduced ROx and NOx mixing ratios in the Saharan dust plume contributed to a reduced NOP of 0.14-0.33 ppbv/hr, which likely explains the relatively low ozone mixing ratios observed during this event.
Using Gaussian Processes to Model Noise in Eclipsing Binary Light Curves
Prsa, Andrej; Hambleton, Kelly M.
2017-01-01
The most precise data we have at hand arguably come from NASA's Kepler mission, for which there is no good flux calibration available since it was designed to measure relative flux changes down to the ~20 ppm level. Instrumental artifacts thus abound in the data, and they vary with the module, location on the CCD, target brightness, electronic cross-talk, etc. In addition, Kepler's near-uninterrupted mode of observation reveals astrophysical signals and transient phenomena (e.g. spots, flares, protuberances, pulsations, magnetic field features) that are not accounted for in the models. These "nuisance" signals, along with instrumental artifacts, are considered noise when modeling light curves; this noise is highly correlated and cannot be considered Poissonian or Gaussian. Detrending non-white noise from light curve data has been an ongoing challenge in modeling eclipsing binary star and exoplanet transit light curves. Here we present an approach using Gaussian Processes (GP) to model noise as part of the overall likelihood function. The likelihood function consists of the eclipsing binary light curve generator PHOEBE, a correlated-noise model using GP, and a Poissonian (shot) noise term attributed to the actual stochastic component of the entire noise model. We consider the GP parameters and the Poissonian noise amplitude as free parameters that are sampled within the likelihood function, so the end result is the posterior probability not only for the eclipsing binary model parameters, but for the noise parameters as well. We show that the posteriors of the principal parameters are significantly more robust when noise is modeled rigorously compared to modeling detrended data with an eclipsing binary model alone. This work has been funded by NSF grant #1517460.
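With a GP noise model, the data likelihood collapses to a single multivariate Gaussian whose covariance is the sum of a correlated kernel and a white (shot) term, K = K_GP + σ²I. A minimal sketch of evaluating that log-likelihood on model residuals (a squared-exponential kernel is chosen purely for illustration; in the actual pipeline PHOEBE supplies the residuals and the kernel choice differs):

```python
import numpy as np

def gp_loglike(t, resid, amp, ell, sigma):
    """log N(resid | 0, K) with K = SE kernel + white-noise term,
    evaluated stably via a Cholesky factorization."""
    K = amp**2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)
    K += sigma**2 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, resid))
    return (-0.5 * resid @ alpha
            - np.log(np.diag(L)).sum()          # -0.5 * log|K|
            - 0.5 * len(t) * np.log(2 * np.pi))
```

Sampling `amp`, `ell`, and `sigma` alongside the binary-model parameters yields joint posteriors for both the physical and the noise parameters.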
Hernández-Espriú, Antonio; Martínez-Santos, Pedro; Sánchez-León, Emilio; Marín, Luis E.
Light non-aqueous phase liquids (LNAPL) represent one of the most serious problems in aquifers contaminated with petroleum hydrocarbon liquids. To design an appropriate remediation strategy it is essential to understand the behavior of the plume. The aim of this paper is threefold: (1) to characterize the fluid distribution of an LNAPL plume detected in a volcanic low-conductivity aquifer (∼0.4 m/day from slug test interpretation), (2) to simulate the recovery processes of the free-product contamination and (3) to evaluate the primary recovery efficiency of the following alternatives: skimming, dual-phase extraction, Bioslurping and multi-phase extraction wells. The API/Charbeneau analytical model was used to investigate the recovery feasibility based on the geological properties and hydrogeological conditions with a multi-phase (water, air, LNAPL) transport approach in the vadose zone. The modeling performed in this research, in terms of LNAPL distribution in the subsurface, shows that oil saturation is 7% at the air-oil interface, with a maximum value of 70% in the capillary fringe. Equilibrium between water and LNAPL phases is reached at a depth of 1.80 m from the air-oil interface. On the other hand, the LNAPL recovery model results suggest a remarkable enhancement of the free-product recovery when simultaneous extra-phase extraction was simulated from wells, in addition to the LNAPL lens. Recovery efficiencies were 27%, 65%, 66% and 67% for skimming, dual-phase extraction, Bioslurping and multi-phase extraction, respectively. During a 3-year simulation, skimmer wells and multi-phase extraction showed the lowest and highest LNAPL recovery rates, with expected values from 207 to 163 and 2305 to 707 l-LNAPL/day, respectively. At a field level we are proposing a well distribution arrangement that alternates pairs of dual-phase well-Bioslurping well. This not only improves the recovery of the free-product plume, but also pumps the dissolved plume and enhances in
Ben Alaya, M. A.; Chebana, F.; Ouarda, T. B. M. J.
2016-09-01
Statistical downscaling techniques are required to refine atmosphere-ocean global climate data and provide reliable meteorological information, such as realistic temporal variability and realistic relationships between sites and variables, in a changing climate. To this end, the present paper introduces a modular structure combining two statistical tools of increasing interest in recent years: (1) the Gaussian copula and (2) quantile regression. The quantile regression tool is employed to specify the entire conditional distribution of the downscaled variables and to address the limitations of traditional regression-based approaches, whereas the Gaussian copula is used to describe and preserve the dependence between both variables and sites. A case study based on precipitation and maximum and minimum temperatures from the province of Quebec, Canada, is used to evaluate the performance of the proposed model. The results suggest that this approach is capable of generating series with realistic correlation structures and temporal variability. Furthermore, the proposed model performed better than a classical multisite multivariate statistical downscaling model for most evaluation criteria.
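As a rough illustration of the copula half of this construction, the sketch below builds a two-site Gaussian copula over empirical marginals. All data are synthetic (correlated log-normal "precipitation"), and the empirical quantile step merely stands in for the paper's quantile regression:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)

# Toy two-site precipitation record (illustrative; the paper uses Quebec data)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
obs = np.exp(z)                             # correlated log-normal "observations"

# 1) Marginals: map each site's record to uniforms via ranks (PIT)
u = (rankdata(obs, axis=0) - 0.5) / len(obs)

# 2) Dependence: Gaussian copula = correlation of the normal scores
scores = norm.ppf(u)
R = np.corrcoef(scores, rowvar=False)

# 3) Simulate new multisite fields: correlated normals -> uniforms ->
#    back through each site's empirical quantile function
sim_z = rng.multivariate_normal(np.zeros(2), R, size=5000)
sim_u = norm.cdf(sim_z)
sim = np.column_stack([np.quantile(obs[:, j], sim_u[:, j]) for j in range(2)])

print(np.round(R[0, 1], 2))  # inter-site dependence recovered in the scores
```

The simulated fields inherit both the site-wise marginals and the inter-site dependence, which is the property the paper evaluates against a classical multisite downscaling model.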
Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model
Liu, Bo
2016-02-03
An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes' rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performance. In this respect, the best results are obtained with the proposed adaptive clustering scheme.
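A one-dimensional toy version of the two-part update can be sketched as follows. The forecast ensemble, observation and fixed mixture size K = 2 are all invented; the fixed K stands in for the clustering strategies the paper actually compares:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Forecast particles from a bimodal prior (stand-in for model forecasts)
particles = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 0.7, 500)])

# 1) Cluster the particles into a Gaussian mixture (fixed size K = 2 here)
gm = GaussianMixture(n_components=2, random_state=0).fit(particles[:, None])
labels = gm.predict(particles[:, None])
w, m, P = gm.weights_.copy(), gm.means_.ravel(), gm.covariances_.ravel()

# 2) Scalar observation y = x + v, v ~ N(0, Rv)
y, Rv = 2.8, 0.25

# Kalman filter-like update of each particle via its component's gain,
# and a particle filter-like update of the mixture weights
for j in range(2):
    K = P[j] / (P[j] + Rv)
    idx = labels == j
    particles[idx] += K * (y - particles[idx] + rng.normal(0, np.sqrt(Rv), idx.sum()))
    w[j] *= np.exp(-0.5 * (y - m[j]) ** 2 / (P[j] + Rv)) / np.sqrt(P[j] + Rv)
w /= w.sum()

posterior_mean = sum(w[j] * particles[labels == j].mean() for j in range(2))
print(round(float(posterior_mean), 1))  # pulled toward the observed value
```

The mode far from the observation keeps its particles but loses essentially all its weight, which is the sense in which the scheme generalizes the ensemble Kalman filter to non-Gaussian priors.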
Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei
2016-02-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
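FastGGM itself is an R package; as a loose Python analogue, the sketch below estimates a sparse GGM with the graphical lasso. This is a different estimator (it yields no p-values or confidence intervals) and is shown only to illustrate how conditional-dependence edges are read off a precision matrix; the data are simulated from a known chain graph:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)

# Ground-truth precision matrix of a 4-node chain graph: nonzero
# off-diagonal entries only between neighbouring nodes
Theta = np.array([[2.0, 0.6, 0.0, 0.0],
                  [0.6, 2.0, 0.6, 0.0],
                  [0.0, 0.6, 2.0, 0.6],
                  [0.0, 0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(Theta), size=4000)

# Sparse precision estimate; (near-)zeros in the estimate = missing edges
gl = GraphicalLasso(alpha=0.05).fit(X)
edges = np.abs(gl.precision_) > 0.1
print(edges.astype(int))  # chain structure recovered off the diagonal
```

In a GGM, a zero entry in the precision matrix means conditional independence of the two variables given all the others, which is exactly the "edge/no edge" inference the paper makes rigorous with p-values.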
Shen, Bichuan; Chen, Chi-Hau; Marchisio, Giovanni B.
2012-06-01
In this paper, we aim to study the detection of vehicles from WorldView-2 satellite imagery. For this purpose, accurate modeling of vehicle features and signatures and efficient learning of vehicle hypotheses are critical. We present a joint Gaussian and maximum likelihood based modeling and machine learning approach using SVM and neural network algorithms to describe the local appearance densities and classify vehicles from non-vehicle buildings, objects, and backgrounds. Vehicle hypotheses are fitted by elliptical Gaussians, and the bottom-up features are grouped by Gabor orientation filtering based on multi-scale analysis and distance transform. Global contextual information such as road networks and vehicle distributions can be used to enhance the recognition. In consideration of the complexity the practical vehicle detection task faces due to dense and overlapping vehicle distributions, partial occlusion, and clutter from buildings, shadows, and trees, we employ a spectral clustering strategy jointly combined with bootstrapped learning to estimate the parameters of centroid, orientation, and extent for the local densities. We demonstrate a high detection rate of 94.8%, with a missing rate of 5.2% and a false alarm rate of 5.3%, on the WorldView-2 satellite imagery. Experimental results show that our method is quite effective at modeling and detecting vehicles.
Models of discretized moduli spaces, cohomological field theories, and Gaussian means
Andersen, Jørgen Ellegaard; Norbury, Paul; Penner, Robert C
2015-01-01
We prove combinatorially the explicit relation between genus-filtrated $s$-loop means of the Gaussian matrix model and terms of the genus expansion of the Kontsevich-Penner matrix model (KPMM). The latter is the generating function for volumes of discretized (open) moduli spaces $M_{g,s}^{\mathrm{disc}}$ given by $N_{g,s}(P_1,\dots,P_s)$ for $(P_1,\dots,P_s)\in\mathbb{Z}_+^s$. This generating function therefore enjoys the topological recursion, and we prove that it is simultaneously the generating function for ancestor invariants of a cohomological field theory, thus enjoying the Givental decomposition. We use another Givental-type decomposition obtained for this model by the second authors in 1995 in terms of special times related to the discretisation of moduli spaces; this represents its asymptotic expansion terms (and therefore those of the Gaussian means) as finite sums over graphs weighted by lower-order monomials in the times, giving another proof of (quasi)polynomiality of the discrete volumes. As a...
Martian Atmospheric Plumes: Behavior, Detectability and Plume Tracing
Banfield, Don; Mischna, M.; Sykes, R.; Dissly, R.
2013-10-01
We will present our recent work simulating neutrally buoyant plumes in the martian atmosphere. This work is primarily directed at understanding the behavior of discrete plumes of biogenic tracer gases, and thus at increasing our understanding of their detectability (both from orbit and from in situ measurements), and finally at how to use the plumes to identify their precise source locations. We have modeled the detailed behavior of martian atmospheric plumes using MarsWRF for the atmospheric dynamics and SCIPUFF (a terrestrial, state-of-the-art plume modeling code that we have modified to represent martian conditions) for the plume dynamics. This combination of tools allows us to accurately simulate plumes not only at the regional scale, from which an orbital observing platform would witness the plume, but also from an in situ perspective, with the instantaneous concentration variations that a turbulent flow would present to a point-sampling in situ instrument. Our initial work has focused on the detectability of discrete plumes from an orbital perspective, and we will present those results for a variety of notional orbital trace gas detection instruments. We have also begun simulating the behavior of the plumes from the perspective of a sampler on a rover within the martian atmospheric boundary layer. The detectability of plumes within the boundary layer has a very strong dependence on atmospheric stability, with plume concentrations increasing by a factor of 10-1000 at night compared to daytime. In the equatorial regions of the planet where we have simulated plumes, the diurnal tidal "clocking" of the winds is strongly evident in the plume trail, which similarly "clocks" around its source. This behavior, combined with the strong diurnal concentration variations, suggests that a rover hunting a plume source would be well suited to approach it from a particular azimuth (downwind at night) to maximize detectability of the plume and the ability to
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
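The mechanics of a PC-based analysis of variance can be sketched on a toy problem (a hypothetical two-input "trap height" function with uniform inputs; the real study uses six inputs and an ocean plume model). Because the basis is orthonormal, the variance splits into squared coefficients, from which Sobol-type sensitivity indices follow directly:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(3)

# Toy model: the output depends strongly on x1 (a flow-rate analogue)
# and weakly on x2
def model(x1, x2):
    return 3.0 * x1 + 0.5 * x2 ** 2 + 0.2 * x1 * x2

# Ensemble over uniform inputs on [-1, 1]
X = rng.uniform(-1, 1, size=(400, 2))
y = model(X[:, 0], X[:, 1])

# Tensor-product Legendre basis up to total degree 2, normalised so that
# E[phi_n^2] = 1 under the uniform measure
def phi(n, x):
    c = np.zeros(n + 1); c[n] = 1.0
    return L.legval(x, c) * np.sqrt(2 * n + 1)

degrees = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in degrees])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# ANOVA from PC coefficients: output variance = sum of squared
# non-constant coefficients
var = {d: c ** 2 for d, c in zip(degrees, coef) if d != (0, 0)}
total = sum(var.values())
S1 = sum(v for (i, j), v in var.items() if j == 0) / total  # x1 main effect
print(round(S1, 2))  # → 0.99: the flow-rate analogue dominates
```

This mirrors the paper's finding in the high-flow-rate-uncertainty experiment, where one input carries nearly all of the output variance.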
Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution
Directory of Open Access Journals (Sweden)
Weitiao Wu
2013-01-01
Urban arterials in China exhibit a mixed traffic flow owing to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow point of view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution of the mixed platoon. The expectation maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
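The speed-truncation idea can be sketched as follows. The car/bus speed parameters, truncation bounds and platoon geometry are invented for illustration; the paper calibrates the mixture with EM on field data:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

def tn(mean, sd, lo, hi, size):
    # truncnorm takes standardized bounds
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

# Two-component speed-truncated mixture: cars vs. buses (speeds in m/s)
speeds = np.concatenate([tn(14.0, 2.5, 5.0, 22.0, 800),   # cars
                         tn(10.0, 1.5, 5.0, 16.0, 200)])  # buses

# Platoon departs upstream as a tight 10 s pulse; arrivals at the
# downstream intersection L = 500 m away are spread by the speed mixture
depart = rng.uniform(0.0, 10.0, speeds.size)
arrive = depart + 500.0 / speeds

print(arrive.std() > depart.std())  # the platoon disperses in transit
```

The arrival-time histogram is the downstream flow profile the MPFDM predicts from the upstream departure profile and the fitted speed distribution.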
Using convex quadratic programming to model random media with Gaussian random fields
Quintanilla, John A.; Jones, W. Max
2007-04-01
Excursion sets of Gaussian random fields (GRFs) have been frequently used in the literature to model two-phase random media with measurable phase autocorrelation functions. The goal of successful modeling is finding the optimal field autocorrelation function that best approximates the prescribed phase autocorrelation function. In this paper, we present a technique which uses convex quadratic programming to find the best admissible field autocorrelation function under a prescribed discretization. Unlike previous methods, this technique efficiently optimizes over all admissible field autocorrelation functions, instead of optimizing only over a predetermined parametrized family. The results from using this technique indicate that the GRF model is significantly more versatile than observed in previous studies. An application to modeling a base-catalyzed tetraethoxysilane aerogel system given small-angle neutron scattering data is also presented
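A minimal version of the optimization can be written down under two simplifying assumptions: a symmetric two-phase medium at volume fraction 1/2, so the level-cut relation S(r) = 1/4 + arcsin(ρ(r))/(2π) can be inverted for the target field autocorrelation; and a discretized cosine spectrum with nonnegative weights (Bochner's admissibility condition), which reduces the convex QP to nonnegative least squares. The target S(r) below is hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical measured phase autocorrelation S(r) at volume fraction 1/2,
# inverted to the target field autocorrelation via the level-cut relation
r = np.linspace(0.0, 5.0, 200)
S_target = 0.25 + 0.25 * np.exp(-r)
rho_target = np.sin(2 * np.pi * (S_target - 0.25))

# Admissibility (Bochner): the field autocorrelation must have a
# nonnegative spectrum; discretize it and solve
#   min || A w - rho_target ||  subject to  w >= 0
freqs = np.linspace(0.0, 8.0, 60)
A = np.cos(np.outer(r, freqs))
w, resid = nnls(A, rho_target)
rho_fit = A @ w

print(round(float(resid), 3))  # small residual: this target is admissible
```

When the residual is large, no admissible Gaussian field reproduces the prescribed phase statistics, which is precisely the diagnostic the paper's full QP formulation provides.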
Xu, Y. F.; Zhu, W. D.; Smith, S. A.
2017-07-01
Mode shapes have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature mode shapes (CMSs) to identify damage in plates; the method is applicable to mode shapes associated with low and high elastic modes on dense and coarse measurement grids, and is robust against measurement noise. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a mode shape of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a mode shape of an undamaged plate can be well approximated using a polynomial of a properly determined order that fits a mode shape of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. Fitting and convergence indices are introduced to quantify the level of approximation of a mode shape from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A weight function is applied to the proposed CDIs to alleviate adverse effects of measurement noise on the CDIs and to manifest the existence of damage in the CDIs. A mode shape of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified. The experimental damage identification results were numerically verified by applying the proposed method to the mode shape associated with the same mode as that of the
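The curvature computation at the heart of the method can be sketched with finite differences on a synthetic mode shape. The dent location and amplitude, and the linearised small-slope curvature formulas used here, are illustrative assumptions rather than the paper's multi-scale scheme:

```python
import numpy as np

# Synthetic first bending mode of a square plate, with a small "damage" dent
x = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(np.pi * X) * np.sin(np.pi * Y)
w -= 0.03 * np.exp(-((X - 0.7) ** 2 + (Y - 0.3) ** 2) / 0.002)  # dent at (0.7, 0.3)

# First and second derivatives by repeated central differences
dx = x[1] - x[0]
wx, wy = np.gradient(w, dx, dx)
wxx, wxy = np.gradient(wx, dx, dx)
_, wyy = np.gradient(wy, dx, dx)

# Linearised mean and Gaussian curvature fields (small-slope assumption)
H = 0.5 * (wxx + wyy)
K = wxx * wyy - wxy ** 2

# Crude damage index: the Gaussian CMS deviates most strongly near the dent
i, j = np.unravel_index(np.abs(K - np.median(K)).argmax(), K.shape)
print(round(x[i], 1), round(x[j], 1))  # locates the dent
```

Localized damage produces a sharp local change in the second derivatives of the mode shape, so curvature-based indices peak near the damage even when the displacement field itself barely changes.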
Chang, L.; Nie, L.; Xian, Y.; Lu, X.
2016-12-01
One of the distinguishing features of plasma jets compared with traditional streamers is their repeatable propagation. As an initial objective, the effect of seed electrons on the repeatability of plasma plume propagation is investigated numerically. Besides residual electrons left from previous pulses, electrons detached from O2^- ions could also be a significant source of seed electrons affecting the repeatability of plasma plume propagation when an electronegative gas admixture is present. In this investigation, a global plasma chemical kinetics model is developed to investigate the temporal evolution of the electrons and O2^- ions in the afterglow of a plasma plume driven by microsecond pulsed direct-current voltages, at a total gas pressure of 2 × 10^4 Pa or 4 × 10^3 Pa in helium or helium-oxygen mixtures with an air impurity of 0.025%. In addition, a Monte Carlo technique has been applied to calculate the O2^- detachment rate coefficient. Accordingly, the seed electron density due to detachment from O2^- ions for different percentages of oxygen is obtained. Finally, the minimum seed electron density required for the plasma bullets to propagate in a repeatable mode is obtained according to the critical frequency from the experiments. It is found that the order of the minimum seed electron number density required for the repeatable propagation mode is independent of the oxygen concentration in the helium-oxygen mixture: it is 10^8 cm^-3 for 20 kPa and 10^7 cm^-3 for 4 kPa. Furthermore, for helium with an air impurity of 0.025%, the residual electrons left over from previous discharges are the main source of seed electrons. On the other hand, when 0.5% of O2 is added, the detachment of O2^- is the main source of the seed electrons.
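A stripped-down version of such a global kinetics balance, keeping only attachment, detachment and recombination, can be integrated directly. The rate constants and initial density below are invented placeholders, not the paper's values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative afterglow balance between electrons and O2^-
k_att, k_det, k_rec = 1e4, 1e3, 1e-7   # attachment 1/s, detachment 1/s, cm^3/s

def rhs(t, y):
    ne, no2m = y
    return [-k_att * ne + k_det * no2m - k_rec * ne ** 2,   # electrons
            k_att * ne - k_det * no2m]                      # O2^- ions

# Start from a fresh-pulse electron density of 1e11 cm^-3, no O2^- yet
sol = solve_ivp(rhs, (0.0, 1e-3), [1e11, 0.0], rtol=1e-8, atol=1.0)
ne_end, no2m_end = sol.y[:, -1]
print(no2m_end > ne_end)  # the negative-ion reservoir outlives the electrons
```

With these placeholder rates the O2^- population equilibrates quickly and then acts as a slowly released reservoir of seed electrons, which is the qualitative mechanism the abstract describes for the electronegative admixture.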
Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.
Khatab, Khaled
2010-04-01
Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on childhood malnutrition in one of the biggest developing countries, Egypt. This study examined the association between bio-demographic and socioeconomic determinants and the malnutrition problem in children less than 5 years of age using the 2003 Demographic and Health Survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators in which the 3 measurements of the malnutrition status of children are assumed to be indicators for the latent variable "nutritional status".
Xiao, Yiming; Shah, Mohak; Francis, Simon; Arnold, Douglas L.; Arbel, Tal; Collins, D. Louis
Brain tissue segmentation is important in studying markers in human brain Magnetic Resonance Images (MRI) of patients with diseases such as Multiple Sclerosis (MS). Parametric segmentation approaches typically assume unimodal Gaussian distributions on MRI intensities of individual tissue classes, even in applications on multi-spectral images. However, this assumption has not been rigorously verified especially in the context of MS. In this work, we evaluate the local MRI intensities of both healthy and diseased brain tissues of 21 multi-spectral MRIs (63 volumes in total) of MS patients for adherence to this assumption. We show that the tissue intensities are not uniform across the brain and vary across (anatomical) regions of the brain. Consequently, we show that Gaussian mixtures can better model the multi-spectral intensities. We utilize an Expectation Maximization (EM) based approach to learn the models along with a symmetric Jeffreys divergence criterion to study differences in intensity distributions. The effects of these findings are also empirically verified on automatic segmentation of brains with MS.
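The model-comparison step can be sketched in one dimension (synthetic "intensities" with two regional modes; the study works with multi-spectral volumes, and the symmetric Jeffreys divergence is approximated here between the two fitted components):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic within-class intensities with two regional modes (hypothetical)
x = np.concatenate([rng.normal(80, 5, 1500), rng.normal(100, 6, 1500)])[:, None]

# A single Gaussian vs. a 2-component mixture, compared by BIC
bic1 = GaussianMixture(n_components=1, random_state=0).fit(x).bic(x)
gm = GaussianMixture(n_components=2, random_state=0).fit(x)
bic2 = gm.bic(x)

# Symmetric Jeffreys divergence between the two fitted 1-D components
(m0, m1), (v0, v1) = gm.means_.ravel(), gm.covariances_.ravel()

def kl(ma, va, mb, vb):  # KL divergence between univariate Gaussians
    return 0.5 * (np.log(vb / va) + (va + (ma - mb) ** 2) / vb - 1.0)

jeffreys = kl(m0, v0, m1, v1) + kl(m1, v1, m0, v0)
print(bic2 < bic1, jeffreys > 1.0)  # mixture preferred; modes well separated
```

When the intensity distribution of a tissue class is not unimodal, the mixture wins on BIC and the divergence quantifies how far apart the regional modes sit, mirroring the paper's argument against the unimodal Gaussian assumption.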
Tan, Liying; Li, Mengnan; Yang, Qingbo; Ma, Jing
2015-03-20
In practice, owing to imperfections in the laser device and unavoidable errors in the processing technique, the laser source emitted from the communication terminal is partially coherent and is represented as a Gaussian Schell model (GSM). The cross-spectral density function based on the Gaussian model in previous research is replaced by the GSM, and the fiber-coupling efficiency equation of a GSM laser source propagating through atmospheric turbulence is deduced. The GSM equation captures the effect of the source coherence parameter ζ on the fiber-coupling efficiency, which was not included previously. The effects of the source coherence parameter ζ on the spatial coherence radius and the fiber-coupling efficiency through atmospheric turbulence are numerically simulated and analyzed. The results show that the fiber-coupling efficiency invariably degrades with increasing ζ. The work in this paper aims to improve the redundancy design of fiber-coupling receiver systems by analyzing the fiber-coupling efficiency with the source coherence parameters.
Tzagkarakis, George; Beferull-Lozano, Baltasar; Tsakalides, Panagiotis
2008-07-01
This paper addresses the construction of a novel efficient rotation-invariant texture retrieval method that is based on the alignment in angle of signatures obtained via a steerable sub-Gaussian model. In our proposed scheme, we first construct a steerable multivariate sub-Gaussian model, where the fractional lower-order moments of a given image are associated with those of its rotated versions. The feature extraction step consists of estimating the so-called covariations between the orientation subbands of the corresponding steerable pyramid at the same or at adjacent decomposition levels and building an appropriate signature that can be rotated directly without the need of rotating the image and recalculating the signature. The similarity measurement between two images is performed using a matrix-based norm that includes a signature alignment in angle between the images being compared, achieving in this way the desired rotation-invariance property. Our experimental results show how this retrieval scheme achieves a lower average retrieval error, as compared to previously proposed methods having a similar computational complexity, while at the same time being competitive with the best currently known state-of-the-art retrieval system. In conclusion, our retrieval method provides the best compromise between complexity and average retrieval performance.
Cui, Jie; Li, Zhiying; Krems, Roman V
2015-10-21
We consider the problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties of another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of observable values corresponding to physical variations of the potential parameters. We show that the Gaussian process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He-C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
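A schematic of the GP step, with a made-up one-parameter dependence standing in for the trajectory-calculation output (the functional form, training grid and parameter range are all hypothetical):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical dependence of a cross section on one scaled potential
# parameter (a stand-in for classical-trajectory results)
def sigma(eps):
    return 100.0 + 15.0 * np.sin(2.0 * eps) + 5.0 * eps

train_eps = np.linspace(-2.0, 2.0, 9)[:, None]
train_y = sigma(train_eps.ravel())

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(train_eps, train_y)

# Vary the uncertain parameter over its "physically reasonable" range and
# read off the induced interval of the predicted observable
grid = np.linspace(-2.0, 2.0, 201)[:, None]
mean, std = gp.predict(grid, return_std=True)
print(round(float(mean.min()), 1), round(float(mean.max()), 1))
```

The spread of the predicted observable over the parameter range is the "prediction interval" of the paper: expensive dynamics calculations are run only at the training points, and the GP interpolates everywhere else.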
Mixture subclass discriminant analysis link to restricted Gaussian model and other generalizations.
Gkalelis, Nikolaos; Mezaris, Vasileios; Kompatsiaris, Ioannis; Stathaki, Tania
2013-01-01
In this paper, a theoretical link between mixture subclass discriminant analysis (MSDA) and a restricted Gaussian model is first presented. Then, two further discriminant analysis (DA) methods, i.e., fractional step MSDA (FSMSDA) and kernel MSDA (KMSDA), are proposed. Linking MSDA to an appropriate Gaussian model allows the derivation of a new DA method under the expectation maximization (EM) framework (EM-MSDA), which simultaneously derives the discriminant subspace and the maximum likelihood estimates. The two other proposed methods generalize MSDA in order to solve problems inherited from conventional DA. FSMSDA solves the subclass separation problem, that is, the situation in which the dimensionality of the discriminant subspace is strictly smaller than the rank of the between-subclass scatter matrix. This is done by an appropriate weighting scheme and the utilization of an iterative algorithm for preserving useful discriminant directions. On the other hand, KMSDA uses the kernel trick to separate data with a nonlinearly separable subclass structure. Extensive experimentation shows that the proposed methods outperform conventional MSDA and other linear discriminant analysis variants.
Energy Technology Data Exchange (ETDEWEB)
Fouque, A.L.; Ciuciu, Ph.; Risser, L. [NeuroSpin/CEA, F-91191 Gif-sur-Yvette (France); Fouque, A.L.; Ciuciu, Ph.; Risser, L. [IFR 49, Institut d' Imagerie Neurofonctionnelle, Paris (France)
2009-07-01
In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing work done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection-estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)
Energy Technology Data Exchange (ETDEWEB)
Pleijel, K.
1998-04-01
The chemical fate of gaseous species in a specific aircraft plume is investigated using an expanding box model. The model treats the gas phase chemical reactions in detail, while other parameters are subject to a high degree of simplification. Model simulations were carried out in a plume up to an age of three days. The role of emitted VOC, NO{sub x} and CO, as well as of background concentrations of VOC, NO{sub x} and ozone, in aircraft plume chemistry was investigated. Background concentrations were varied over a span of values measured in the free troposphere. High background concentrations of VOC were found to double the average plume production of ozone and organic nitrates. In a high NO{sub x} environment the plume production of ozone and organic nitrates decreased by around 50%. The production of nitric acid was found to be less sensitive to background concentrations of VOC, and increased by up to 50% in a high NO{sub x} environment. Mainly, emitted NO{sub x} caused the plume production of ozone, nitric acid and organic nitrates. The ozone production during the first hours is determined by the relative amount of NO{sub 2} in the NO{sub x} emissions. The impact from emitted VOC was, in relative terms, up to 20% of the ozone production and 65% of the production of organic nitrates. The strongest relative influence from VOC was found in an environment characterized by low VOC and high NO{sub x} background concentrations, where the absolute peak production was lower than in the other scenarios. The effect from emitting VOC and NO{sub x} at the same time added around 5% for ozone, 15% for nitric acid and 10% for organic nitrates to the plume production caused by NO{sub x} and VOC when emitted separately. 47 refs, 15 figs, 4 tabs
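The dilution backbone of such an expanding box model, without any chemistry, can be sketched as follows. The cross-section growth law, background value and initial concentration are invented for illustration:

```python
import numpy as np

# Expanding-box dilution of an emitted species toward its background value:
#   dc/dt = -(1/A) dA/dt (c - c_bg)
# with the plume cross-section A(t) growing by entrainment
# (illustrative power law, not the paper's parameterization)
t = np.linspace(0.0, 72.0, 7201)            # hours, up to a 3-day-old plume
A = 1.0 + (t / 0.5) ** 0.9                  # normalized cross-section growth
c_bg = 0.05                                 # background NOx (ppb, hypothetical)
c = np.empty_like(t); c[0] = 50.0           # fresh-plume NOx (hypothetical)

dt = t[1] - t[0]
dAdt = np.gradient(A, dt)
for k in range(len(t) - 1):                 # forward-Euler integration
    c[k + 1] = c[k] - dt * (dAdt[k] / A[k]) * (c[k] - c_bg)

print(round(float(c[-1]), 2))
```

In the full model each species' budget also carries chemical production and loss terms; the dilution term above is what makes the plume relax toward free-troposphere background concentrations as it ages.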
Thermal radiation of heterogeneous combustion products in the model rocket engine plume
Kuzmin, V. A.; Maratkanova, E. I.; Zagray, I. A.; Rukavishnikova, R. V.
2015-05-01
This work presents a method for the comprehensive investigation of the thermal radiation emitted by heterogeneous combustion products in a model rocket engine plume. The method provides complete information on the results at all stages of the calculations. The dependence of the optical properties (complex refractive index), the radiation characteristics (coefficients and cross sections) and the emission characteristics (flux densities, emissivity factors) on the main determining factors and parameters was analyzed. A computational experiment showed that the presence of a gaseous phase in the combustion products causes strongly marked spectral selectivity of the emission, so the use of the gray approximation in the calculation of the thermal radiation is not justified. The influence of the optical properties, mass fraction, particle size distribution function, and temperature of the combustion products on the thermal radiation in the model rocket engine plume was investigated. The role of the "spotlight" effect (an increase in the emitted energy of the exhaust combustion products due to condensate particles scattering radiation from the combustion chamber) was established quantitatively.
Finite Size Scaling of the Higgs-Yukawa Model near the Gaussian Fixed Point
Chu, David Y.-J.; Knippschild, Bastian; Lin, C.-J. David; Nagy, Attila
2016-01-01
We study the scaling properties of Higgs-Yukawa models. Using the technique of finite-size scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us the advantage of fitting the scaling functions to lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong-Yukawa-coupling region.
Directory of Open Access Journals (Sweden)
A. Aiuppa
2007-01-01
Improving the constraints on the atmospheric fate and depletion rates of acidic compounds persistently emitted by non-erupting (quiescent) volcanoes is important for quantitatively predicting the environmental impact of volcanic gas plumes. Here, we present new experimental data coupled with modelling studies to investigate the chemical processing of acidic volcanogenic species during tropospheric dispersion. Diffusive tube samplers were deployed at Mount Etna, a very active open-conduit basaltic volcano in eastern Sicily, and Vulcano Island, a closed-conduit quiescent volcano in the Aeolian Islands (northern Sicily). Sulphur dioxide (SO2), hydrogen sulphide (H2S), hydrogen chloride (HCl) and hydrogen fluoride (HF) concentrations in the volcanic plumes (typically several minutes to a few hours old) were repeatedly determined at distances from the summit vents ranging from 0.1 to ~10 km, and under different environmental conditions. At both volcanoes, acidic gas concentrations were found to decrease exponentially with distance from the summit vents (e.g., SO2 decreases from ~10 000 μg/m3 at 0.1 km from Etna's vents down to ~7 μg/m3 at ~10 km distance), reflecting the atmospheric dilution of the plume within the acid gas-free background troposphere. Conversely, SO2/HCl, SO2/HF, and SO2/H2S ratios in the plume showed no systematic changes with plume aging, and fit source compositions within analytical error. Assuming that SO2 losses by reaction are small during short-range atmospheric transport within quiescent (ash-free) volcanic plumes, our observations suggest that, for these short transport distances, atmospheric reactions for H2S and halogens are also negligible. The one-dimensional model MISTRA was used to simulate quantitatively the evolution of halogen and sulphur compounds in the plume of Mt. Etna. Model predictions support the hypothesis of minor HCl chemical processing during plume transport, at least in cloud-free conditions. Larger
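The two endpoint values quoted above already pin down the exponential dilution rate; a two-line log-linear fit recovers it (only the abstract's two (distance, concentration) pairs are used):

```python
import numpy as np

# Reported SO2 dilution: ~10 000 ug/m3 at 0.1 km, ~7 ug/m3 at ~10 km
d = np.array([0.1, 10.0])      # km from the vents
c = np.array([10_000.0, 7.0])  # ug/m3

# c(d) = c0 * exp(-k d)  =>  a straight line in log space
slope, intercept = np.polyfit(d, np.log(c), 1)
k = -slope                     # e-folding rate per km
c0 = np.exp(intercept)         # extrapolated vent-level concentration
print(round(k, 2))             # → 0.73
```

Because the dilution factor is common to all species, this first-order decay cancels in concentration ratios, which is consistent with the constant SO2/HCl, SO2/HF and SO2/H2S ratios reported above.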
Non-Gaussianity in axion N-flation models: quadratic and $\lambda\phi^4$ plus axion potentials
Kamarpour, Mehran
2012-01-01
In this paper we investigate large non-Gaussianity in axion N-flation models, taking into account that a large number of axions dynamically start away from the hilltop region (i.e., have come down from the hill) and so serve only as a source of the Hubble rate. The single field that stays closest to the hilltop therefore sources the non-Gaussianity. In this case most of the axions can be replaced by a single effective field with a quadratic potential, so the potential contains two fields: the full cosine, responsible for the axion closest to the hilltop, and the quadratic term, which sources the Hubble rate [4]. We obtain the power spectrum, spectral index and non-Gaussianity parameter, impose the WMAP constraints on the power spectrum and spectral index, and determine how large a non-Gaussianity parameter can be achieved under these conditions. Finally, we swap the quadratic term for $\lambda\phi^4$ and examine whether this makes it harder or easier to achieve large non-Gaussianity. We find large non-Gaussianity is achievable by impo...
Gaussian mixture models and semantic gating improve reconstructions from human brain activity
Directory of Open Access Journals (Sweden)
Sanne eSchoenmakers
2015-01-01
Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular, we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore, the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.
Directory of Open Access Journals (Sweden)
Qunyi Xie
2016-01-01
Full Text Available Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repositories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure-preservation requirement of the coefficient matrix. To apply the clustering method to the sparse representations, this paper develops a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and analyses the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
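The retrieval pipeline this abstract outlines (sparse NMF codes, GMM clustering, then a nearest-neighbour search inside the query's cluster) can be sketched with single-view stand-ins from scikit-learn. This is a hedged illustration on synthetic data: the multiview fusion, manifold constraints, and spectral regrouping of the actual MNGS method are omitted.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy non-negative "image features": 60 samples from two visual groups.
X = np.abs(np.concatenate([
    rng.normal(5.0, 1.0, size=(30, 40)),
    rng.normal(1.0, 0.3, size=(30, 40)),
]))

# Step 1: sparse representation via (single-view) NMF.
nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
coeff = nmf.fit_transform(X)              # one coefficient row per image

# Step 2: GMM clustering of the coefficient rows.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(coeff)

# Step 3: retrieval = nearest neighbours within the query image's cluster.
query_idx = 0
same_cluster = np.where(labels == labels[query_idx])[0]
dists = np.linalg.norm(coeff[same_cluster] - coeff[query_idx], axis=1)
ranked = same_cluster[np.argsort(dists)]  # best matches first
```

The query itself always ranks first (distance zero), and the search only touches its own cluster, which is what makes whole-database retrieval cheap.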
Modeling and statistical analysis of non-Gaussian random fields with heavy-tailed distributions
Nezhadhaghighi, Mohsen Ghasemi; Nakhlband, Abbas
2017-04-01
In this paper, we investigate and develop an alternative approach to the numerical analysis and characterization of random fluctuations with heavy-tailed probability distribution functions (PDFs), such as turbulent heat flow and solar flare fluctuations. We identify the heavy-tailed random fluctuations based on the scaling properties of the tail exponent of the PDF, the power-law growth of the qth-order correlation function, and the self-similar properties of the contour lines in two-dimensional random fields. Moreover, this work leads to a substitute for the fractional Edwards-Wilkinson (EW) equation that works in the presence of μ-stable Lévy noise. Our proposed model explains the configuration dynamics of systems with heavy-tailed correlated random fluctuations. We also present an alternative solution to the fractional EW equation in the presence of μ-stable Lévy noise in the steady state, which is implemented numerically using μ-stable fractional Lévy motion. Based on the analysis of the self-similar properties of contour loops, we numerically show that the scaling properties of contour loop ensembles can qualitatively and quantitatively distinguish non-Gaussian random fields from Gaussian random fluctuations.
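As a quick illustration of the heavy-tailed versus Gaussian distinction this work builds on, one can compare extreme quantiles of α-stable (Lévy) and Gaussian samples. The stability index `alpha` below plays the role of the text's μ and is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha = 1.5  # stability index < 2 gives power-law (heavy) tails

gauss = rng.normal(size=10000)                                   # alpha = 2 limit
levy = levy_stable.rvs(alpha, beta=0.0, size=10000, random_state=2)

def tail_ratio(x, q=0.999):
    """Extreme quantile of |x| relative to its median; large for heavy tails."""
    x = np.abs(x)
    return np.quantile(x, q) / np.median(x)

# Heavy-tailed fluctuations dominate the extreme quantiles, while the bulk
# (median) scale remains comparable to the Gaussian case.
```

This tail-quantile contrast is a crude proxy for the tail-exponent scaling analysis the paper performs on full two-dimensional fields.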
Approximating Gaussian mixture model or radial basis function network with multilayer perceptron.
Patrikar, Ajay M
2013-07-01
Gaussian mixture models (GMMs) and multilayer perceptron (MLP) are both popular pattern classification techniques. This brief shows that a multilayer perceptron with quadratic inputs (MLPQ) can accurately approximate GMMs with diagonal covariance matrices. The mapping equations between the parameters of GMM and the weights of MLPQ are presented. A similar approach is applied to radial basis function networks (RBFNs) to show that RBFNs with Gaussian basis functions and Euclidean norm can be approximated accurately with MLPQ. The mapping equations between RBFN and MLPQ weights are presented. There are well-established training procedures for GMMs, such as the expectation maximization (EM) algorithm. The GMM parameters obtained by the EM algorithm can be used to generate a set of initial weights of MLPQ. Similarly, a trained RBFN can be used to generate a set of initial weights of MLPQ. MLPQ training can be continued further with gradient-descent based methods, which can lead to improvement in performance compared to the GMM or RBFN from which it is initialized. Thus, the MLPQ can always perform as well as or better than the GMM or RBFN.
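The key identity behind this approximation is that the log-density of a diagonal-covariance Gaussian component is exactly a linear function of the quadratic inputs (x, x²), which is what an MLPQ unit computes. Below is a hedged sketch of that mapping for one component, using scikit-learn's EM-trained GMM; it is not the paper's full set of weight equations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, (200, 3)), rng.normal(2, 1.0, (200, 3))])

# EM-trained GMM with diagonal covariances.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(X)

# Map component 0 to weights on quadratic inputs:
#   log N(x; m, diag(v)) = sum_d [(-0.5/v_d) x_d^2 + (m_d/v_d) x_d] + bias
m, v = gmm.means_[0], gmm.covariances_[0]
w_quad = -0.5 / v
w_lin = m / v
bias = -0.5 * np.sum(m**2 / v) - 0.5 * np.sum(np.log(2 * np.pi * v))

x = X[0]
log_pdf_via_weights = w_quad @ x**2 + w_lin @ x + bias   # one "MLPQ unit"
log_pdf_direct = -0.5 * np.sum((x - m)**2 / v) - 0.5 * np.sum(np.log(2 * np.pi * v))
```

The two values agree to machine precision, which is why EM-fitted GMM parameters can seed the initial weights of an MLPQ before gradient-descent fine-tuning.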
Buis, Arjan
2016-01-01
Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. Heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring the interface temperature at skin level in a lower-limb prosthesis is notoriously complicated. This is due to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm, Gaussian processes, is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian processes technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable. PMID:27695626
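A minimal sketch of the idea follows, on hypothetical data: Gaussian process regression (here via scikit-learn, not the Letter's actual implementation) maps a measured socket/liner temperature to the unobserved residual-limb temperature, where the lag between the two signals is generated from an assumed thermal time constant `tau`.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 120)                 # minutes
tau = 8.0                                       # assumed thermal time constant (min)

# Hypothetical training data: limb temperature rises with first-order lag tau;
# the socket/liner sensor reads a slightly offset, noisy version of it.
limb_temp = 33.0 + 2.0 * (1.0 - np.exp(-t / tau))
socket_temp = limb_temp - 0.8 + rng.normal(0.0, 0.05, t.size)

# GP regression from the accessible measurement to the inaccessible one.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01),
    normalize_y=True,
).fit(socket_temp[:, None], limb_temp)

pred, std = gp.predict(socket_temp[:, None], return_std=True)
```

The returned `std` gives the predictive uncertainty, which is one practical reason to prefer a GP over a plain regression here.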
Institute of Scientific and Technical Information of China (English)
Chen Bao-Xin
2006-01-01
An elliptical Gaussian wave formalism model of a charged-particle beam is proposed by analogy with an elliptical Gaussian light beam. In the paraxial approximation, the charged-particle beam can be described as a whole by a complex radius of curvature in the real space domain. Therefore, the propagation and transformation of a charged-particle beam passing through a first-order optical system are represented by the ABCD-like law. As an example of the application of this model, the relation between the beam waist and the minimum beam spot at a fixed target is discussed. The result matches well with that from the conventional phase space model, and proves that the Gaussian wave formalism model is highly effective and reasonable.
Primordial non-Gaussianities of gravitational waves in the most general single-field inflation model
Gao, Xian; Yamaguchi, Masahide; Yokoyama, Jun'ichi
2011-01-01
We completely clarify the feature of primordial non-Gaussianities of tensor perturbations in generalized G-inflation, i.e., the most general single-field inflation model with second order field equations. It is shown that the most general cubic action for the tensor perturbation (gravitational wave) $h_{ij}$ is composed only of two contributions, one with two spatial derivatives and the other with one time derivative on each $h_{ij}$. The former is essentially identical to the cubic term that appears in Einstein gravity and predicts a squeezed shape, while the latter newly appears in the presence of the kinetic coupling to the Einstein tensor and predicts an equilateral shape. Thus, only two shapes appear in the graviton bispectrum of the most general single-field inflation model, which could open a new clue to the identification of inflationary gravitational waves in observations of cosmic microwave background anisotropies as well as direct gravitational wave detection experiments.
Beam wander of Gaussian-Schell model beams propagating through oceanic turbulence
Wu, Yuqian; Zhang, Yixin; Li, Ye; Hu, Zhengda
2016-07-01
For Gaussian-Schell model beams propagating in the isotropic turbulent ocean, a theoretical expression for beam wander is derived based on the extended Huygens-Fresnel principle. The spatial coherence radius of spherical waves propagating in the paraxial channel of the turbulent ocean, including the inner scale, is also developed. Our results show that the beam wander decreases with an increasing rate of dissipation of kinetic energy per unit mass of fluid ɛ, but increases with the dissipation rate of temperature variance χt and the relative strength of temperature and salinity fluctuations ϖ. Salinity fluctuations have a greater influence on beam wander than temperature fluctuations. The model can be used to evaluate submarine-to-submarine/ship optical wireless communication performance.
PySSM: A Python Module for Bayesian Inference of Linear Gaussian State Space Models
Directory of Open Access Journals (Sweden)
Christopher Strickland
2014-04-01
Full Text Available PySSM is a Python package that has been developed for the analysis of time series using linear Gaussian state space models. PySSM is easy to use; models can be set up quickly and efficiently and a variety of different settings are available to the user. It also takes advantage of the scientific libraries NumPy and SciPy and other high level features of the Python language. PySSM is also used as a platform for interfacing between optimized and parallelized Fortran routines. These Fortran routines heavily utilize Basic Linear Algebra Subprograms (BLAS) and Linear Algebra PACKage (LAPACK) functions for maximum performance. PySSM contains classes for filtering, classical smoothing, as well as simulation smoothing.
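PySSM's own API is not shown in this abstract, but the filtering step it implements can be sketched as a minimal scalar Kalman filter for a linear Gaussian state space model. This is a hedged local-level sketch in plain NumPy, not PySSM code.

```python
import numpy as np

def kalman_filter(y, phi, q, h, r, m0=0.0, p0=1.0):
    """Kalman filter for the scalar linear Gaussian state space model
       state: x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
       obs:   y_t = h * x_t + v_t,        v_t ~ N(0, r)
    Returns the filtered state means."""
    m, p = m0, p0
    means = []
    for obs in y:
        m, p = phi * m, phi * p * phi + q          # predict
        s = h * p * h + r                          # innovation variance
        k = p * h / s                              # Kalman gain
        m = m + k * (obs - h * m)                  # update mean
        p = (1.0 - k * h) * p                      # update variance
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
x = 5.0 + np.cumsum(rng.normal(0.0, 0.1, 200))   # slowly drifting true level
y = x + rng.normal(0.0, 1.0, 200)                # noisy observations
est = kalman_filter(y, phi=1.0, q=0.01, h=1.0, r=1.0)
```

After a short burn-in the filtered estimates track the latent level far more closely than the raw observations, which is the basic payoff of the state space formulation.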
Loukas, Constantinos; Georgiou, Evangelos
2013-01-01
There is currently great interest in analyzing the workflow of minimally invasive operations performed in a physical or simulation setting, with the aim of extracting important information that can be used for skills improvement, optimization of intraoperative processes, and comparison of different interventional strategies. The first step in achieving this goal is to segment the operation into its key interventional phases, which is currently approached by modeling a multivariate signal that describes the temporal usage of a predefined set of tools. Although this technique has shown promising results, it is challenged by the manual extraction of the tool usage sequence and the inability to simultaneously evaluate the surgeon's skills. In this paper we describe an alternative methodology for surgical phase segmentation and performance analysis based on Gaussian mixture multivariate autoregressive (GMMAR) models of the hand kinematics. Unlike previous work in this area, our technique employs signals from orientation sensors, attached to the endoscopic instruments of a virtual reality simulator, without considering which tools are employed at each time-step of the operation. First, based on pre-segmented hand motion signals, a training set of regression coefficients is created for each surgical phase using multivariate autoregressive (MAR) models. Then, a signal from a new operation is processed with GMMAR, wherein each phase is modeled by a Gaussian component of regression coefficients. These coefficients are compared to those of the training set. The operation is segmented according to the prior probabilities of the surgical phases estimated via GMMAR. The method also allows for the study of motor behavior and hand motion synchronization demonstrated in each phase, a quality that can be incorporated into modern laparoscopic simulators for skills assessment.
Huang, Yi-Fei; Golding, G Brian
2014-01-01
A critical question in biology is the identification of functionally important amino acid sites in proteins. Because functionally important sites are under stronger purifying selection, site-specific substitution rates tend to be lower than usual at these sites. A large number of phylogenetic models have been developed to estimate site-specific substitution rates in proteins and the extraordinarily low substitution rates have been used as evidence of function. Most of the existing tools, e.g. Rate4Site, assume that site-specific substitution rates are independent across sites. However, site-specific substitution rates may be strongly correlated in the protein tertiary structure, since functionally important sites tend to be clustered together to form functional patches. We have developed a new model, GP4Rate, which incorporates the Gaussian process model with the standard phylogenetic model to identify slowly evolved regions in protein tertiary structures. GP4Rate uses the Gaussian process to define a nonparametric prior distribution of site-specific substitution rates, which naturally captures the spatial correlation of substitution rates. Simulations suggest that GP4Rate can potentially estimate site-specific substitution rates with a much higher accuracy than Rate4Site and tends to report slowly evolved regions rather than individual sites. In addition, GP4Rate can estimate the strength of the spatial correlation of substitution rates from the data. By applying GP4Rate to a set of mammalian B7-1 genes, we found a highly conserved region which coincides with experimental evidence. GP4Rate may be a useful tool for the in silico prediction of functionally important regions in the proteins with known structures.
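The core modelling idea here, a Gaussian process prior whose covariance decays with distance in the tertiary structure so that spatially close sites share similar substitution rates, can be sketched as follows. The coordinates, kernel choice, and length scale are illustrative assumptions, not GP4Rate's actual prior.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 3-D coordinates of 50 residues (e.g. C-alpha positions).
coords = rng.normal(size=(50, 3)) * 10.0

# Squared-exponential (RBF) covariance over spatial distance: nearby residues
# get correlated log substitution rates, mimicking conserved functional patches.
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
ell = 8.0                                    # assumed correlation length
K = np.exp(-0.5 * d2 / ell**2) + 1e-6 * np.eye(50)

# One draw from the GP prior; exponentiating keeps rates positive.
log_rates = rng.multivariate_normal(np.zeros(50), K)
rates = np.exp(log_rates)                    # site-specific substitution rates
```

In the full model such a prior would be combined with a phylogenetic likelihood; the point of the sketch is only that the spatial correlation of rates falls directly out of the kernel.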
Chertock, A.
2012-02-02
Aquatic bacteria like Bacillus subtilis are heavier than water yet they are able to swim up an oxygen gradient and concentrate in a layer below the water surface, which will undergo Rayleigh-Taylor-type instabilities for sufficiently high concentrations. In the literature, a simplified chemotaxis-fluid system has been proposed as a model for bio-convection in modestly diluted cell suspensions. It couples a convective chemotaxis system for the oxygen-consuming and oxytactic bacteria with the incompressible Navier-Stokes equations subject to a gravitational force proportional to the relative surplus of the cell density compared to the water density. In this paper, we derive a high-resolution vorticity-based hybrid finite-volume finite-difference scheme, which allows us to investigate the nonlinear dynamics of a two-dimensional chemotaxis-fluid system with boundary conditions matching an experiment of Hillesdon et al. (Bull. Math. Biol., vol. 57, 1995, pp. 299-344). We present selected numerical examples, which illustrate (i) the formation of sinking plumes, (ii) the possible merging of neighbouring plumes and (iii) the convergence towards numerically stable stationary plumes. The examples with stable stationary plumes show how the surface-directed oxytaxis continuously feeds cells into a high-concentration layer near the surface, from where the fluid flow (recurring upwards in the space between the plumes) transports the cells into the plumes, where then gravity makes the cells sink and constitutes the driving force in maintaining the fluid convection and, thus, in shaping the plumes into (numerically) stable stationary states. Our numerical method is fully capable of solving the coupled chemotaxis-fluid system and enabling a full exploration of its dynamics, which cannot be done in a linearised framework. © 2012 Cambridge University Press.
Directory of Open Access Journals (Sweden)
Yun Wang
2016-01-01
Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm has often been used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the defect that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method implements the fusion of multiple models, using the strong tracking filter to correct the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm MM-GGIW-CPHD can effectively deal with the combination/spawning of groups, and that the tracking error of group targets in the maneuvering stage is decreased.
ADAPTIVE BACKGROUND WITH THE GAUSSIAN MIXTURE MODELS METHOD FOR REAL-TIME TRACKING
Directory of Open Access Journals (Sweden)
Silvia Rostianingsih
2008-01-01
Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. Motion tracking requires a method to separate the background from the tracked object. Developing such an application is not hard when tracking is performed against a static background, but it is difficult when the tracked object is in a place with a non-static background, because changing parts of the background can be recognized as the tracking area. To handle this problem, an application can be made to separate the background in a way that adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data based on pixel colour values. After the clusters are formed, the dominant distributions are chosen as the background distributions. The application was implemented in Microsoft Visual C 6.0. The results of this research show that the GMM algorithm can produce an adaptive background satisfactorily, as demonstrated by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive background generation process.
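A single-pixel sketch of the adaptive-background update in the Stauffer-Grimson style follows; it is an illustrative reimplementation with assumed learning rate and thresholds, not the article's own code, and it tracks one intensity stream rather than a full frame.

```python
import numpy as np

def update_gmm_pixel(x, w, mu, var, lr=0.05, match_sigma=2.5):
    """One GMM background update for a single pixel intensity x.
    w, mu, var: arrays of K component weights, means, variances (modified copies
    are returned). Returns (w, mu, var, is_background)."""
    d = np.abs(x - mu) / np.sqrt(var)
    matched = np.argmin(d) if d.min() < match_sigma else None
    if matched is None:
        # No component explains x: replace the least-weighted one with a new
        # wide component centred on x (a candidate foreground/background value).
        k = np.argmin(w)
        w[k], mu[k], var[k] = 0.05, x, 30.0
    else:
        # Reinforce the matched component and adapt its mean and variance.
        w = (1.0 - lr) * w
        w[matched] += lr
        mu[matched] += lr * (x - mu[matched])
        var[matched] += lr * ((x - mu[matched]) ** 2 - var[matched])
    w /= w.sum()
    # Background pixels are those matching a dominant (high-weight) component.
    is_background = matched is not None and w[matched] >= np.sort(w)[-1] * 0.5
    return w, mu, var, is_background

w = np.array([0.5, 0.5]); mu = np.array([100.0, 50.0]); var = np.array([20.0, 20.0])
# A stream dominated by the background value ~100, with one foreground spike.
labels = []
for x in [100, 101, 99, 100, 200, 100]:
    w, mu, var, bg = update_gmm_pixel(float(x), w, mu, var)
    labels.append(bg)
```

The spike at 200 is classified as foreground, while the recurring value 100 keeps reinforcing a dominant background component; if 200 kept recurring, its component's weight would grow and it would be absorbed into the background, which is the "adaptive" behaviour the abstract describes.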
Recent Site-Wide Transport Modeling Related to the Carbon Tetrachloride Plume at the Hanford Site
Energy Technology Data Exchange (ETDEWEB)
Bergeron, Marcel P.; Cole, C R.
2005-11-01
Carbon tetrachloride transport in the unconfined aquifer system at the Hanford Site has been the subject of follow-on studies since the Carbon Tetrachloride Innovative Treatment Remediation Demonstration (ITRD) Program was completed in FY 2002. These scoping analyses were undertaken to provide support for strategic planning and guidance for the more robust modeling needed to obtain a final record of decision (ROD) for the carbon tetrachloride plume in the 200 West Area. This report documents the technical approach and the results of these follow-on, site-wide scale-modeling efforts. The existing site-wide groundwater model was used in this effort. The work extended that performed as part of the ITRD modeling study in which a 200 West Area scale submodel was developed to examine arrival concentrations at an arbitrary boundary between the 200 E and 200 W areas. These scoping analyses extended the analysis to predict the arrival of the carbon tetrachloride plume at the Columbia River. The results of these analyses illustrate the importance of developing field-scale estimates of natural attenuation parameters, abiotic degradation rate and soil/water equilibrium sorption coefficient, for carbon tetrachloride. With these parameters set to zero, carbon tetrachloride concentrations will exceed the compliance limit of 5 μg/L outside the 200 Area Plateau Waste Management Area, and the aquifer source loading and area of the aquifer affected will continue to grow until arrival rates of carbon tetrachloride equal source release rates, estimated at 33 kg/yr. Results of this scoping analysis show that the natural attenuation parameters are critical in predicting the future movement of carbon tetrachloride from the 200 West Area. Results also show the significant change in predictions between continual source release from the vadose zone and complete source removal.
Li, Lin; Li, Chuan; Alexov, Emil
2014-05-01
Traditional implicit methods for modeling electrostatics in biomolecules use a two-dielectric approach: a biomolecule is assigned a low dielectric constant while the water phase is considered a high dielectric constant medium. However, such an approach treats the biomolecule-water interface as a sharp dielectric border between two homogeneous dielectric media and does not account for the inhomogeneous dielectric properties of the macromolecule itself. Recently we reported a new development, a smooth Gaussian-based dielectric function which treats the entire system, the solute and the water phase, as an inhomogeneous dielectric medium (J Chem Theory Comput. 2013 Apr 9; 9(4): 2126-2136). Here we examine various aspects of modeling the polar solvation energy in such inhomogeneous systems in terms of the solute-water boundary and the inhomogeneity of the solute in the absence of surrounding water. The smooth Gaussian-based dielectric function is implemented in the DelPhi finite-difference program, and therefore the sensitivity of the results with respect to the grid parameters is investigated; it is shown that the calculated polar solvation energy is almost grid independent. Furthermore, the results are compared with the standard two-media model and it is demonstrated that, on average, the standard method overestimates the magnitude of the polar solvation energy by a factor of 2.5. Lastly, the possibility that the solute has a local dielectric constant larger than that of bulk water is investigated in a benchmarking test against an experimentally determined set of pKa values, and it is speculated that side-chain rearrangements could result in a local dielectric constant larger than 80.
Directory of Open Access Journals (Sweden)
Ting Wang
2016-02-01
Full Text Available Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm of Ren et al., without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
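FastGGM itself is an R package; as a hedged Python stand-in, the graphical lasso (a different estimator with the same goal of recovering the conditional-dependence graph, but without FastGGM's edge-wise p-values and confidence intervals) can be sketched with scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Synthetic data from a sparse precision (conditional dependence) matrix:
# a single true edge between variables 0 and 1.
prec = np.eye(5)
prec[0, 1] = prec[1, 0] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

# Sparse precision estimation via an L1-penalized likelihood.
model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_

# Edges of the estimated GGM = non-negligible off-diagonal precision entries.
edges = np.abs(est_prec[np.triu_indices(5, k=1)]) > 0.05
```

A zero entry in the precision matrix means conditional independence given all other variables, which is exactly the graph structure a GGM encodes.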
Directory of Open Access Journals (Sweden)
S. R. Freitas
2010-01-01
Full Text Available Vegetation fires emit hot gases and particles which are rapidly transported upward by the positive buoyancy generated by the combustion process. In general, the final vertical height that the smoke plumes reach is controlled by the thermodynamic stability of the atmospheric environment and the surface heat flux released by the fire. However, the presence of a strong horizontal wind can enhance the lateral entrainment and induce additional drag, particularly for small fires, impacting the smoke injection height. In this paper, we revisit the parameterization of the vertical transport of hot gases and particles emitted from vegetation fires, described in Freitas et al. (2007), to include the effects of environmental wind on transport and dilution of the smoke plume at its scale. This process is quantitatively represented by introducing an additional entrainment term to account for organized inflow of a mass of cooler and drier ambient air into the plume and its drag by momentum transfer. An extended set of equations including the horizontal motion of the plume and the additional increase of the plume radius is solved to simulate the time evolution of the plume rise and the smoke injection height. One-dimensional (1-D) model results are presented for two deforestation fires in the Amazon basin with sizes of 10 and 50 ha under calm and windy atmospheric environments. The results are compared to corresponding simulations generated by the complex non-hydrostatic three-dimensional (3-D) Active Tracer High resolution Atmospheric Model (ATHAM). We show that the 1-D model results compare well with the full 3-D simulations. The 1-D model may thus be used in field situations where extensive computing facilities are not available, especially under conditions for which several optional cases must be studied.
Directory of Open Access Journals (Sweden)
S.-H. Lee
2011-07-01
Full Text Available Transport and chemical transformation of well-defined New York City (NYC) urban plumes over the North Atlantic Ocean were studied using aircraft measurements collected on 20–21 July 2004 during the ICARTT (International Consortium for Atmospheric Research on Transport and Transformation) field campaign and WRF-Chem (Weather Research and Forecasting-Chemistry) model simulations. The strong NYC urban plumes were characterized by carbon monoxide (CO) mixing ratios of 350–400 parts per billion by volume (ppbv) and ozone (O_{3}) levels of about 100 ppbv near New York City on 20 July in the WP-3D in-situ and DC-3 lidar aircraft measurements. On 21 July, the two aircraft captured strong urban plumes with about 350 ppbv CO and over 150 ppbv O_{3} (~160 ppbv maximum) about 600 km downwind of NYC over the North Atlantic Ocean. The measured urban plumes extended vertically up to about 2 km near New York City, but shrank to 1–1.5 km over the stable marine boundary layer (MBL) over the North Atlantic Ocean. The WRF-Chem model reproduced ozone formation processes, chemical characteristics, and meteorology of the measured urban plumes near New York City (20 July) and in the far downwind region over the North Atlantic Ocean (21 July). The quasi-Lagrangian analysis of transport and chemical transformation of the simulated NYC urban plumes using WRF-Chem results showed that the pollutants can be efficiently transported in (isentropic) layers in the lower atmosphere (<2–3 km) over the North Atlantic Ocean while maintaining a dynamic vertical decoupling by cessation of turbulence in the stable MBL. The O_{3} mixing ratio in the NYC urban plumes remained at 80–90 ppbv during nocturnal transport over the stable MBL, then grew to over 100 ppbv by daytime oxidation of nitrogen oxides (NO_{x} = NO + NO_{2}) with mixing ratios on the order of 1 ppbv. Efficient transport of reactive nitrogen species (NO_{y}), specifically nitric
Klein, A.A.B.; Melard, G.; Zahaf, T.
2000-01-01
The Fisher information matrix is of fundamental importance for the analysis of parameter estimation of time series models. In this paper the exact information matrix of a multivariate Gaussian time series model expressed in state space form is derived. A computationally efficient procedure is used b
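The abstract above is truncated, but the core idea can be illustrated: for a Gaussian model in state-space form, the information matrix can be approximated numerically by differentiating the exact Kalman-filter log-likelihood. The sketch below is not the authors' exact procedure; the scalar AR(1)-plus-noise model, parameter values, and finite-difference step are illustrative assumptions.

```python
import numpy as np

def kalman_loglik(y, phi, q, r):
    # Exact Gaussian log-likelihood of a scalar state-space model
    #   x_t = phi x_{t-1} + w_t,  w_t ~ N(0, q)   (state equation)
    #   y_t = x_t + v_t,          v_t ~ N(0, r)   (observation equation)
    # evaluated with the Kalman filter.
    x, p = 0.0, q / (1.0 - phi**2)       # stationary initialization
    ll = 0.0
    for yt in y:
        x_pred, p_pred = phi * x, phi**2 * p + q
        s = p_pred + r                    # innovation variance
        e = yt - x_pred                   # innovation
        ll -= 0.5 * (np.log(2.0 * np.pi * s) + e**2 / s)
        k = p_pred / s                    # Kalman gain
        x, p = x_pred + k * e, (1.0 - k) * p_pred
    return ll

def observed_information(y, theta, h=1e-4):
    # Negative Hessian of the log-likelihood by central differences;
    # a numerical surrogate for the exact information matrix.
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    H = np.zeros((n, n))
    f = lambda t: kalman_loglik(y, *t)
    for i in range(n):
        for j in range(n):
            tpp = theta.copy(); tpp[i] += h; tpp[j] += h
            tpm = theta.copy(); tpm[i] += h; tpm[j] -= h
            tmp = theta.copy(); tmp[i] -= h; tmp[j] += h
            tmm = theta.copy(); tmm[i] -= h; tmm[j] -= h
            H[i, j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4.0 * h * h)
    return -H

rng = np.random.default_rng(0)
n, phi, q, r = 500, 0.7, 1.0, 0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t-1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), n)
info = observed_information(y, [phi, q, r])
```

The exact analytical derivation in the paper avoids the finite-difference error made here; the numerical version is useful mainly as a cross-check.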
Spectral Running and Non-Gaussianity from Slow-Roll Inflation in Generalised Two-Field Models
Choi, Ki-Young; van de Bruck, Carsten
2008-01-01
Theories beyond the standard model such as string theory motivate low energy effective field theories with several scalar fields which are not only coupled through a potential but also through their kinetic terms. For such theories we derive the general formulae for the running of the spectral indices for the adiabatic, isocurvature and correlation spectra in the case of two field inflation. We also compute the expected non-Gaussianity in such models for specific forms of the potentials. We find that the coupling has little impact on the level of non-Gaussianity during inflation.
Hard X-ray optics simulation using the coherent mode decomposition of Gaussian Schell model
Hua, Wenqiang; Song, Li; Li, Xiuhong; Wang, Jie
2013-01-01
The propagation of a hard X-ray beam from a partially coherent synchrotron source is simulated using a method based on the coherent mode decomposition of the Gaussian Schell model and wavefront propagation. We investigate how the coherence properties and intensity distributions of the beam are changed by propagation through optical elements. Here, we simulate and analyze the propagation of the partially coherent radiation transmitted through an ideal slit. We present the first simulations of the focusing of partially coherent synchrotron hard X-ray beams using this method. Compared with the traditional approach, which assumes the source to be either a fully coherent point source or completely incoherent, this method is more realistic and can also capture the coherence properties of the focused beam. We also simulate the double-slit experiment, and the simulated results validate the theoretical analysis.
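For the one-dimensional Gaussian Schell model the coherent-mode decomposition is analytic: the modes are Hermite-Gaussian and the eigenvalues decay geometrically, which is what makes the simulation approach above tractable. A minimal sketch of the normalized eigenvalue spectrum, with illustrative (made-up) source widths, is:

```python
import numpy as np

def gsm_mode_weights(sigma_s, sigma_g, nmodes=200):
    # Normalized coherent-mode eigenvalues of a 1-D Gaussian Schell-model
    # source: lambda_n is proportional to (b/(a+b+c))^n, with
    # a = 1/(4 sigma_s^2), b = 1/(2 sigma_g^2), c = sqrt(a^2 + 2ab),
    # where sigma_s is the intensity width and sigma_g the coherence width.
    a = 1.0 / (4.0 * sigma_s**2)
    b = 1.0 / (2.0 * sigma_g**2)
    c = np.sqrt(a * a + 2.0 * a * b)
    q = b / (a + b + c)
    w = q ** np.arange(nmodes)
    return w / w.sum()

# Partially coherent case: coherence width much smaller than beam size.
w = gsm_mode_weights(sigma_s=100e-6, sigma_g=20e-6)
n_eff = 1.0 / np.sum(w**2)       # effective number of modes to propagate

# Nearly coherent case: a single mode carries almost all the weight.
w_coh = gsm_mode_weights(sigma_s=100e-6, sigma_g=1e-2)
```

The effective mode count n_eff indicates how many modes a wavefront-propagation code must carry to represent the partially coherent field; in the nearly coherent limit one mode suffices.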
Directory of Open Access Journals (Sweden)
Wararit PANICHKITKOSOLKUL
2012-09-01
Full Text Available Guttman and Tiao [1] and Chang [2] showed that the effect of outliers may cause serious bias in estimating autocorrelations, partial correlations, and autoregressive moving average parameters (cited in Chang et al. [3]). This paper presents a modified weighted symmetric estimator for a Gaussian first-order autoregressive AR(1) model with additive outliers. We apply a recursive median adjustment based on an exponentially weighted moving average (EWMA) to the weighted symmetric estimator of Park and Fuller [4]. We consider the following estimators: the weighted symmetric estimator, the recursive mean adjusted weighted symmetric estimator proposed by Niwitpong [5], the recursive median adjusted weighted symmetric estimator proposed by Panichkitkosolkul [6], and the weighted symmetric estimator using an adjusted recursive median based on the EWMA. Using Monte Carlo simulations, we compare the mean square error (MSE) of the estimators. Simulation results show that the proposed estimator provides a lower MSE than the other estimators in almost all situations.
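The outlier sensitivity that motivates these modified estimators is easy to reproduce. The sketch below uses a plain least-squares AR(1) estimate rather than the paper's weighted symmetric estimators, and the contamination parameters are illustrative assumptions; it only demonstrates how additive outliers inflate the MSE.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(n, phi, p_out=0.0, scale_out=10.0):
    # Gaussian AR(1) core series, optionally contaminated by additive
    # outliers: y_t = x_t + o_t, with rare large shocks o_t.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t-1] + rng.normal()
    o = rng.normal(0.0, scale_out, n) * (rng.random(n) < p_out)
    return x + o

def ar1_ls(y):
    # Ordinary least-squares estimate of phi (deliberately non-robust).
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

phi, reps = 0.6, 200
clean = [ar1_ls(simulate_ar1(300, phi)) for _ in range(reps)]
noisy = [ar1_ls(simulate_ar1(300, phi, p_out=0.05)) for _ in range(reps)]
mse_clean = np.mean((np.array(clean) - phi)**2)
mse_noisy = np.mean((np.array(noisy) - phi)**2)
# Additive outliers bias the estimate toward zero and inflate the MSE;
# robust (median-adjusted) estimators aim to close this gap.
```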
Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu
2014-01-01
This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive-noise-free. As shown in our previous work, these scenarios, frequently encountered in radar and/or sonar, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish asymptotic closed forms for the expectation and variance of the PRVCC by means of the well-known delta method. To gain a deeper understanding, we also compare the PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of the PRVCC through an example of estimating the time delay in this particular impulsive noise environment.
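The general point, that rank-based coefficients resist impulsive noise far better than the moment-based Pearson coefficient, can be checked with a small Monte Carlo under a contaminated Gaussian model. The sketch below uses Spearman's rho (with its Gaussian-limit inversion 2 sin(pi r_s / 6)) as a stand-in for the PRVCC, and the contamination parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def spearman(x, y):
    # Spearman's rho: Pearson correlation of the rank sequences
    # (ties are almost surely absent for continuous data).
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def cgm_sample(n, rho, eps=0.1, kappa=100.0):
    # Bivariate sample with correlation rho; channel y is hit by
    # impulsive noise modelled as a contaminated Gaussian: with
    # probability eps, a noise spike of variance kappa is added.
    x = rng.normal(size=n)
    y = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=n)
    spikes = (rng.random(n) < eps) * rng.normal(0.0, np.sqrt(kappa), n)
    return x, y + spikes

rho = 0.8
pear, spear_est = [], []
for _ in range(200):
    x, y = cgm_sample(500, rho)
    pear.append(np.corrcoef(x, y)[0, 1])
    # Map Spearman's rho back to the Gaussian correlation scale.
    spear_est.append(2.0 * np.sin(np.pi * spearman(x, y) / 6.0))

rmse_p = np.sqrt(np.mean((np.array(pear) - rho)**2))
rmse_s = np.sqrt(np.mean((np.array(spear_est) - rho)**2))
# The rank-based estimate degrades far less under impulsive noise.
```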
Propagation of multi-Gaussian Schell-model vortex beams in isotropic random media.
Tang, Miaomiao; Zhao, Daomu
2015-12-14
The effect of isotropic and homogeneous random media on the propagation characteristics of the recently introduced multi-Gaussian Schell-model (MGSM) vortex beams is investigated. The analytical formula for the cross-spectral density function of such a beam propagating in a random turbulent medium is derived and used to explore the evolution of the spectral density, the degree of coherence, and the turbulence-induced spreading. An example illustrates that, at sufficiently large distances from the source, the modulation of the spectral distribution by the source correlations, present in free space, is suppressed by the uniformly correlated turbulence. The impacts of the index M, the correlation width of the source, and the properties of the medium on these characteristics are analyzed in depth.
Effect of oceanic turbulence on the propagation of cosine-Gaussian-correlated Schell-model beams
Ding, Chaoliang; Liao, Lamei; Wang, Haixia; Zhang, Yongtao; Pan, Liuzhan
2015-03-01
On the basis of the extended Huygens-Fresnel principle, the analytic expression for the cross-spectral density function of the cosine-Gaussian-correlated Schell-model (CGSM) beams propagating in oceanic turbulence is derived and used to investigate the spectral density and spectral degree of coherence of CGSM beams. The dependence of the spectral density and spectral degree of coherence of CGSM beams on the oceanic turbulence parameters including temperature-salinity balance parameter ω, mean square temperature dissipation rate χT and energy dissipation rate per unit mass ɛ is stressed and illustrated numerically. It is shown that oceanic turbulence plays an important role in the evolution of spectral density and spectral degree of coherence of CGSM beams upon propagation.
GAUSSIAN MIXTURE MODEL BASED LEVEL SET TECHNIQUE FOR AUTOMATED SEGMENTATION OF CARDIAC MR IMAGES
Directory of Open Access Journals (Sweden)
G. Dharanibai,
2011-04-01
Full Text Available In this paper we propose a Gaussian mixture model (GMM) integrated level set method for automated segmentation of the left ventricle (LV), right ventricle (RV), and myocardium from short-axis views of cardiac magnetic resonance images. By fitting a GMM to the image histogram, the global pixel intensity characteristics of the blood pool, myocardium, and background are estimated. The GMM provides an initial segmentation, and the segmentation solution is regularized using a level set. Parameters for controlling the level set evolution are automatically estimated from the Bayesian inference classification of pixels. We propose a new speed function that combines edge and region information and stops the evolving level set at the myocardial boundary. Segmentation efficacy is analyzed qualitatively via visual inspection. Results show the improved performance of our proposed speed function over the conventional Bayesian driven adaptive speed function in automatic segmentation of the myocardium.
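The first stage described above, fitting a GMM to the image histogram to estimate class intensities, can be sketched with a plain 1-D EM loop. The three intensity classes and all parameter values below are synthetic stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def em_gmm_1d(x, k=3, iters=100):
    # Plain EM for a 1-D Gaussian mixture: estimates the weight, mean,
    # and variance of each intensity class from a pixel sample.
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var() / k)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: class responsibilities for every pixel.
        p = w * np.exp(-0.5*(x[:, None]-mu)**2/var) / np.sqrt(2*np.pi*var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate the mixture parameters.
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return w, mu, var

# Synthetic short-axis intensity sample: background, myocardium, blood.
x = np.concatenate([rng.normal(30, 8, 4000),
                    rng.normal(90, 10, 3000),
                    rng.normal(170, 12, 2000)])
w, mu, var = em_gmm_1d(x, k=3)
```

The recovered class means and variances would then seed the level-set initialization and the Bayesian pixel classification described in the abstract.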
Z-scan experiment with anisotropic Gaussian Schell-model beams.
Liu, Yongxin; Pu, Jixiong; Qi, Hongqun
2009-09-01
We analyze the z-scan experiment with anisotropic Gaussian Schell-model (AGSM) beams. The expression for the cross-spectral density of the AGSM beam passing through the lens and onto the nonlinear thin sample is derived. Based on the expression, we simulate the results of the z-scan experiment theoretically and analyze the effects of the e factor (e = w_{0x}/w_{0y}) and the spatial degree of coherence in the x and y orientations on the on-axis z-scan transmittance. It is found that ΔT_{p-v} becomes larger with an increment of the e factor and the spatial degree of coherence, so the sensitivity of the z-scan experiment can be improved by increasing the e factor and the spatial degree of coherence.
Color-texture segmentation using JSEG based on Gaussian mixture modeling
Institute of Scientific and Technical Information of China (English)
Wang Yuzhong; Yang Jie; Zhou Yue
2006-01-01
An improved approach for J-value segmentation (JSEG) is presented for unsupervised color image segmentation. Instead of a color quantization algorithm, an automatic classification method based on adaptive mean shift (AMS) clustering is used for nonparametric clustering of the image data set. The clustering results are used to construct a Gaussian mixture model (GMM) of the image data for the calculation of the soft J value. The region-growing algorithm used in JSEG is then applied to segment the image based on the multiscale soft J-images. Experiments show that the synergism of JSEG and the soft classification based on AMS clustering and the GMM successfully overcomes the limitations of JSEG and is more robust.
Spreading and wandering of Gaussian-Schell model laser beams in an anisotropic turbulent ocean
Wu, Yuqian; Zhang, Yixin; Zhu, Yun; Hu, Zhengda
2016-09-01
The effect of anisotropic turbulence on the spreading and wandering of Gaussian-Schell model (GSM) laser beams propagating in an ocean is studied. The long-term spreading of a GSM beam propagating through the paraxial channel of a turbulent ocean is also developed. Expressions for the random wander of such laser beams in an anisotropic turbulent ocean are derived based on the extended Huygens-Fresnel principle. We investigate the influence of turbulent-ocean parameters on beam wander and spreading. Our results indicate that beam spreading and random beam wandering come out smaller when the anisotropy of turbulence in the oceanic channel is not taken into account. Salinity fluctuations contribute more to both beam spreading and beam wander than temperature fluctuations do in a turbulent ocean. Our results could be helpful for designing free-space optical wireless communication systems in an oceanic environment.
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors caused by overlapping spots, this study proposes a Gaussian mixture model (GMM) based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
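The use of GMM information criteria to resolve overlapping spots can be illustrated in one dimension: fit mixtures with increasing component counts and let BIC choose. The data and the 1-D simplification below are illustrative assumptions, not the study's actual image features.

```python
import numpy as np

rng = np.random.default_rng(4)

def gmm_fit_loglik(x, k, iters=200):
    # Plain 1-D EM fit of a k-component Gaussian mixture; returns the
    # final log-likelihood (with a small variance floor for stability).
    mu = np.quantile(x, np.linspace(0.05, 0.95, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        p = w * np.exp(-0.5*(x[:, None]-mu)**2/var) / np.sqrt(2*np.pi*var)
        r = p / p.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r*(x[:, None]-mu)**2).sum(axis=0)/nk, 1e-3)
    p = w * np.exp(-0.5*(x[:, None]-mu)**2/var) / np.sqrt(2*np.pi*var)
    return np.log(p.sum(axis=1)).sum()

def bic(ll, k, n):
    # A k-component 1-D mixture has 3k - 1 free parameters.
    return -2.0 * ll + (3*k - 1) * np.log(n)

# Two closely spaced fluorescence spots projected onto one axis.
x = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(1.5, 0.5, 300)])
scores = {k: bic(gmm_fit_loglik(x, k), k, len(x)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)   # model order selected by BIC
```

In the study, AIC/BIC values serve as features for a random forest rather than as a direct decision rule; this sketch shows only why they carry information about spot multiplicity.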
Wang, Chuanyun; Song, Fei; Qin, Shiyin
2017-02-01
Addressing the problems of infrared small target tracking in forward-looking infrared (FLIR) systems, a new infrared small target tracking method is presented, in which the binding of target gray intensity and spatial relationship features is implemented by compressive sensing so as to construct a Gaussian mixture model of the compressive appearance distribution. Subsequently, naive Bayesian classification is carried out over testing samples acquired with non-uniform sampling probability to identify the most credible location of targets against the background scene. A series of experiments is carried out over four infrared small target image sequences with more than 200 images per sequence; the results demonstrate the effectiveness and advantages of the proposed method in both success rate and precision rate.
Propagation of a Laguerre-Gaussian correlated Schell-model beam in strongly nonlocal nonlinear media
Qiu, Yunli; Chen, Zhaoxi; He, Yingji
2017-04-01
Analytical expressions for the cross-spectral density function and the second-order moments of the Wigner distribution function of a Laguerre-Gaussian correlated Schell-model (LGCSM) beam propagating in strongly nonlocal nonlinear media are derived. The propagation properties, such as the beam irradiance, beam width, spectral degree of coherence, and propagation factor of an LGCSM beam inside the media, are investigated in detail. The effect of the beam parameters and the input power on the evolution properties of an LGCSM beam is illustrated numerically. It is found that the beam width either varies periodically or remains invariant for a certain proper input power, and that both the beam irradiance and the spectral degree of coherence of the LGCSM beam change periodically with propagation distance for arbitrary input power, which, however, has no influence on the propagation factor. The coherence length and the mode order mainly affect the evolution speed of the LGCSM beam in strongly nonlocal nonlinear media.
Online Model Learning of Buildings Using Stochastic Hybrid Systems Based on Gaussian Processes
Directory of Open Access Journals (Sweden)
Hamzah Abdel-Aziz
2017-01-01
Full Text Available Dynamical models are essential for model-based control methodologies which allow smart buildings to operate autonomously in an energy- and cost-efficient manner. However, buildings have complex thermal dynamics which are affected externally by the environment and internally by thermal loads such as equipment and occupancy. Moreover, the physical parameters of buildings may change over time as the buildings age or due to changes in the buildings' configuration or structure. In this paper, we introduce an online model learning methodology to identify a nonparametric dynamical model for buildings when the thermal load is latent (i.e., the thermal load cannot be measured). The proposed model is based on stochastic hybrid systems, where the discrete state describes the level of the thermal load and the continuous dynamics, represented by Gaussian processes, describe the thermal dynamics of the air temperature. We demonstrate the evaluation of the proposed model using two-zone and five-zone buildings. The data for both experiments are generated using the EnergyPlus software. Experimental results show that the proposed model estimates the thermal load level correctly and predicts the thermal behavior with good performance.
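The continuous dynamics in such a model are standard Gaussian-process regression. Below is a minimal sketch of the GP posterior with an RBF kernel on a toy hourly temperature trace; the kernel hyperparameters and the signal are illustrative assumptions, and the real model additionally conditions on a discrete thermal-load state.

```python
import numpy as np

def rbf(a, b, ls, var):
    # Squared-exponential (RBF) covariance between 1-D input sets.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * d**2 / ls**2)

def gp_predict(xt, yt, xs, ls, var, noise):
    # Standard GP regression posterior mean and marginal variance.
    K = rbf(xt, xt, ls, var) + noise * np.eye(len(xt))
    Ks = rbf(xs, xt, ls, var)
    mean = Ks @ np.linalg.solve(K, yt)
    cov = rbf(xs, xs, ls, var) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy zone-temperature trace with a daily cycle, sampled hourly.
t = np.linspace(0.0, 24.0, 25)
temp = 21.0 + 2.0 * np.sin(2.0 * np.pi * t / 24.0)
ts = np.array([6.5, 12.5, 18.5])                 # prediction times
mean, varp = gp_predict(t, temp, ts, ls=3.0, var=4.0, noise=1e-2)
```

The nonparametric nature of the GP is what lets the building model adapt online as physical parameters drift, at the cost of the cubic-in-data matrix solve shown above.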
About a solvable mean field model of a Gaussian spin glass
Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco; Tantari, Daniele
2014-04-01
In a series of papers, we have studied a modified Hopfield model of a neural network, with learned words characterized by a Gaussian distribution. The model can be represented as a bipartite spin glass, with one party described by dichotomic Ising spins, and the other party by continuous spin variables, with an a priori Gaussian distribution. By application of standard interpolation methods, we have found it useful to compare the neural network model (bipartite) from one side, with two spin glass models, each monopartite, from the other side. Of these, the first is the usual Sherrington-Kirkpatrick model, the second is a spin glass model, with continuous spins and inbuilt highly nonlinear smooth cut-off interactions. This model is an invaluable laboratory for testing all techniques which have been useful in the study of spin glasses. The purpose of this paper is to give a synthetic description of the most peculiar aspects, by stressing the necessary novelties in the treatment. In particular, it will be shown that the control of the infinite volume limit, according to the well-known Guerra-Toninelli strategy, requires in addition one to consider the involvement of the cut-off interaction in the interpolation procedure. Moreover, the control of the ergodic region, the annealed case, cannot be directly achieved through the standard application of the Borel-Cantelli lemma, but requires previous modification of the interaction. This remark could find useful application in other cases. The replica symmetric expression for the free energy can be easily reached through a suitable version of the doubly stochastic interpolation technique. However, this model shares the unique property that the fully broken replica symmetry ansatz can be explicitly calculated. A very simple sum rule connects the general expression of the fully broken free energy trial function with the replica symmetric one. The definite sign of the error term shows that the replica solution is optimal. Then
Fast pencil beam dose calculation for proton therapy using a double-Gaussian beam model
Directory of Open Access Journals (Sweden)
Joakim da Silva
2015-12-01
Full Text Available The highly conformal dose distributions produced by scanned proton pencil beams are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a pencil beam algorithm running on graphics processing units (GPUs intended specifically for online dose calculation. Here we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such pencil beam algorithm for proton therapy running on a GPU. We employ two different parametrizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of pencil beams in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included whilst prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Further, the calculation time is relatively unaffected by the parametrization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
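The double-Gaussian beam model itself is compact: the lateral dose is a weighted sum of a narrow core Gaussian and a wide halo Gaussian. The sketch below evaluates such a radial profile and the fraction of integral dose carried by the tail; the widths and halo weight are illustrative, not the paper's fitted parameters.

```python
import numpy as np

def lateral_dose(r, sigma1, sigma2, w):
    # Double-Gaussian lateral dose model: a narrow primary core plus a
    # wide, low-weight halo (e.g. from nuclear interactions). Each term
    # is a normalized 2-D Gaussian evaluated at radial distance r.
    g1 = np.exp(-r**2 / (2.0 * sigma1**2)) / (2.0 * np.pi * sigma1**2)
    g2 = np.exp(-r**2 / (2.0 * sigma2**2)) / (2.0 * np.pi * sigma2**2)
    return (1.0 - w) * g1 + w * g2

r = np.linspace(0.0, 50.0, 501)                  # radius in mm
d = lateral_dose(r, sigma1=4.0, sigma2=15.0, w=0.1)

# Radially integrated dose; the tail beyond ~3 core sigmas is dominated
# by the halo term, which is why a single Gaussian underestimates it.
dr = r[1] - r[0]
integrand = d * 2.0 * np.pi * r
total = np.sum(integrand) * dr
tail = np.sum(integrand[r > 12.0]) * dr / total
```

On a GPU, the second Gaussian adds one more exponential per voxel, which is consistent with the modest (≤16%) slowdown the paper reports.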
Blakeslee, Barbara; Cope, Davis; McCourt, Mark E
2016-03-01
The Oriented Difference of Gaussians (ODOG) model of brightness (perceived intensity) by Blakeslee and McCourt (Vision Research 39:4361-4377, 1999), which is based on linear spatial filtering by oriented receptive fields followed by contrast normalization, has proven highly successful in parsimoniously predicting the perceived intensity (brightness) of regions in complex visual stimuli such as White's effect, which had been believed to defy filter-based explanations. Unlike competing explanations such as anchoring theory, filling-in, edge-integration, or layer decomposition, the spatial filtering approach embodied by the ODOG model readily accounts for the often overlooked but ubiquitous gradient structure of induction which, while most striking in grating induction, also occurs within the test fields of classical simultaneous brightness contrast and the White stimulus. Also, because the ODOG model does not require defined regions of interest, it is generalizable to any stimulus, including natural images. The ODOG model has motivated other researchers to develop modified versions (LODOG and FLODOG), and has served as an important counterweight and proof of concept to constrain high-level theories which rely on less well understood or justified mechanisms such as unconscious inference, transparency, perceptual grouping, and layer decomposition. Here we provide a brief but comprehensive description of the ODOG model as it has been implemented since 1999, as well as working Mathematica (Wolfram, Inc.) notebooks which users can employ to generate ODOG model predictions for their own stimuli.
Energy Technology Data Exchange (ETDEWEB)
Kraabol, A.G.; Stordal, F.; Knudsen, S. [Norwegian Inst. for Air Research, Kjeller (Norway); Konopka, P. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere
1997-12-31
An expanding plume model with chemistry has been used to study the chemical conversion of NO{sub x} to reservoir species in aircraft plumes. The heterogeneous conversion of N{sub 2}O{sub 5} to HNO{sub 3}(s) has been investigated when the emissions take place during night-time. The plume from an B747 has been simulated. During a ten-hour calculation the most important reservoir species was HNO{sub 3} for emissions at noon. The heterogeneous reactions had little impact on the chemical loss of NO{sub x} to reservoir species for emissions at night. (author) 4 refs.
Ogungbemi, Kayode I.
The Optogalvanic Effect (OGE) of neon in a hollow cathode discharge lamp has been investigated both experimentally and theoretically. A tunable dye laser was tuned to several 1s_i – 2p_j neon transitions and the associated time-resolved optogalvanic (OG) spectral waveforms recorded corresponding to the ΔJ = ΔK = 0, ±1 selection rules and modeled using a semi-empirical model. Decay rate constants, amplitudes and the instrumentation time constants were recorded following a good least-squares fit (between the experimental and the theoretical OG data) using the Monte Carlo technique and utilizing both the search and random walk methods. Dominant physical processes responsible for the optogalvanic effect have been analyzed, and the corresponding populations of the laser-excited level and collisionally excited levels determined. The behavior of the optogalvanic signal waveform as a function of time, together with the decay rate constants as a function of the discharge current and the instrumentation time constant as a function of current, has been studied in detail. The decay times of the OG signals and the population redistributions were also determined. Fairly linear relationships between the decay rate constant and the discharge current, as well as between the instrumental time constant and the discharge current, have been observed. The decay times and the electron collisional rate parameters of the 1s levels involved in the OG transitions have been obtained with accuracy. The excitation temperature of the discharge for neon transitions grouped with the same 1s level has been determined and found to be fairly constant for the neon transitions studied. The experimental optogalvanic effort in the visible region of the electromagnetic spectrum has been complemented by a computation-intensive modeling investigation of rocket plumes in the microwave region. Radio frequency lines of each of the plume species identified were archived utilizing the HITRAN and other
Mobley, B. L.; Smith, S. D.; Van Norman, J. W.; Muppidi, S.; Clark, I
2016-01-01
Provide plume induced heating (radiation & convection) predictions in support of the LDSD thermal design (pre-flight SFDT-1) Predict plume induced aerodynamics in support of flight dynamics, to achieve targeted freestream conditions to test supersonic deceleration technologies (post-flight SFDT-1, pre-flight SFDT-2)
On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations
Directory of Open Access Journals (Sweden)
Yongning Wang
2013-04-01
Full Text Available This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of autocorrelation matrices of the cross-product vector of standardized residuals. This is different from the traditional approach that employs only the squared series of standardized residuals. We then study two portmanteau statistics, called Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we also investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated and the results suggest a procedure of using Q1(M) and Q2(M) jointly in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.
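Portmanteau checking of standardized residuals is easy to prototype. The sketch below implements a univariate Ljung-Box statistic applied to both the residuals and their squares, a simplified scalar analogue of the Q1(M)/Q2(M) idea rather than the paper's multivariate cross-product statistics.

```python
import numpy as np

rng = np.random.default_rng(5)

def ljung_box(z, m):
    # Ljung-Box portmanteau statistic over lags 1..m; approximately
    # chi-square with m degrees of freedom under no serial correlation.
    n = len(z)
    zc = z - z.mean()
    denom = np.dot(zc, zc)
    q = 0.0
    for k in range(1, m + 1):
        rk = np.dot(zc[k:], zc[:-k]) / denom   # lag-k autocorrelation
        q += rk**2 / (n - k)
    return n * (n + 2.0) * q

# For a correctly specified model, standardized residuals behave like
# white noise: both the levels and the squares should pass the check.
e = rng.standard_normal(1000)
q_lvl = ljung_box(e, 10)       # serial correlation in the levels
q_sq = ljung_box(e**2, 10)     # ARCH/GARCH effects in the squares
```

Checking the squares (and, in the multivariate case, the cross-products) is what catches remaining GARCH structure that the level statistic misses.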
Digital Repository Service at National Institute of Oceanography (India)
Babu, M.T.; Vethamony, P.; Suryanarayana, A; Gouveia, A
Thermal plume simulation has been carried out using a 2D model to understand the nature of spreading and rate of cooling of warm water discharge from a proposed outfall in the nearshore region off Nagapattinam, Tamil Nadu, India. Four months...
Identifying fire plumes in the Arctic with tropospheric FTIR measurements and transport models
Viatte, C.; Strong, K.; Hannigan, J.; Nussbaumer, E.; Emmons, L. K.; Conway, S.; Paton-Walsh, C.; Hartley, J.; Benmergui, J.; Lin, J.
2015-03-01
We investigate Arctic tropospheric composition using ground-based Fourier transform infrared (FTIR) solar absorption spectra, recorded at the Polar Environment Atmospheric Research Laboratory (PEARL, Eureka, Nunavut, Canada, 80°05' N, 86°42' W) and at Thule (Greenland, 76°53' N, -68°74' W) from 2008 to 2012. The target species, carbon monoxide (CO), hydrogen cyanide (HCN), ethane (C2H6), acetylene (C2H2), formic acid (HCOOH), and formaldehyde (H2CO) are emitted by biomass burning and can be transported from mid-latitudes to the Arctic. By detecting simultaneous enhancements of three biomass burning tracers (HCN, CO, and C2H6), ten and eight fire events are identified at Eureka and Thule, respectively, within the 5-year FTIR time series. Analyses of Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model back-trajectories coupled with Moderate Resolution Imaging Spectroradiometer (MODIS) fire hotspot data, Stochastic Time-Inverted Lagrangian Transport (STILT) model footprints, and Ozone Monitoring Instrument (OMI) UV aerosol index maps, are used to attribute burning source regions and travel time durations of the plumes. By taking into account the effect of aging of the smoke plumes, measured FTIR enhancement ratios were corrected to obtain emission ratios and equivalent emission factors. The means of emission factors for extratropical forest estimated with the two FTIR data sets are 0.40 ± 0.21 g kg-1 for HCN, 1.24 ± 0.71 g kg-1 for C2H6, 0.34 ± 0.21 g kg-1 for C2H2, and 2.92 ± 1.30 g kg-1 for HCOOH. The emission factor for CH3OH estimated at Eureka is 3.44 ± 1.68 g kg-1. To improve our knowledge concerning the dynamical and chemical processes associated with Arctic pollution from fires, the two sets of FTIR measurements were compared to the Model for OZone And Related chemical Tracers, version 4 (MOZART-4). Seasonal cycles and day-to-day variabilities were compared to assess the ability of the model to reproduce emissions from fires and
Identifying fire plumes in the Arctic with tropospheric FTIR measurements and transport models
Directory of Open Access Journals (Sweden)
C. Viatte
2014-10-01
Full Text Available We investigate Arctic tropospheric composition using ground-based Fourier transform infrared (FTIR) solar absorption spectra, recorded at the Polar Environment Atmospheric Research Laboratory (PEARL, Eureka, Nunavut, Canada, 80°5' N, 86°42' W) and at Thule (Greenland, 76°53' N, −68°74' W) from 2008 to 2012. The target species, carbon monoxide (CO), hydrogen cyanide (HCN), ethane (C2H6), acetylene (C2H2), formic acid (HCOOH), and formaldehyde (H2CO), are emitted by biomass burning and can be transported from mid-latitudes to the Arctic. By detecting simultaneous enhancements of three biomass burning tracers (HCN, CO, and C2H6), ten and eight fire events are identified at Eureka and Thule, respectively, within the five-year FTIR timeseries. Analyses of Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) back-trajectories coupled with Moderate Resolution Imaging Spectroradiometer (MODIS) fire hot spot data, Stochastic Time-Inverted Lagrangian Transport model (STILT) footprints, and Ozone Monitoring Instrument (OMI) UV aerosol index maps are used to attribute burning source regions and travel time durations of the plumes. By taking into account the effect of aging of the smoke plumes, measured FTIR enhancement ratios were corrected to obtain emission ratios and equivalent emission factors. The means of emission factors for extratropical forest estimated with the two FTIR datasets are 0.39 ± 0.15 g kg−1 for HCN, 1.23 ± 0.49 g kg−1 for C2H6, 0.34 ± 0.16 g kg−1 for C2H2, 2.13 ± 0.92 g kg−1 for HCOOH, and 3.14 ± 1.28 g kg−1 for CH3OH. To improve our knowledge concerning the dynamical and chemical processes associated with Arctic pollution from fires, the two sets of FTIR measurements were compared to the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4). Seasonal cycles and day-to-day variabilities were compared to assess the ability of the model to reproduce emissions from fires and their transport. Good
Dynamic Modelling of Aquifer Level Using Space-Time Kriging and Sequential Gaussian Simulation
Varouchakis, Emmanouil A.; Hristopulos, Dionisis T.
2016-04-01
Geostatistical models are widely used in water resources management projects to represent and predict the spatial variability of aquifer levels. In addition, they can be applied as surrogates to numerical hydrological models if the hydrogeological data needed to calibrate the latter are not available. For space-time data, spatiotemporal geostatistical approaches can model the aquifer level variability by incorporating complex space-time correlations. A major advantage of such models is that they can improve the reliability of predictions compared to purely spatial or temporal models in areas with limited spatial and temporal data availability. The identification and incorporation of a spatiotemporal trend model can further increase the accuracy of groundwater level predictions. Our goal is to derive a geostatistical model of dynamic aquifer level changes in a sparsely gauged basin on the island of Crete (Greece). The available data consist of bi-annual (dry and wet hydrological period) groundwater level measurements at 11 monitoring locations for the time period 1981 to 2010. We identify a spatiotemporal trend function that follows the overall drop of the aquifer level over the study period. The correlation of the residuals is modeled using a non-separable space-time variogram function based on the Spartan covariance family. The space-time Residual Kriging (STRK) method is then applied to combine the estimated trend and the residuals into dynamic predictions of groundwater level. Sequential Gaussian Simulation is also employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. This stochastic modelling approach produces multiple realizations, ranks the prediction results on the basis of specified criteria, and captures the range of the uncertainty. The model projections indicate that by 2032 part of the basin will be under serious threat, as the aquifer level will approach the sea-level boundary.
Exponential Gaussian approach for spectral modeling: The EGO algorithm I. Band saturation
Pompilio, Loredana; Pedrazzi, Giuseppe; Sgavetti, Maria; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.
2009-06-01
Curve fitting techniques are a widespread approach to spectral modeling in the VNIR range [Burns, R.G., 1970. Am. Mineral. 55, 1608-1632; Singer, R.B., 1981. J. Geophys. Res. 86, 7967-7982; Roush, T.L., Singer, R.B., 1986. J. Geophys. Res. 91, 10301-10308; Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. They have been successfully used to model reflectance spectra of powdered minerals and mixtures, natural rock samples and meteorites, and unknown remote spectra of the Moon, Mars and asteroids. Here, we test a new decomposition algorithm to model VNIR reflectance spectra and call it Exponential Gaussian Optimization (EGO). The EGO algorithm is derived from and complementary to the MGM of Sunshine et al. [Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. The general EGO equation has been especially designed to account for absorption bands affected by saturation and asymmetry. Here we present a special case of EGO and apply it to the modeling of saturated electronic transition bands. Our main goals are: (1) to recognize and model band saturation in reflectance spectra; (2) to develop a basic approach for decomposition of rock spectra, where effects due to saturation are most prevalent; (3) to reduce the uncertainty related to quantitative estimation when band saturation is occurring. In order to accomplish these objectives, we simulate flat bands starting from pure Gaussians and test the EGO algorithm on those simulated spectra first. Then we test the EGO algorithm on a number of measurements acquired on powdered pyroxenes having different compositions and average grain size and binary mixtures of orthopyroxenes with barium sulfate. The main results arising from this study are: (1) EGO model is able to numerically account for the occurrence of saturation effects on reflectance spectra of powdered minerals and mixtures; (2) the systematic dilution of a strong absorber using a bright neutral material is not ...
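Band saturation — the flattening of a strong absorption's bottom that pure-Gaussian (MGM-style) fits cannot reproduce — can be illustrated with a Beer-Lambert-like exponential of a Gaussian absorption profile. This shape is an illustrative stand-in, not the published EGO equation; all parameter values are made up.

```python
import math

def gaussian(x, center, width):
    """Unit-height Gaussian profile."""
    return math.exp(-0.5 * ((x - center) / width) ** 2)

def band(x, center, width, strength, continuum=1.0):
    """Reflectance with an exponential-of-Gaussian absorption band.
    As strength grows, the band bottom flattens (saturates) instead of
    deepening in proportion, which a single Gaussian cannot capture."""
    return continuum * math.exp(-strength * gaussian(x, center, width))

def band_depth(center, width, strength):
    """Depth at band center relative to a unit continuum."""
    return 1.0 - band(center, center, width, strength)
```

Doubling the band strength from 1 to 2 raises the depth only from about 0.63 to 0.87, and the depth half a width away from the center approaches the central depth for strong bands — the flat-bottomed signature of saturation.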
ASHEE: a compressible, equilibrium-Eulerian model for volcanic ash plumes
Cerminara, Matteo; Berselli, Luigi Carlo
2015-01-01
A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations for a mixture of gases and solid particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model, valid for low concentration regimes and small particle Stokes numbers ($St < 0.2$). When $St < 0.001$ the model reduces to the dusty-gas one. The new model is significantly faster than the Eulerian model while retaining the capability to describe gas-particle non-equilibrium. Direct numerical simulations accurately reproduce the dynamics of isotropic turbulence in the subsonic regime. For gas-particle mixtures, it describes the main features of density fluctuations and the preferential concentration of particles by turbulence, verifying the model reliability and suitability for the simulation of high-Reynolds number and high-temperature ...
Strom, K.
2014-12-01
Rivers are the primary conduits for delivery of sediments and organic matter to the sea. This is visually evident when sediment-laden rivers enter coastal waters, producing sediment plumes. The sediment and organic material from such plumes may deposit and be preserved in estuarine and deltaic zones, or may be carried and mixed by ocean currents to deposit elsewhere on the shelf. Both of these outcomes are governed in large part by depositional mechanics that are dependent, at least in part, on the settling velocity of the sediment. This is especially true in modeling, where the settling velocity has been noted to be the primary controlling parameter for accurate prediction of depositional patterns from river plumes. Settling velocity is largely controlled by grain size, shape, and density, which for mud can be quite dynamic due to the process of flocculation. Flocculation yields mud aggregates of variable size and density that may be dependent on the turbulent energy and salt levels under which they were formed. Since turbulent energy and salinity both change in river mouth jet/plumes, the dynamic flocculation process may exert significant control on the eventual distribution of sediment in these zones. In this study, two different approaches to floc modeling are integrated into a steady-state river mouth plume integral model. The two floc models are (1) a version of the Winterwerp (1998) model, and (2) a condition-dependent equilibrium floc size model similar to what is typically used in large-scale 2D and 3D hydraulic and sediment transport simulations. Inclusion of these two models into the buoyant river-mouth plume equations allows for the settling velocity of the mud to be functionally tied to the turbulent shear rate and suspended sediment concentration. The concentration and deposition rates are then compared through the plume both without and with the inclusion of the two different floc treatments. The role that entrainment of ambient fluid plays in the
Lin, Guoxing
2016-01-01
Pulsed field gradient (PFG) has been increasingly employed to study anomalous diffusions in Nuclear Magnetic Resonance (NMR) and Magnetic Resonance Imaging (MRI). However, the analysis of PFG anomalous diffusion is complicated. In this paper, a fractal derivative model based modified Gaussian phase distribution method is proposed to describe PFG anomalous diffusion. By using the phase distribution obtained from the effective phase shift diffusion method based on fractal derivatives, and employing some of the traditional Gaussian phase distribution approximation techniques, a general signal attenuation expression for free fractional diffusion is derived. This expression describes a stretched exponential function based attenuation, which is distinct from both the exponential attenuation for normal diffusion obtained from conventional Gaussian phase distribution approximation, and the Mittag-Leffler function based attenuation for anomalous diffusion obtained from fractional derivative. The obtained signal attenu...
Directory of Open Access Journals (Sweden)
Giannina Poletto
2015-12-01
Full Text Available Polar plumes are thin long ray-like structures that project beyond the limb of the Sun polar regions, maintaining their identity over distances of several solar radii. Plumes have been first observed in white-light (WL) images of the Sun, but, with the advent of the space era, they have been identified also in X-ray and UV wavelengths (XUV) and, possibly, even in in situ data. This review traces the history of plumes, from the time they have been first imaged, to the complex means by which nowadays we attempt to reconstruct their 3-D structure. Spectroscopic techniques have also allowed us to infer the physical parameters of plumes and estimate their electron and kinetic temperatures and their densities. However, perhaps the most interesting problem we need to solve is the role they play in solar wind origin and acceleration: Does the solar wind emanate from plumes or from the ambient coronal hole wherein they are embedded? Do plumes have a role in solar wind acceleration and mass loading? Answers to these questions are still somewhat ambiguous and theoretical modeling does not provide definite answers either. Recent data, with an unprecedented high spatial and temporal resolution, provide new information on the fine structure of plumes, their temporal evolution and relationship with other transient phenomena that may shed further light on these elusive features.
Model fitting of kink waves in the solar atmosphere: Gaussian damping and time-dependence
Morton, R. J.; Mooroogen, K.
2016-09-01
Aims: Observations of the solar atmosphere have shown that magnetohydrodynamic waves are ubiquitous throughout. Improvements in instrumentation and the techniques used for measurement of the waves now enable subtleties of competing theoretical models to be compared with the observed wave behaviour. Some studies have already begun to undertake this process. However, the techniques employed for model comparison have generally been unsuitable and can lead to erroneous conclusions about the best model. The aim here is to introduce some robust statistical techniques for model comparison to the solar waves community, drawing on the experiences from other areas of astrophysics. In the process, we also aim to investigate the physics of coronal loop oscillations. Methods: The methodology exploits least-squares fitting to compare models to observational data. We demonstrate that the residuals between the model and observations contain significant information about the ability of the model to describe the observations, and show how they can be assessed using various statistical tests. In particular we discuss the Kolmogorov-Smirnov one- and two-sample tests, as well as the runs test. We also highlight the importance of including any observational trend line in the model-fitting process. Results: To demonstrate the methodology, an observation of an oscillating coronal loop undergoing standing kink motion is used. The model comparison techniques provide evidence that a Gaussian damping profile provides a better description of the observed wave attenuation than the often used exponential profile. This supports previous analysis from Pascoe et al. (2016, A&A, 585, L6). Further, we use the model comparison to provide evidence of time-dependent wave properties of a kink oscillation, attributing the behaviour to the thermodynamic evolution of the local plasma.
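The core of the least-squares model comparison can be sketched in a few lines: fit both a Gaussian and an exponential damping envelope to the same amplitude series and compare the residual sums of squares. The synthetic data and the grid search below are illustrative only; the paper's analysis uses real loop oscillations and formal residual tests.

```python
import math

def gauss_env(t, tau):
    """Gaussian damping envelope exp(-(t/tau)^2)."""
    return math.exp(-(t / tau) ** 2)

def exp_env(t, tau):
    """Exponential damping envelope exp(-t/tau)."""
    return math.exp(-t / tau)

def best_sse(model, taus, times, data):
    """Smallest sum of squared residuals over a grid of damping times."""
    return min(sum((model(t, tau) - d) ** 2 for t, d in zip(times, data))
               for tau in taus)

# Synthetic amplitudes with a Gaussian damping profile (tau = 2.0)
times = [0.1 * k for k in range(40)]
data = [gauss_env(t, 2.0) for t in times]
taus = [0.5 + 0.1 * k for k in range(40)]

sse_gauss = best_sse(gauss_env, taus, times, data)
sse_exp = best_sse(exp_env, taus, times, data)
```

In a full analysis the residuals of each fit would additionally be checked for structure (e.g. with a runs test), not just compared by total size.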
Measuring the Mass of Kepler-78b Using a Gaussian Process Model
Grunblatt, Samuel K; Haywood, Raphaëlle D
2015-01-01
Kepler-78b is a transiting planet that is 1.2 times the size of Earth and orbits a young K dwarf every 8 hours. Two teams independently reported the mass of Kepler-78b based on radial velocity measurements using the HIRES and HARPS-N spectrographs. We modeled these datasets using a nonparametric Gaussian process (GP) regression. We considered three kernel functions for our GP models to account for the quasi-periodic activity from the young host star. All three kernel functions gave consistent Doppler amplitudes. Based on a likelihood analysis, we selected a quasi-periodic kernel that gives a Doppler amplitude of 1.86 $\pm$ 0.25 m s$^{-1}$. The mass of 1.87 $^{+0.27}_{-0.26}$ M$_{\oplus}$ is more precise than can be achieved with either data set alone (a 2.5-$\sigma$ improvement on the HIRES data). Reanalyzing only the HIRES data with a GP model, we reach a Doppler signal uncertainty equivalent with the previous study using slightly more than half of the HIRES measurements. Our GP model is the first analysis of the ...
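A commonly used quasi-periodic GP kernel for stellar activity multiplies a periodic term (rotating active regions) by a squared-exponential decay (spot evolution). The form and parameter names below are the generic ones from the GP literature, not necessarily the exact parametrization used in this paper; the values are illustrative.

```python
import math

def quasi_periodic_kernel(t1, t2, amp, per, ell_p, ell_e):
    """Quasi-periodic covariance between observations at times t1 and t2:
    amp^2 * exp(-sin^2(pi*dt/per)/ell_p^2 - dt^2/(2*ell_e^2))."""
    dt = t1 - t2
    periodic = math.sin(math.pi * dt / per) ** 2 / ell_p ** 2
    decay = dt ** 2 / (2.0 * ell_e ** 2)
    return amp ** 2 * math.exp(-periodic - decay)

def cov_matrix(times, **kw):
    """Dense covariance matrix over a set of observation times."""
    return [[quasi_periodic_kernel(a, b, **kw) for b in times] for a in times]
```

In a full RV analysis this matrix (plus white-noise terms on the diagonal) enters the GP likelihood, and the Keplerian Doppler signal is fitted simultaneously with the kernel hyperparameters.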
A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems
Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad
2015-02-01
As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
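The idea of the GMM-based cost — fit a mixture to samples of the real attractor, then score candidate time series by their likelihood under that mixture — can be sketched in one dimension. This is a minimal deterministic EM implementation with made-up data, not the multi-dimensional state-space fit of the paper.

```python
import math

def _normpdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def gmm_fit(xs, iters=100):
    """Plain EM for a two-component 1-D Gaussian mixture.
    Initialising the means at the data extremes keeps the sketch deterministic."""
    mus = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in xs:                      # E-step: responsibilities
            p = [w[j] * _normpdf(x, mus[j], var[j]) for j in range(2)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        for j in range(2):                # M-step: re-estimate parameters
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mus[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
    return w, mus, var

def avg_loglik(xs, w, mus, var):
    """Per-sample log-likelihood under the fitted GMM: the similarity score
    used as a cost when comparing model output to the observed attractor."""
    return sum(math.log(sum(wj * _normpdf(x, m, v)
                            for wj, m, v in zip(w, mus, var)))
               for x in xs) / len(xs)
```

A candidate parameter set whose simulated trajectory resembles the observed attractor scores a higher average log-likelihood, so the cost function avoids point-by-point comparison of chaotic trajectories.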
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean we also assume that the covariance matrix depends on covariates and random effects. This allows one to explore whether the covariance structure depends on the values of the higher levels and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on not-recorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
Short-term traffic safety forecasting using Gaussian mixture model and Kalman filter
Institute of Scientific and Technical Information of China (English)
Sheng JIN; Dian-hai WANG; Cheng XU; Dong-fang MA
2013-01-01
In this paper, a prediction model is developed that combines a Gaussian mixture model (GMM) and a Kalman filter for online forecasting of traffic safety on expressways. Raw time-to-collision (TTC) samples are divided into two categories: those representing vehicles in risky situations and those in safe situations. Then, the GMM is used to model the bimodal distribution of the TTC samples, and the maximum likelihood (ML) estimation parameters of the TTC distribution are obtained using the expectation-maximization (EM) algorithm. We propose a new traffic safety indicator, named the proportion of exposure to traffic conflicts (PETTC), for assessing the risk and predicting the safety of expressway traffic. A Kalman filter is applied to forecast the short-term safety indicator, PETTC, and solves the online safety prediction problem. A dataset collected from four different expressway locations is used for performance estimation. The test results demonstrate the precision and robustness of the prediction model under different traffic conditions and using different datasets. These results could help decision-makers to improve their online traffic safety forecasting and enable the optimal operation of expressway traffic management systems.
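The forecasting half of such a scheme — a Kalman filter tracking a slowly varying safety indicator from noisy observations — reduces in the scalar case to a few lines. The random-walk state model and the variances below are illustrative assumptions, not the paper's calibrated values.

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: state x_k = x_{k-1} + w_k (var q),
    observation z_k = x_k + v_k (var r). Returns the filtered estimates."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                      # predict: variance grows by process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the innovation
        p = (1.0 - k) * p              # posterior variance
        out.append(x)
    return out
```

The one-step-ahead forecast of the indicator is simply the latest filtered state, since the random-walk model predicts no drift.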
Jourdain, L.; Roberts, T. J.; Pirre, M.; Josse, B.
2015-01-01
Ambrym volcano (Vanuatu, Southwest Pacific) is one of the largest sources of continuous volcanic emissions worldwide. As well as releasing SO2 that is oxidized to sulfate, volcanic plumes in the troposphere are shown to undergo reactive halogen chemistry whose atmospheric impacts have been little explored to date. Here, two-way nested simulations were performed with the regional scale model CCATT-BRAMS to test our understanding of the volcano plume chemical...
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Christensen, Bent Jesper
This paper suggests a new and easy approach to estimate linear and non-linear dynamic term structure models with latent factors. We impose no distributional assumptions on the factors and they may therefore be non-Gaussian. The novelty of our approach is to use many observables (yields or bonds p...
Park, Jun-Koo; Jernigan, Robert; Wu, Zhijun
2013-01-01
We investigate several approaches to coarse-grained normal mode analysis on protein residue-level structural fluctuations by choosing different ways of representing the residues and the forces among them. Single-atom representations using the backbone atoms C(α), C, N, and C(β) are considered. Combinations of some of these atoms are also tested. The force constants between the representative atoms are extracted from the Hessian matrix of the energy function and serve as the force constants between the corresponding residues. The residue mean-square-fluctuations and their correlations with the experimental B-factors are calculated for a large set of proteins. The results are compared with all-atom normal mode analysis and the residue-level Gaussian Network Model. The coarse-grained methods perform more efficiently than all-atom normal mode analysis, while their B-factor correlations are also higher. Their B-factor correlations are comparable with those estimated by the Gaussian Network Model and in many cases better. The extracted force constants are surveyed for different pairs of residues with different numbers of separation residues in sequence. The statistical averages are used to build a refined Gaussian Network Model, which is able to predict residue-level structural fluctuations significantly better than the conventional Gaussian Network Model in many test cases.
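The starting point of any Gaussian Network Model calculation is the Kirchhoff (connectivity) matrix built from representative-atom coordinates: -1 for residue pairs within a cutoff, with diagonal entries making each row sum to zero. The toy coordinates and the 7 Å cutoff below are illustrative; residue fluctuations are then proportional to the diagonal of the pseudoinverse of this matrix.

```python
def kirchhoff(coords, cutoff=7.0):
    """GNM Kirchhoff matrix from 3-D representative-atom coordinates."""
    n = len(coords)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if d2 <= cutoff ** 2:
                g[i][j] = g[j][i] = -1.0   # contact within cutoff
    for i in range(n):
        # diagonal = residue connectivity, so each row sums to zero
        g[i][i] = -sum(g[i][j] for j in range(n) if j != i)
    return g
```

In the refined GNM of the paper, the uniform -1 contacts would be replaced by sequence-separation-dependent force constants extracted from the Hessian.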
Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain
Directory of Open Access Journals (Sweden)
Jian Jia
2015-01-01
The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.
Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control
Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.
1997-01-01
One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.
Directory of Open Access Journals (Sweden)
Uttam Mande
2012-06-01
Much research has been directed at mapping criminals to crimes, yet crime rates continue to rise, in part because of the gap between the technologies available and their use in investigation. This has created scope for new crime-investigation methodologies based on data mining, image processing, forensics, and social mining. This paper presents a model that uses a new methodology for mapping a criminal to a crime. The model clusters criminal data by type of crime. When a crime occurs, the criminal is mapped on the basis of features specified by eyewitnesses. We propose a novel methodology that uses a Generalized Gaussian Mixture Model to match the eyewitness-specified features against the features of criminals who have committed the same type of crime; if no criminal is matched, the suspect table is checked and reports are generated.
Costa, Antonio; Folch, Arnau; Macedonio, Giovanni
2010-09-01
We develop a model to describe ash aggregates in a volcanic plume. The model is based on a solution of the classical Smoluchowski equation, obtained by introducing a similarity variable and a fractal relationship for the number of primary particles in an aggregate. The considered collision frequency function accounts for different mechanisms of aggregation, such as Brownian motion, ambient fluid shear, and differential sedimentation. Although model formulation is general, here only sticking efficiency related to the presence of water is considered. However, the different binding effect of liquid water and ice is discerned. The proposed approach represents a first compromise between the full description of the aggregation process and the need to decrease the computational time necessary for solving the full Smoluchowski equation. We also perform a parametric study on the main model parameters and estimate coagulation kernels and timescales of the aggregation process under simplified conditions of interest in volcanology. Further analyses and applications to real eruptions are presented in the companion paper by Folch et al.
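The discrete Smoluchowski equation that underlies such aggregation models can be stepped explicitly for a small size range. The constant collision kernel and Euler time-stepping below are simplifications for illustration; the paper's kernel combines Brownian motion, shear, and differential sedimentation, and its fractal closure avoids tracking every size class.

```python
def smoluchowski_step(n, beta, dt):
    """One explicit Euler step of the discrete Smoluchowski equation with a
    constant collision kernel beta; n[k] is the concentration of (k+1)-mers.
    Aggregates growing past the largest tracked size are dropped."""
    kmax = len(n)
    dn = [0.0] * kmax
    for i in range(kmax):
        for j in range(kmax):
            loss = beta * n[i] * n[j]
            dn[i] -= loss                  # i-mers consumed by any collision
            if i + j + 1 < kmax:           # an (i+1)+(j+1)-mer is formed
                dn[i + j + 1] += 0.5 * loss  # half: ordered pairs counted twice
    return [max(x + dt * d, 0.0) for x, d in zip(n, dn)]
```

Starting from monomers only, total particle mass is conserved (up to truncation at the largest size) while the number concentration decays, the basic balance any aggregation scheme must respect.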
A Gaussian mixture model for definition of lung tumor volumes in positron emission tomography.
Aristophanous, Michalis; Penney, Bill C; Martel, Mary K; Pelizzari, Charles A
2007-11-01
The increased interest in 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in radiation treatment planning in the past five years necessitated the independent and accurate segmentation of gross tumor volume (GTV) from FDG-PET scans. In some studies the radiation oncologist contours the GTV based on a computed tomography scan, while incorporating pertinent data from the PET images. Alternatively, a simple threshold, typically 40% of the maximum intensity, has been employed to differentiate tumor from normal tissue, while other researchers have developed algorithms to aid the PET based GTV definition. None of these methods, however, results in reliable PET tumor segmentation that can be used for more sophisticated treatment plans. For this reason, we developed a Gaussian mixture model (GMM) based segmentation technique on selected PET tumor regions from non-small cell lung cancer patients. The purpose of this study was to investigate the feasibility of using a GMM-based tumor volume definition in a robust, reliable and reproducible way. A GMM relies on the idea that any distribution, in our case a distribution of image intensities, can be expressed as a mixture of Gaussian densities representing different classes. According to our implementation, each class belongs to one of three regions in the image; the background (B), the uncertain (U) and the target (T), and from these regions we can obtain the tumor volume. User interaction in the implementation is required, but is limited to the initialization of the model parameters and the selection of an "analysis region" to which the modeling is restricted. The segmentation was developed on three and tested on another four clinical cases to ensure robustness against differences observed in the clinic. It also compared favorably with thresholding at 40% of the maximum intensity and a threshold determination function based on tumor to background image intensities proposed in a recent paper. The parts of the
Modeling the evolution of aerosol particles in a ship plume using PartMC-MOSAIC
Tian, J.; Riemer, N.; West, M.; Pfaffenberger, L.; Schlager, H.; Petzold, A.
2014-06-01
This study investigates the evolution of ship-emitted aerosol particles using the stochastic particle-resolved model PartMC-MOSAIC (Particle Monte Carlo model-Model for Simulating Aerosol Interactions and Chemistry). Comparisons of our results with observations from the QUANTIFY (Quantifying the Climate Impact of Global and European Transport Systems) study in 2007 in the English Channel and the Gulf of Biscay showed that the model was able to reproduce the observed evolution of total number concentration and the vanishing of the nucleation mode consisting of sulfate particles. Further process analysis revealed that during the first hour after emission, dilution reduced the total number concentration by four orders of magnitude, while coagulation reduced it by an additional order of magnitude. Neglecting coagulation resulted in an overprediction of more than one order of magnitude in the number concentration of particles smaller than 40 nm at a plume age of 100 s. Coagulation also significantly altered the mixing state of the particles, leading to a continuum of internal mixtures of sulfate and black carbon. The impact on cloud condensation nuclei (CCN) concentrations depended on the supersaturation threshold S at which CCN activity was evaluated. For the base case conditions, characterized by a low formation rate of secondary aerosol species, neglecting coagulation, but simulating condensation, led to an underestimation of CCN concentrations of about 37% for S = 0.3% at the end of the 14-h simulation. In contrast, for supersaturations higher than 0.7%, neglecting coagulation resulted in an overestimation of CCN concentration, about 75% for S = 1%. For S lower than 0.2% the differences between simulations including coagulation and neglecting coagulation were negligible. Neglecting condensation, but simulating coagulation did not impact the CCN concentrations below 0.2% and resulted in an underestimation of CCN concentrations for larger supersaturations, e.g., 18
Energy Technology Data Exchange (ETDEWEB)
Persiantseva, N.V.; Popovitcheva, O.B.; Rakhimova, T.V. [Moscow State Univ. (Russian Federation)]
1997-12-31
Heterogeneous chemistry of HCl, as the main reservoir of chlorine-containing gases, has been considered after plume cooling and ice particle formation. The HCl, HNO{sub 3}, N{sub 2}O{sub 5} uptake efficiencies by frozen water were obtained in a Knudsen-cell flow reactor at subsonic cruise conditions. The formation of ice particles in the plume of subsonic aircraft is simulated to describe the kinetics of gaseous HCl loss due to heterogeneous processes. It is shown that the HCl uptake by frozen water particles may play an important role in the gaseous HCl depletion in the aircraft plume. (author) 14 refs.
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model yields a CV R(2) of 0.81, reflecting higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder.
Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David; Nielsen, Anders; Mills, Joanna
2015-10-01
State-space models (SSM) are often used for analyzing complex ecological processes that are not observed directly, such as marine animal movement. When outliers are present in the measurements, special care is needed in the analysis to obtain reliable location and process estimates. Here we recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous-time t-distributed measurement errors for error-prone data is more robust to outliers and improves the location estimation compared to using discretized-time t-distributed errors (implemented with a Gibbs sampler) or using continuous-time Gaussian errors (as with the Kalman filter). Using TMB, we are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack.
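Why t-distributed measurement errors are more robust to outliers than Gaussian ones can be seen in the simplest static case: under a t likelihood, iteratively reweighted least squares automatically down-weights discordant observations. This toy estimator (fixed robust scale, made-up data) only illustrates the principle; it is not the TMB/Laplace-approximation machinery of the paper.

```python
def t_location(xs, nu=3.0, iters=50):
    """Location estimate under i.i.d. Student-t errors via iteratively
    reweighted least squares. A MAD-based scale keeps the outlier from
    inflating the weights; nu is the assumed degrees of freedom."""
    med = sorted(xs)[len(xs) // 2]
    mad = sorted(abs(x - med) for x in xs)[len(xs) // 2]
    s2 = max((1.4826 * mad) ** 2, 1e-12)   # robust scale estimate
    mu = med
    for _ in range(iters):
        # t-likelihood weights: small for observations far from mu
        w = [(nu + 1.0) / (nu + (x - mu) ** 2 / s2) for x in xs]
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return mu
```

With one gross outlier the sample mean is dragged far from the bulk of the data, while the t-based estimate stays near the true location — the same mechanism that keeps t-error SSM tracks from chasing bad Argos fixes.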
A Closure Model with Plumes II. Application to the stochastic excitation of stellar p modes
Belkacem, K; Goupil, M J; Kupka, F; Baudin, F
2006-01-01
Amplitudes of stellar p modes result from a balance between excitation and damping processes taking place in the upper-most part of convective zones in solar-type stars and can therefore be used as a seismic diagnostic for the physical properties of these external layers. Our goal is to improve the theoretical modelling of stochastic excitation of p modes by turbulent convection. With the help of the Closure Model with Plume (CMP) developed in a companion paper, we refine the theoretical description of the excitation by the turbulent Reynolds stress term. The CMP is generalized for two-point correlation products so as to apply it to the formalism developed by Samadi & Goupil (2001). The present model gives rise to a frequency dependence of the power supplied into solar p modes which is in agreement with GOLF observations for intermediate and high frequencies. Despite an increase of the Reynolds stress term contribution due to our improved description, an additional source of excitation, identified as the ...
Diagnosing non-Gaussianity of forecast and analysis errors in a convective scale model
Directory of Open Access Journals (Sweden)
R. Legrand
2015-07-01
Non-Gaussianity is quantified with the K2-statistic from the D'Agostino test, which is related to the sum of the squares of univariate skewness and kurtosis. Results confirm that specific humidity is the least Gaussian variable according to that measure, and also that non-Gaussianity is generally more pronounced in the boundary layer and in cloudy areas. The mass control variables used in our data assimilation, namely vorticity and divergence, also show distinct non-Gaussian behavior. It is shown that while non-Gaussianity increases with forecast lead time, it is efficiently reduced by the data assimilation step especially in areas well covered by observations. Our findings may have implication for the choice of the control variables.
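The two ingredients of the K2 diagnostic — sample skewness and excess kurtosis — are plain moment ratios. The full D'Agostino test additionally applies finite-sample normalizing transforms before squaring and summing; the sketch below shows only the raw moments.

```python
def skew_kurt(xs):
    """Sample skewness and excess kurtosis (both zero for a Gaussian)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n   # central moments
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0
```

Applied field by field to forecast or analysis errors, large values of either moment flag the non-Gaussian variables and regions discussed above.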
Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data.
Gurka, Matthew J; Coffey, Christopher S; Muller, Keith E
2007-09-30
An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate.
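The internal pilot idea, re-estimating only the variance at the interim look and recomputing the sample size from it, can be sketched as follows (a toy two-group z-test version under normal-approximation assumptions, not the exact small-sample mixed-model method of the paper; all names and defaults are illustrative):

```python
import math

def two_group_n(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Per-group sample size for a two-sample z-test (normal approximation,
    two-sided alpha = 0.05 and power = 0.80 by default)."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

def internal_pilot(interim_sample, delta, n_min=10, n_max=500):
    """Adjust the final per-group sample size using only the interim variance
    estimate; no treatment effect is examined (no interim data analysis)."""
    n = len(interim_sample)
    mean = sum(interim_sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in interim_sample) / (n - 1))
    return max(n_min, min(n_max, two_group_n(s, delta)))

# If the interim variance is larger than planned, the sample size grows.
print(internal_pilot([1.0, 2.0, 3.0, 4.0, 5.0], delta=1.0))  # 40
```

The clamping to [n_min, n_max] mirrors the practical goal stated above: large enough for power, not so large as to waste money and time.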
iGNM 2.0: the Gaussian network model database for biomolecular structural dynamics.
Li, Hongchun; Chang, Yuan-Yu; Yang, Lee-Wei; Bahar, Ivet
2016-01-04
Gaussian network model (GNM) is a simple yet powerful model for investigating the dynamics of proteins and their complexes. GNM analysis became a broadly used method for assessing the conformational dynamics of biomolecular structures with the development of a user-friendly interface and database, iGNM, in 2005. We present here an updated version, iGNM 2.0 http://gnmdb.csb.pitt.edu/, which covers more than 95% of the structures currently available in the Protein Data Bank (PDB). Advanced search and visualization capabilities, both 2D and 3D, permit users to retrieve information on inter-residue and inter-domain cross-correlations, cooperative modes of motion, the location of hinge sites and energy localization spots. The ability of iGNM 2.0 to provide structural dynamics data on the large majority of PDB structures and, in particular, on their biological assemblies makes it a useful resource for establishing the bridge between structure, dynamics and function.
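The core object of a GNM analysis is the Kirchhoff (connectivity) matrix built from inter-residue distances; a minimal sketch, assuming C-alpha coordinates and the conventional 7.3 angstrom cutoff (a common choice in the GNM literature, not a value taken from this abstract):

```python
def kirchhoff(coords, cutoff=7.3):
    """GNM Kirchhoff (connectivity) matrix: -1 for residue pairs within
    the cutoff distance (angstroms), diagonal = number of contacts."""
    n = len(coords)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if d2 <= cutoff ** 2:
                g[i][j] = g[j][i] = -1.0
                g[i][i] += 1.0
                g[j][j] += 1.0
    return g

# Three toy "residues": the first two are in contact, the third is far away.
g = kirchhoff([(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (20.0, 0.0, 0.0)])
print(g[0])  # [1.0, -1.0, 0.0]; every row sums to zero
```

The low-frequency eigenmodes of this matrix (eigendecomposition omitted here) are what give the cross-correlations, cooperative motions and hinge sites that iGNM reports.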
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
Savanevych, V E; Sokovikova, N S; Bezkrovny, M M; Vavilova, I B; Ivashchenko, Yu M; Elenin, L V; Khlamov, S V; Movsesian, Ia S; Dashkova, A M; Pogorelov, A V
2015-01-01
We describe a new iteration method to estimate asteroid coordinates, which is based on the subpixel Gaussian model of a discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, being more flexible in adapting to any form of the object image, has a high measurement accuracy along with a low computational complexity, due to a maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method and the Levenberg-Marquardt algorithm for the minimisation of the quadratic form. Since 2010, the method has been tested as the basis of our CoLiTec (Collection Light Technology) software, which has been installed at several observatories of the world with the ai...
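The idea of fitting a pixel-integrated Gaussian to a discrete image by maximum likelihood rather than least squares can be illustrated in one dimension (a toy grid-search version with a Poisson noise model; the real method estimates more parameters and uses a proper optimiser):

```python
import math

def pixel_model(center, sigma, npix, flux):
    """Expected counts per pixel for a 1-D Gaussian source,
    integrated over each pixel via the error function."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - center) / (sigma * math.sqrt(2.0))))
    return [flux * (cdf(i + 1) - cdf(i)) for i in range(npix)]

def ml_center(counts, sigma, flux, step=0.001):
    """Grid-search maximum-likelihood estimate of the sub-pixel centre
    under a Poisson noise model (a toy stand-in for the full method)."""
    npix = len(counts)
    best, best_ll = None, -float("inf")
    c = 0.0
    while c <= npix:
        mu = pixel_model(c, sigma, npix, flux)
        # Poisson log-likelihood up to a constant; 1e-12 guards log(0).
        ll = sum(k * math.log(m + 1e-12) - m for k, m in zip(counts, mu))
        if ll > best_ll:
            best, best_ll = c, ll
        c += step
    return best

counts = pixel_model(4.37, 1.2, 9, 5000.0)   # noise-free frame for the demo
print(round(ml_center(counts, 1.2, 5000.0), 2))  # ≈ 4.37
```

On noise-free data the likelihood peaks at the true centre, so the estimate recovers the sub-pixel position to within the grid step.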
Gaussian Approximations for an Overloaded X Model Via an Averaging Principle
Perry, Ohad
2010-01-01
In previous papers we developed a deterministic fluid approximation for an overloaded Markovian queueing system having two customer classes and two service pools, known in the call-center literature as the X model. The system uses the fixed-queue-ratio-with-thresholds (FQR-T) control, which we proposed in a recent paper as a way for one service system to help another in face of an unexpected overload. Under FQR-T, customers are served by their own service pool until a threshold is exceeded. Then, one-way sharing is activated with customers from one class allowed to be served in both pools. The control aims to keep the two queues at a pre-specified fixed ratio. We supported the fluid approximation by establishing a many-server heavy-traffic functional weak law of large numbers (FWLLN) involving an averaging principle. In this paper we develop refined Gaussian approximations for the same model based on a many-server heavy-traffic functional central limit theorem (FCLT). We conduct simulations to show that the G...
Inference in Graphical Gaussian Models with Edge and Vertex Symmetries with the gRc Package for R
DEFF Research Database (Denmark)
Højsgaard, Søren; Lauritzen, Steffen L
2007-01-01
In this paper we present the R package gRc for statistical inference in graphical Gaussian models in which symmetry restrictions have been imposed on the concentration or partial correlation matrix. The models are represented by coloured graphs, where parameters associated with edges or vertices of the same colour are restricted to being identical. We describe algorithms for maximum likelihood estimation and discuss model selection issues. The paper illustrates the practical use of the gRc package.
Learning conditional Gaussian networks
DEFF Research Database (Denmark)
Bøttcher, Susanne Gammelgaard
This paper considers conditional Gaussian networks. The parameters in the network are learned by using conjugate Bayesian analysis. As conjugate local priors, we apply the Dirichlet distribution for discrete variables and the Gaussian-inverse gamma distribution for continuous variables, given...... independence, parameter modularity and likelihood equivalence. Bayes factors to be used in model search are introduced. Finally the methods derived are illustrated by a simple example....
On Gaussian random supergravity
Bachlechner, Thomas C.
2014-01-01
We study the distribution of metastable vacua and the likelihood of slow roll inflation in high-dimensional random landscapes. We consider two examples of landscapes: a Gaussian random potential and an effective supergravity potential defined via a Gaussian random superpotential and a trivial Kähler potential. To examine these landscapes we introduce a random matrix model that describes the correlations between various derivatives and we propose an efficient algorithm that allows for a nume...
Unsteady turbulent buoyant plumes
Woodhouse, Mark J; Hogg, Andrew J
2015-01-01
We model the unsteady evolution of turbulent buoyant plumes following temporal changes to the source conditions. The integral model is derived from radial integration of the governing equations expressing the conservation of mass, axial momentum and buoyancy. The non-uniform radial profiles of the axial velocity and density deficit in the plume are explicitly described by shape factors in the integral equations; the commonly-assumed top-hat profiles lead to shape factors equal to unity. The resultant model is hyperbolic when the momentum shape factor, determined from the radial profile of the mean axial velocity, differs from unity. The solutions of the model when source conditions are maintained at constant values retain the form of the well-established steady plume solutions. We demonstrate that the inclusion of a momentum shape factor that differs from unity leads to a well-posed integral model. Therefore, our model does not exhibit the mathematical pathologies that appear in previously proposed unsteady i...
Electrical Evolution of a Dust Plume from a Low Energy Lunar Impact: A Model Analog to LCROSS
Farrell, W. M.; Stubbs, T. J.; Jackson, T. L.; Colaprete, A.; Heldmann, J. L.; Schultz, P. H.; Killen, R. M.; Delory, G. T.; Halekas, J. S.; Marshall, J. R.; Zimmerman, M. I.; Collier, M. R.; Vondrak, R. R.
2011-01-01
A Monte Carlo test particle model was developed that simulates the charge evolution of micron and sub-micron sized dust grains ejected upon low-energy impact of a moderate-size object onto a lunar polar crater floor. Our analog is the LCROSS impact into Cabeus crater. Our primary objective is to model grain discharging as the plume propagates upwards from the shadowed crater into sunlight.
Field, Robert D.; Luo, Ming; Fromm, Mike; Voulgarakis, Apostolos; Mangeon, Stéphane; Worden, John
2016-04-01
We simulated the high-altitude smoke plume from the early February 2009 Black Saturday bushfires in southeastern Australia using the NASA Goddard Institute for Space Studies ModelE2. To the best of our knowledge, this is the first single-plume analysis of biomass burning emissions injected directly into the upper troposphere/lower stratosphere (UTLS) using a full-complexity composition-climate model. We compared simulated carbon monoxide (CO) to a new Aura Tropospheric Emission Spectrometer/Microwave Limb Sounder joint CO retrieval, focusing on the plume's initial transport eastward, anticyclonic circulation to the north of New Zealand, westward transport in the lower stratospheric easterlies, and arrival over Africa at the end of February. Our goal was to determine the sensitivity of the simulated plume to prescribed injection height, emissions amount, and emissions timing from different sources for a full-complexity model when compared to Aura. The most realistic plumes were obtained using injection heights in the UTLS, including one drawn from ground-based radar data. A 6 h emissions pulse or emissions tied to independent estimates of hourly fire behavior produced a more realistic plume in the lower stratosphere compared to the same emissions amount being released evenly over 12 or 24 h. Simulated CO in the plume was highly sensitive to the differences between emissions amounts estimated from the Global Fire Emissions Database and from detailed, ground-based estimates of fire growth. The emissions amount determined not only the CO concentration of the plume but also the proportion of the plume that entered the stratosphere. We speculate that this is due to either or both nonlinear CO loss with a weakened OH sink or plume self-lofting driven by shortwave absorption of the coemitted aerosols.
A uniform analysis of HD209458b Spitzer/IRAC lightcurves with Gaussian process models
Evans, Thomas M; Gibson, Neale; Barstow, Joanna K; Amundsen, David S; Tremblin, Pascal; Mourier, Pierre
2015-01-01
We present an analysis of Spitzer/IRAC primary transit and secondary eclipse lightcurves measured for HD209458b, using Gaussian process models to marginalise over the intrapixel sensitivity variations in the 3.6 micron and 4.5 micron channels and the ramp effect in the 5.8 micron and 8.0 micron channels. The main advantage of this approach is that we can account for a broad range of degeneracies between the planet signal and systematics without actually having to specify a deterministic functional form for the latter. Our results do not confirm a previous claim of water absorption in transmission. Instead, our results are more consistent with a featureless transmission spectrum, possibly due to a cloud deck obscuring molecular absorption bands. For the emission data, our values are not consistent with the thermal inversion in the dayside atmosphere that was originally inferred from these data. Instead, we agree with another re-analysis of these same data, which concluded a non-inverted atmosphere provides a b...
Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.
Lin, Lanny; Goodrich, Michael A
2014-12-01
During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions, and search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones, yielding efficient paths with payoffs near the optimal.
Gaussian Mixture Model and Deep Neural Network based Vehicle Detection and Classification
Directory of Open Access Journals (Sweden)
S Sri Harsha
2016-09-01
The exponential rise in demand for vision-based traffic surveillance systems has motivated academia and industry to develop optimal vehicle detection and classification schemes. In this paper, an adaptive-learning-rate Gaussian mixture model (GMM) algorithm has been developed for background subtraction of multilane traffic data. Here, vehicle rear information and road dash-markings have been used for vehicle detection. After background subtraction, connected component analysis has been applied to retrieve the vehicle region. A multilayered AlexNet deep neural network (DNN) has been applied to extract higher-layer features. Furthermore, scale-invariant feature transform (SIFT) based vehicle feature extraction has been performed. The extracted 4096-dimensional features have been processed for dimensionality reduction using principal component analysis (PCA) and linear discriminant analysis (LDA). The features have been mapped for SVM-based classification. The classification results show that AlexNet-FC6 features with LDA give an accuracy of 97.80%, followed by AlexNet-FC6 with PCA (96.75%). AlexNet-FC7 features with LDA and PCA have exhibited classification accuracies of 91.40% and 96.30%, respectively. By contrast, SIFT features with the LDA algorithm have exhibited 96.46% classification accuracy. The results reveal that the enhanced GMM with the AlexNet DNN at FC6 and FC7 can be significant for optimal vehicle detection and classification.
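The background-subtraction step can be illustrated with a per-pixel running Gaussian model (a single-component simplification of the adaptive-learning-rate GMM described above; the parameter values are illustrative):

```python
class AdaptiveBackground:
    """Per-pixel running Gaussian background model with a learning rate,
    a one-component simplification of an adaptive GMM scheme."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(p) for p in first_frame]
        self.var = [100.0] * len(first_frame)   # assumed initial variance
        self.alpha, self.k = alpha, k

    def update(self, frame):
        """Return a foreground mask, then fold the frame into the model."""
        fg = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            fg.append(abs(d) > self.k * self.var[i] ** 0.5)
            self.mean[i] += self.alpha * d              # running mean
            self.var[i] += self.alpha * (d * d - self.var[i])  # running var
        return fg

# Constant frames tighten the model; a sudden change is flagged as foreground.
bg = AdaptiveBackground([50, 50, 50])
for _ in range(100):
    bg.update([50, 50, 50])
print(bg.update([50, 120, 50]))  # [False, True, False]
```

A full GMM keeps several such mean/variance pairs per pixel with mixing weights, which lets it absorb repetitive background motion that a single Gaussian cannot.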
The tight focusing properties of Laguerre-Gaussian-correlated Schell-model beams
Xu, Hua-Feng; Zhang, Zhou; Qu, Jun; Huang, Wei
2016-08-01
Based on the Richards-Wolf vectorial diffraction theory, the tight focusing properties, including the intensity distribution, the degree of polarization and the degree of coherence, of Laguerre-Gaussian-correlated Schell-model (LGSM) beams through a high-numerical-aperture (NA) focusing system are investigated in detail. It is found that the LGSM beam exhibits some extraordinary focusing properties, quite different from those of the GSM beam, and that the tight focusing properties are closely related to the initial spatial coherence width and the mode order n. The LGSM beam can form an elliptical focal spot, a circular focal spot or a doughnut-shaped dark hollow beam at the focal plane by choosing a suitable value of the initial spatial coherence width, and the central dark size of the dark hollow beam increases with the mode order n. In addition, the influences of the initial spatial coherence width and the mode order n on the degree of polarization and the degree of coherence are also analysed in detail. Our results may find applications in optical trapping.
Remote sensing image fusion based on Gaussian mixture model and multiresolution analysis
Xiao, Moyan; He, Zhibiao
2013-10-01
A novel image fusion algorithm based on region segmentation and multiresolution analysis (MRA) is proposed to make full use of the advantages of different multiscale transforms. The nonsubsampled contourlet transform (NSCT) processes edges better than the wavelet transform does, while the wavelet transform handles smooth areas and singularities better than the NSCT does. As an image often includes more than one feature, the proposed method is conducted on the basis of Gaussian mixture model (GMM) based region segmentation. Firstly, the multispectral (MS) image is transformed into intensity, hue and saturation components. Secondly, the intensity component is segmented into dense-contour and smooth regions according to the GMM and NSCT. A new intensity component is then obtained by fusing the intensity component and the high-resolution image, using à trous wavelet transform (ATWT) fusion in smooth areas and NSCT fusion in dense-contour areas. Finally, the new intensity component, together with the hue and saturation components, is transformed back into RGB space to obtain the fused image. Multisource remote sensing images are tested to assess the proposed algorithm. Visual evaluation and statistical analysis are employed to evaluate the quality of the fused images produced by the different methods. The proposed algorithm demonstrates excellent spectral information and high resolution. Experimental results show that the new fusion algorithm, incorporating region segmentation based on the improved GMM and MRA, outperforms algorithms based on a single multiscale transform.
Multi-Atlas Segmentation for Abdominal Organs with Gaussian Mixture Models.
Burke, Ryan P; Xu, Zhoubing; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A
2015-03-17
Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been used extensively in medical segmentation, most notably in the brain for cerebrospinal fluid/gray matter/white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMM have recently been incorporated into multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
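The a posteriori fusion described above, an organ-wise intensity likelihood multiplied by a spatial prior, can be sketched per voxel (hypothetical intensity models and prior values; the real framework derives its priors from registered multi-atlas labels):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_label(intensity, organ_models, spatial_prior):
    """Label a voxel by maximizing intensity likelihood x spatial prior.
    organ_models: {label: (mean, std)} in HU; spatial_prior: {label: prob}."""
    scores = {lab: gaussian_pdf(intensity, mu, sig) * spatial_prior.get(lab, 0.0)
              for lab, (mu, sig) in organ_models.items()}
    total = sum(scores.values())
    post = {lab: s / total for lab, s in scores.items()}
    return max(post, key=post.get), post

# Hypothetical HU models and priors for a single voxel.
label, post = posterior_label(58.0,
                              {"liver": (60.0, 10.0), "spleen": (50.0, 10.0)},
                              {"liver": 0.7, "spleen": 0.3})
print(label)  # liver
```

In practice each organ's intensity model would itself be a mixture of Gaussians, and the prior would vary voxel by voxel with the registered atlas labels.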
Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models
Ait-El-Fquih, Boujemaa
2015-08-13
This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.
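The appeal of restricting a posteriori covariances to diagonal form can be seen in a minimal Kalman-like step where every state component updates independently at O(n) cost (a toy scalar-dynamics, identity-observation sketch, not the VB-based algorithms of the paper):

```python
def diag_kalman_step(m, p, y, a, q, r):
    """One predict/update step of a Kalman-like filter whose covariances are
    kept diagonal, so each state component updates independently."""
    m_pred = [a * mi for mi in m]              # x_k = a * x_{k-1} + noise(q)
    p_pred = [a * a * pi + q for pi in p]
    m_new, p_new = [], []
    for mp, pp, yi in zip(m_pred, p_pred, y):  # y_k = x_k + noise(r)
        k = pp / (pp + r)                      # scalar Kalman gain
        m_new.append(mp + k * (yi - mp))
        p_new.append((1.0 - k) * pp)
    return m_new, p_new

# Two independent components, one observation each.
m, p = diag_kalman_step([0.0, 0.0], [1.0, 1.0], [1.0, 2.0], a=1.0, q=0.0, r=1.0)
print(m, p)  # [0.5, 1.0] [0.5, 0.5]
```

The per-component cost is constant, which is what makes such diagonal approximations attractive when the full covariance matrix is too large to manipulate.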
Spectrum recovery method based on sparse representation for segmented multi-Gaussian model
Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan
2016-09-01
Hyperspectral images (HSIs) offer excellent feature discriminability, supplying diagnostic characteristics with high spectral resolution. However, various degradations may negatively influence the spectral information, including water absorption and band-continuous noise. On the other hand, the huge data volume and strong redundancy among spectra produce intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. The reconstruction of spectral diagnostic characteristics has irreplaceable significance for the subsequent application of HSIs. This paper introduces a spectrum restoration method for HSIs making use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to represent the unsymmetrical spectral absorption and reflection characteristics; its rationality and sparsity properties are discussed. With the application of compressed sensing (CS) theory, we implement a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the uninjured or key bands. Finally, we apply a low-rank matrix recovery (LRMR) algorithm as post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under artificial water absorption conditions and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate the effectiveness of recovering spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometric features are well preserved.
A Grasp-pose Generation Method Based on Gaussian Mixture Models
Directory of Open Access Journals (Sweden)
Wenjia Wu
2015-11-01
A Gaussian Mixture Model (GMM) based grasp-pose generation method is proposed in this paper. Through offline training, the GMM is set up and used to depict the distribution of the robot’s reachable orientations. By dividing the robot’s workspace into small 3D voxels and training a GMM for each voxel, a look-up table covering the whole workspace is built, with the x, y and z positions as the index and the GMM as the entry. Through the definition of Task Space Regions (TSR), an object’s feasible grasp poses are expressed as a continuous region. With the GMM, grasp poses can be preferentially sampled from regions with high reachability probabilities in the online grasp-planning stage. The GMM can also be used as a preliminary judgement of a grasp pose’s reachability. Experiments on both a simulated and a real robot show the superiority of our method over the existing method.
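The voxel-indexed look-up table can be sketched as a dictionary keyed by quantized (x, y, z) positions (here each entry is a plain reachability probability standing in for a trained GMM; all values are hypothetical):

```python
def voxel_index(x, y, z, voxel_size=0.1):
    """Quantize a workspace position (metres) to its voxel key."""
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

# Hypothetical look-up table: in the paper each entry is a trained GMM over
# reachable orientations; a plain probability stands in for it here.
lookup = {voxel_index(0.42, 0.13, 0.36): 0.92,
          voxel_index(0.86, 0.55, 0.94): 0.08}

def reachability(x, y, z):
    """Preliminary judgement of a grasp pose's reachability."""
    return lookup.get(voxel_index(x, y, z), 0.0)

print(reachability(0.46, 0.11, 0.33))  # 0.92: same voxel as the first entry
```

At planning time, candidate grasp poses inside a TSR would be sampled preferentially from voxels whose entries report high reachability.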
Krabbenhoft, David P.; Anderson, Mary P.; Bowser, Carl J.
1990-01-01
A three-dimensional groundwater flow and solute transport model was calibrated to a plume of water described by measurements of δ18O and used to calculate groundwater inflow and outflow rates at a lake in northern Wisconsin. The flow model was calibrated to observed hydraulic gradients and estimated recharge rates. Calibration of the solute transport submodel to the configuration of a stable isotope (18O) plume in the contiguous aquifer on the downgradient side of the lake provides additional data to constrain the model. A good match between observed and simulated temporal variations in plume configuration indicates that the model closely simulated the dynamics of the real system. The model provides information on natural variations of rates of groundwater inflow, lake water outflow, and recharge to the water table. Inflow and outflow estimates compare favorably with estimates derived by the isotope mass balance method (Krabbenhoft et al., this issue). Model simulations agree with field observations that show groundwater inflow rates are more sensitive to seasonal variations in recharge than outflow.
Directory of Open Access Journals (Sweden)
Oleksandr Zaporozhets
2016-06-01
Purpose: Airport air pollution is a growing concern because of the expansion of air traffic over the years (at an annual rate of 5 %), the rising intensity of airport operations, and growing cities expanding ever closer to airports (as for the Ukrainian airports Zhulyany, Boryspol, Lviv, Odesa and Zaporizhzhia), with a correspondingly growing public concern with air quality around airports. Analysis of emission inventory results at major European and Ukrainian airports highlighted that aircraft are the dominant source of air pollution in most of the cases under consideration. For accurate assessment of the aircraft emission contribution to total airport pollution and for the development of successful mitigation strategies, it is necessary to combine modeling and measurement methods. Methods: Measurement of NOx concentration in the jet/plume from aircraft engines was implemented by the chemiluminescence method under real operating conditions (taxi, landing, accelerating on the runway and take-off) at International Boryspol Airport (IBA). Modeling of NOx concentration was done with the complex model PolEmiCa, which takes into account the transport and dilution of air contaminants by the exhaust gas jet and the wing trailing vortices. Results: The results of the measured NOx concentration in the plume from an aircraft engine under take-off conditions at IBA were used for improvement and validation of the complex model PolEmiCa. The agreement between measured and modeled instantaneous NOx concentrations was substantially improved by taking into account the impact of the wing trailing vortices on the parameters of the jet (buoyancy height, horizontal and vertical deviation) and on the concentration distribution in the plume. Discussion: The combined approach of modeling and measurement methods provides a more accurate representation of the aircraft emission contribution to total air pollution in the airport area. The modeling side provides scientific grounding for the organization of instrumental monitoring of aircraft engine emissions, particularly, scheme
Min, B.; Nwachukwu, A.; Srinivasan, S.; Wheeler, M. F.
2015-12-01
This study formulates a framework of a model selection that refines geological models for monitoring CO2 plume migration. Special emphasis is placed on CO2 injection, and the particular techniques that are used for this study including model selection, particle tracking proxies, and partial coupling of flow and geomechanics. The proposed process starts with generating a large initial ensemble of reservoir models that reflect a prior uncertainty in reservoir description, including all plausible geologic scenarios. These models are presumed to be conditioned to available static data. In the absence of production or injection data, all prior reservoir models are regarded as equiprobable. Thus, the model selection algorithm is applied to select a few representative reservoir models that are more consistent with observed dynamic responses. A quick assessment of the models must then be performed to evaluate their dynamic characteristics and flow connectivity. This approach develops a particle tracking proxy and a finite element method solver for solving the flow equation and the stress problem, respectively. The shape of CO2 plume is estimated using a particle-tracking proxy that serves as a fast approximation of finite-difference simulation models. Sequentially, a finite element method solver is coupled with the proxy for analyzing geomechanical effects resulting from CO2 injection. A method is then implemented to group the models into clusters based on similarities in the estimated responses. The posterior model set is chosen as the cluster that produces the minimum deviation from the observed field data. The efficacy of non-dominated sorting based on Pareto-optimality is also tested in the current model selection framework. The proposed scheme is demonstrated on a carbon sequestration project in Algeria. Coupling surface deformation data with well injection data enhances the efficiency of tracking the CO2 plume. Therefore, this algorithm provides a probabilistic
Cooper, B. P., Jr.
1979-01-01
A model for the boundary layer at the exit plane of a rocket nozzle was developed which, unlike most previous models, includes the subsonic sublayer. The equations for the flow near the nozzle exit plane are presented and the method by which the subsonic sublayer transitions to supersonic flow in the plume is described. The resulting model describes the entire boundary layer and can be used to provide a startline for method-of-characteristics calculations of plume flowfields. The model was incorporated into a method-of-characteristics computer program and comparisons of computed results to experimental data show good agreement. The data used in the comparisons were obtained in tests in which mass fluxes from a 22.2-N (5 lbf) thrust engine were measured at angles off the nozzle centerline of up to 150 deg. Additional comparisons were made with data obtained during tests of a 0.89-N (0.2 lbf) monopropellant thruster and from the OH-64 space shuttle heating tests. The agreement with the data indicates that the model can be used for calculating plume backflow properties.
Rosário, N. E.; Longo, K. M.; Freitas, S. R.; Yamasoe, M. A.; Fonseca, R. M.
2012-07-01
Intra-seasonal variability of smoke aerosol optical depth (AOD) and downwelling solar irradiance at the surface during the 2002 biomass burning season in South America was modeled using the Coupled Chemistry-Aerosol-Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CCATT-BRAMS). Measurements of AOD from the AErosol RObotic NETwork (AERONET) and solar irradiance at the surface from the Solar Radiation Network (SolRad-NET) were used to evaluate model results. In general, the major features associated with AOD evolution over the southern part of the Amazon Basin and cerrado ecosystem are captured by the model. The main discrepancies were found for high aerosol loading events. In the northeastern portion of the Amazon Basin the model systematically underestimated AOD. This is likely due to the cloudy nature of the region, preventing accurate detection of the fire spots used in the emission model. Moreover, measured AOD were very often close to background conditions and emissions other than smoke were not considered in the simulation. Therefore, under the background scenario, one would expect the model to underestimate AOD. The issue of high aerosol loading events in the southern part of the Amazon and cerrado is also discussed in the context of emission shortcomings. The Cuiabá cerrado site was the only one where the highest quality AERONET data were unavailable. Thus, lower quality data were used. Root-mean-square-error (RMSE) between the model and observations decreased from 0.48 to 0.17 when extreme AOD events (AOD550 nm ≥ 1.0) and Cuiabá were excluded from analysis. Downward surface solar irradiance comparisons also followed similar trends when extremes AOD were excluded. This highlights the need to improve the modelling of the regional smoke plume in order to enhance the accuracy of the radiative energy budget. Aerosol optical model based on the mean intensive properties of smoke from the southern part of the
Modelling exhaust plume mixing in the near field of an aircraft
Directory of Open Access Journals (Sweden)
F. Garnier
A simplified approach has been applied to analyse the mixing and entrainment of the engine exhaust through its interaction with the vortex wake of an aircraft. Our investigation is focused on the near field, extending from the exit nozzle to about 30 s after the wake is generated, in the vortex phase. This study was performed using an integral model and a numerical simulation for two large civil aircraft: a two-engine Airbus 330 and a four-engine Boeing 747. The influence of the wing-tip vortices on the dilution ratio (defined in terms of a tracer concentration) is shown. The mixing process is also affected by buoyancy, but only after the jet regime, once trapping in the vortex core has occurred. In the early wake, the engine jet location (i.e. inboard or outboard engine jet) has an important influence on the mixing rate. The plume streamlines inside the vortices are subject to distortion and stretching, and the role of the descent of the vortices on the maximum tracer concentration is discussed. Qualitative comparison with a contrail photograph shows similar features. Finally, tracer concentrations along the inboard engine centreline of the B-747 are compared with other theoretical analyses and measured data.
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, achieves high measurement accuracy with low computational complexity, owing to a maximum-likelihood procedure used to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network as a tool to detect faint moving objects in frames.
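The central idea — treating the discrete pixel values as samples of a continuous Gaussian image and recovering subpixel coordinates from them — can be illustrated with a short sketch. This is not the COLITEC maximum-likelihood fit; it is a simpler moment (intensity-weighted centroid) estimate on a simulated noise-free frame, with grid size and σ chosen arbitrarily:

```python
import math

def gaussian_pixel_image(x0, y0, sigma, size):
    """Simulate the 'pixel potentials' of a point source: a circular Gaussian
    centred at the subpixel position (x0, y0), sampled on a size x size grid."""
    return [[math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def centroid(img):
    """Recover continuous coordinates from discrete pixels via an
    intensity-weighted centroid."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total

# True subpixel centre (4.3, 5.7) is recovered from a 10x10 pixel grid.
x_est, y_est = centroid(gaussian_pixel_image(4.3, 5.7, 1.5, 10))
```

Even this crude estimator recovers the centre to well under a tenth of a pixel on a clean frame; the paper's maximum-likelihood fit is what makes the estimate robust on real, noisy CCD data.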
Green, Rebecca E.; Breed, Greg A.; Dagg, Michael J.; Lohrenz, Steven E.
2008-07-01
Increases in nitrate loading to the Mississippi River watershed during the last 50 years are considered responsible for the increase in hypoxic zone size in Louisiana-Texas shelf bottom waters. There is currently a national mandate to decrease the size of the hypoxic zone to 5000 km² by 2015, mostly by a 30% reduction in annual nitrogen discharge into the Gulf of Mexico. We developed an ecosystem model for the Mississippi River plume to investigate the response of organic matter production and sedimentation to variable nitrate loading. The nitrogen-based model consisted of nine compartments (nitrate, ammonium, labile dissolved organic nitrogen, bacteria, small phytoplankton, diatoms, micro- and mesozooplankton, and detritus), and was developed for the spring season, when sedimentation of organic matter from plume surface waters is considered important in the development of shelf hypoxia. The model was forced by physical parameters specified along the river-ocean salinity gradient, including residence time, light attenuation by dissolved and particulate matter, mixed layer depth, and dilution. The model was developed using measurements of biological biomasses and nutrient concentrations across the salinity gradient, and model validation was performed with an independent dataset of primary production measurements for different riverine NO3 loads. Based on simulations over the range of observed springtime NO3 loads, small phytoplankton contributed on average 80% to primary production for intermediate to high salinities (>15), and the main contributors to modeled sedimentation at these salinities were diatom sinking, microzooplankton egestion, and small phytoplankton mortality. We investigated the impact of limiting factors on the relationship between NO3 loading and ecosystem rates. Model results showed that primary production was primarily limited by physical dilution of NO3, followed by abiotic light attenuation, light attenuation due to mixing, and diatom
Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Sun, Alexander Y.; Xia, Youlong
2016-11-01
Prediction of drought plays an important role in drought preparedness and mitigation, especially because of the large impacts of drought and the increasing demand for water resources. An important aspect of improving drought prediction skill is the identification of drought predictability sources. In general, a drought originates from a precipitation deficit, and thus an antecedent meteorological drought may provide predictive information for other types of drought. In this study, a prediction method for hydrological drought (represented by the Standardized Runoff Index (SRI)) is proposed based on the meta-Gaussian model, taking into account its persistence and the prior meteorological drought condition (represented by the Standardized Precipitation Index (SPI)). Given the inherent nature of standardized drought indices, the meta-Gaussian model arises as a suitable model for constructing the joint distribution of multiple drought indices. Accordingly, the conditional distribution of hydrological drought can be derived analytically, which enables probabilistic prediction of hydrological drought in the target period along with uncertainty quantification. Based on monthly precipitation and surface runoff for climate divisions of Texas, U.S., 1-month and 2-month lead predictions of hydrological drought are illustrated and compared to predictions from Ensemble Streamflow Prediction (ESP). Results, based on 10 climate divisions in Texas, show that the proposed meta-Gaussian model provides useful drought prediction information, with performance depending on region and season.
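A minimal sketch of the conditional step: if SPI and SRI are each transformed to standard normals and linked by a (meta-)Gaussian with cross-correlation ρ, the conditional distribution of SRI given SPI is available in closed form. The correlation value and drought threshold below are illustrative assumptions; the paper's full model also conditions on SRI persistence:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def predict_sri(spi, rho, threshold=-0.8):
    """Conditional distribution of SRI given SPI under a bivariate Gaussian
    with correlation rho (both indices ~ N(0,1)), plus the probability that
    SRI falls below a drought threshold."""
    cond_mean = rho * spi
    cond_std = math.sqrt(1.0 - rho * rho)
    p_drought = norm_cdf((threshold - cond_mean) / cond_std)
    return cond_mean, cond_std, p_drought

mean, std, p = predict_sri(spi=-1.5, rho=0.8)
```

With ρ = 0.8, a strong meteorological drought (SPI = −1.5) yields a conditional SRI mean of −1.2 with standard deviation 0.6, and roughly a 75% chance of SRI falling below −0.8.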
Chen, Tianju; Zhang, Jinzhi; Wu, Jinhu
2016-07-01
The kinetics and energy production of pyrolysis of a lignocellulosic biomass were investigated using a three-parallel Gaussian distribution method in this work. The pyrolysis experiment on pine sawdust was performed using a thermogravimetric-mass spectrometry (TG-MS) analyzer. A three-parallel Gaussian distributed activation energy model (DAEM) reaction model was used to describe the thermal decomposition behaviors of the three components: hemicellulose, cellulose and lignin. The first, second and third pseudo-components represent the fractions of hemicellulose, cellulose and lignin, respectively. The model was found to be capable of predicting the pyrolysis behavior of the pine sawdust. The activation energy distribution peaks for the three pseudo-components were centered at 186.8, 197.5 and 203.9 kJ mol⁻¹, respectively. The evolution profiles of H2, CH4, CO, and CO2 were well predicted using the three-parallel Gaussian distribution model. In addition, the chemical composition of the bio-oil was obtained with a pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) instrument. Copyright © 2016 Elsevier Ltd. All rights reserved.
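A single Gaussian-DAEM pseudo-component can be evaluated numerically in a few lines. The sketch below computes the unreacted fraction at temperature T assuming first-order kinetics and the common RT²/E closed-form approximation of the inner temperature integral; the pre-exponential factor, heating rate and distribution width are illustrative assumptions (only the 197.5 kJ mol⁻¹ mean activation energy, the cellulose peak above, is taken from the abstract):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def daem_unreacted(T, A, E0, sigma, beta, n=200):
    """Unreacted fraction 1 - alpha at temperature T (K) for one pseudo-component
    whose activation energy E is Gaussian-distributed with mean E0 and standard
    deviation sigma (J/mol). A is the pre-exponential factor (1/s) and beta the
    constant heating rate (K/s). The inner integral int_0^T exp(-E/(R*T')) dT'
    is approximated by (R*T^2/E) * exp(-E/(R*T))."""
    lo, hi = E0 - 4.0 * sigma, E0 + 4.0 * sigma
    dE = (hi - lo) / n
    total = 0.0
    for i in range(n):
        E = lo + (i + 0.5) * dE  # midpoint rule over the Gaussian f(E)
        f = math.exp(-(E - E0) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
        g = A * R * T * T / (beta * E) * math.exp(-E / (R * T))
        total += f * math.exp(-g) * dE
    return total

# Illustrative parameters: A = 1e13 1/s, sigma = 10 kJ/mol, beta = 10 K/min.
low_T = daem_unreacted(600.0, 1e13, 197500.0, 10000.0, 10.0 / 60.0)
high_T = daem_unreacted(900.0, 1e13, 197500.0, 10000.0, 10.0 / 60.0)
```

At 600 K this component is still largely intact, while by 900 K it has essentially fully decomposed, reproducing the qualitative shape of a TG conversion curve.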
Cross-correlations and joint Gaussianity in multivariate level crossing models.
Di Bernardino, Elena; León, José; Tchumatchenko, Tatjana
2014-04-17
A variety of phenomena in physical and biological sciences can be mathematically understood by considering the statistical properties of level crossings of random Gaussian processes. Notably, a growing number of these phenomena demand a consideration of correlated level crossings emerging from multiple correlated processes. While many theoretical results have been obtained in the last decades for individual Gaussian level-crossing processes, few results are available for multivariate, jointly correlated threshold crossings. Here, we address bivariate upward crossing processes and derive the corresponding bivariate Central Limit Theorem as well as provide closed-form expressions for their joint level-crossing correlations.
Canales-Rodríguez, Erick J; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Mendizabal, Yosu Yurramendi; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond
2014-01-01
Due to a higher capability in resolving white matter fiber crossings, Spherical Deconvolution (SD) methods have become very popular in brain fiber-tracking applications. However, while some of these estimation algorithms assume a central Gaussian distribution for the MRI noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique intended to deal with realistic MRI noise. The algorithm relies on a maximum a posteriori formulation based on Rician and noncentral Chi likelihood models and includes a total variation (TV) spatial regularization term. By means of a synthetic phantom contaminated with noise mimicking patterns generated by data processing in mu...
Cuchiara, G. C.; Rappenglück, B.; Rubio, M. A.; Lissi, E.; Gramsch, E.; Garreaud, R. D.
2017-10-01
On January 4, 2014, during the summer period in South America, an intense forest and dry-pasture wildfire occurred near the city of Santiago de Chile. On that day the biomass-burning plume was transported by low-intensity winds towards the metropolitan area of Santiago and affected the concentration of pollutants in this region. In this study, the Weather Research and Forecasting model coupled with Chemistry (WRF/Chem) is used to investigate the biomass-burning plume associated with these wildfires near Santiago, which impacted the ground-level ozone concentration and exacerbated Santiago's air quality. Meteorological variables simulated by WRF/Chem are compared against surface and radiosonde observations, and the results show that the model reproduces fairly well the observed wind speed, wind direction, air temperature and relative humidity for the case studied. Based on an analysis of the transport of an inert tracer released over the locations, and at the times, at which the wildfires were captured by the satellite-borne Moderate Resolution Imaging Spectroradiometer (MODIS), the model reproduced reasonably well the transport of the biomass-burning plume towards the city of Santiago de Chile, with a time delay of about two hours as observed in ceilometer data. A six-day air quality simulation was performed: the first three days were used to validate the anthropogenic and biogenic emissions, and the last three days (during and after the wildfire event) to analyze the performance of the WRF/Chem plume-rise model with FINNv1 fire emission estimates. The model performed satisfactorily on the first days of the simulation when contrasted against data from the well-established air quality network over the city of Santiago de Chile. These days represent the urban air quality base case for Santiago de Chile, unimpacted by fire emissions. However, for the last three simulation days, which were impacted by the fire emissions, the statistical indices showed a decrease in
Institute of Scientific and Technical Information of China (English)
袁步平; 邱榕; 杨超琼
2011-01-01
Based on the results of three small-scale experiments conducted in the Tibet region of China, this paper simulates full-scale axisymmetric fire plumes under different plateau conditions and dimensionless fire powers by means of computational fluid dynamics software, numerically solving a form of the Navier-Stokes equations appropriate for low-speed, thermally driven flow, with emphasis on smoke movement and heat transfer from the fires. The experimental and simulated results for the Tibet area are compared with the predictions of the classical Heskestad and McCaffrey plume models, and the applicability of these models under plateau conditions is analysed. Two conclusions follow. First, the centerline plume temperature tends to increase with elevation. Second, the Heskestad plume model, which takes into account the difference between the fire plume density and the ambient air density, tends to predict higher temperatures than the simulations, with the discrepancy increasing as the dimensionless fire power increases or as the ambient pressure and air density decrease; the McCaffrey plume model, by contrast, predicts lower temperatures than the simulations. In addition, at high altitudes the rate of decrease of the centerline plume temperature with height predicted by the McCaffrey plume model is slower than in the simulation results, although both models give good predictions for small fire powers at which very little ambient air is entrained. Finally, we analyse the decreasing trend of the centerline plume temperature at an altitude of 4000 m and amend the McCaffrey plume model accordingly, based on the results of our own experiments, with parameters η = −3/4, k = 0.69 in the
Gacal, G. F. B.; Lagrosas, N.
2016-12-01
Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent the probability density function of a normal distribution. It is described by its mean m and standard deviation s; a smaller standard deviation implies less spread about the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0) and the standard deviations by sx and sy. In modelling moon signals obtained from sky-cameras, the mean position (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean-square weighted deviations based on the sums of total pixel values over all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image was produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity when operating at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. The last conversion is carried out so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtraction of the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be
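The procedure described above — peak location for (x0, y0), weighted second moments for sx and sy, then subtraction of the fitted model — can be reproduced as a short classroom exercise. This sketch runs on a synthetic grayscale frame rather than a real sky-camera image, so the numbers (frame size, blob position, σ) are arbitrary:

```python
import math

def model_moon(img):
    """Fit a 2D Gaussian to the brightest blob in a grayscale image:
    the mean (x0, y0) is the location of the maximum pixel, the standard
    deviations come from intensity-weighted second moments, and the fitted
    model is subtracted to leave a 'moonless' residual image."""
    ny, nx = len(img), len(img[0])
    y0, x0 = max(((y, x) for y in range(ny) for x in range(nx)),
                 key=lambda p: img[p[0]][p[1]])
    amp = img[y0][x0]
    total = sum(sum(row) for row in img)
    sx = math.sqrt(sum(img[y][x] * (x - x0) ** 2
                       for y in range(ny) for x in range(nx)) / total)
    sy = math.sqrt(sum(img[y][x] * (y - y0) ** 2
                       for y in range(ny) for x in range(nx)) / total)
    model = [[amp * math.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                               + (y - y0) ** 2 / (2 * sy ** 2)))
              for x in range(nx)] for y in range(ny)]
    residual = [[img[y][x] - model[y][x] for x in range(nx)] for y in range(ny)]
    return (x0, y0, sx, sy), residual

# Synthetic "moon": a Gaussian blob at (7, 5) with sigma = 2 on a 15x12 frame.
frame = [[math.exp(-((x - 7) ** 2 + (y - 5) ** 2) / 8.0)
          for x in range(15)] for y in range(12)]
(x0, y0, sx, sy), residual = model_moon(frame)
```

On this clean synthetic frame the recovered parameters match the truth closely and the residual is near zero everywhere, which is exactly the "moonless image" effect described for Fig. 1c.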
Institute of Scientific and Technical Information of China (English)
蔡阳健; 林强
2002-01-01
The generalized Collins formula for partially coherent beams through axially non-symmetrical optical systems in the spatial-frequency domain is derived by means of the tensor method. Based on this formula, the tensor ABCD law in the spatial-frequency domain for partially coherent twisted anisotropic Gaussian-Schell model (GSM) beams is derived, which governs the transformation of the twisted anisotropic GSM beams in the spatial-frequency domain. An example of an application is provided.
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Energy Technology Data Exchange (ETDEWEB)
Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy population in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
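The core of the error correction can be shown in its simplest, single-component form: if each galaxy colour carries a known measurement error, the intrinsic ridgeline scatter is the observed variance with the mean squared measurement error subtracted in quadrature. This is a deliberately stripped-down illustration, not the paper's full mixture model (which also separates the blue cloud from the red sequence):

```python
import math

def intrinsic_scatter(values, errors):
    """Deconvolve measurement error from observed scatter:
    sigma_intrinsic^2 = Var(values) - mean(errors^2), floored at zero."""
    n = len(values)
    mean = sum(values) / n
    var_obs = sum((v - mean) ** 2 for v in values) / n
    var_err = sum(e * e for e in errors) / n
    return math.sqrt(max(var_obs - var_err, 0.0))

# Observed scatter sqrt(2); per-object error 1 -> intrinsic scatter 1.
s = intrinsic_scatter([-2.0, -1.0, 0.0, 1.0, 2.0], [1.0] * 5)
```

Without the correction, the observed scatter would overstate the intrinsic ridgeline width whenever photometric errors are comparable to it, which is precisely the bias the error-corrected mixture model removes.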
Adaptive Gaussian mixture models for pre-screening in GPR data
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but may permit a false alarm rate higher than the total system requirements. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to pre-screening in GPR data due to its computational efficiency, non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of this adaptive GMM-based anomaly detection approach from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
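A single-component version of the online update conveys the flavour of the approach: a running Gaussian background model, a k-sigma anomaly test, and mean/variance updated by a learning rate only on samples classified as background. All parameter values here are illustrative assumptions, not those of the cited system:

```python
class OnlineGaussianScreener:
    """One-component sketch of an adaptive-GMM pre-screener: flag a sample as
    anomalous if it lies more than k standard deviations from the running
    mean; otherwise fold it into the background model with learning rate alpha."""

    def __init__(self, alpha=0.05, k=3.0):
        self.alpha = alpha
        self.k = k
        self.mean = 0.0
        self.var = 1.0

    def update(self, x):
        anomalous = abs(x - self.mean) > self.k * self.var ** 0.5
        if not anomalous:
            d = x - self.mean
            self.mean += self.alpha * d
            self.var = (1.0 - self.alpha) * self.var + self.alpha * d * d
        return anomalous

screener = OnlineGaussianScreener()
background_flags = [screener.update(x) for x in [0.1, -0.1] * 20]  # quiet clutter
spike_flag = screener.update(5.0)  # strong anomaly after adaptation
```

Because anomalous samples are excluded from the update, a strong target signature does not get absorbed into the background model, while slowly varying clutter is adapted away — the property that makes this style of screener attractive for continuous vehicle-mounted operation.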
Combining observations and model simulations to reduce the hazard of Etna volcanic ash plumes
Scollo, Simona; Boselli, Antonella; Coltelli, Mauro; Leto, Giuseppe; Pisani, Gianluca; Prestifilippo, Michele; Spinelli, Nicola; Wang, Xuan; Zanmar Sanchez, Ricardo
2014-05-01
Etna is one of the most active volcanoes in the world, with recent activity characterized by powerful lava fountains that produce eruption columns several kilometres high and disperse volcanic ash in the atmosphere. It is well known that, to improve the volcanic ash dispersal forecast of an ongoing explosive eruption, the input parameters used by volcanic ash dispersal models should be measured during the eruption. In this work, in order to better quantify the volcanic ash dispersal, we use data from the video-surveillance system of Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo, and from a lidar system, together with a volcanic ash dispersal model. In detail, the visible camera installed in Catania, 27 km from the vent, is able to evaluate the evolution of the column height with time. The lidar, installed at the "M.G. Fracastoro" astrophysical observatory (14.97° E, 37.69° N) of the Istituto Nazionale di Astrofisica in Catania, at a distance of 7 km from the Etna summit craters, uses a frequency-doubled Nd:YAG laser source operating at a 532-nm wavelength with a repetition rate of 1 kHz. Backscattering and depolarization values measured by the lidar system can give, with a certain degree of uncertainty, an estimate of the volcanic ash concentration in the atmosphere. The 12 August 2011 activity is considered a perfect test case because the volcanic plume was retrieved by both the camera and the lidar. We evaluated the mass eruption rate from the column height and used best-fit procedures comparing simulated volcanic ash concentrations with those extracted from the lidar data. During this event, powerful lava fountains were clearly visible from about 08:30 GMT and a sustained eruption column was produced from about 08:55 GMT. Ash emission ceased completely around 11:30 GMT. The proposed approach is an attempt to produce more robust ash dispersal forecasts, reducing the hazard to air traffic during Etna volcanic crises.
Waubke, Holger; Kasess, Christian H.
2016-11-01
Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustic noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. To take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation for a multivariate conditional probability distribution. Up to now no analytical solution of the Kolmogorov equation in conjunction with the Bouc model has existed; therefore a wide range of approximate solutions, most notably statistical linearization, were developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.
Carena, A; Curri, V; Poggiolini, P; Jiang, Y; Forghieri, F
2014-01-01
The GN-model has been shown to overestimate the variance of non-linearity due to the signal Gaussianity approximation, leading to realistic maximum system reach predictions which may be pessimistic by about 5% to 15%, depending on fiber type and system set-up. Analytical corrections have been proposed, which however substantially increase the model complexity. In this paper we provide a closed-form, simple GN-model correction which we show to be very effective in correcting the GN-model's tendency to overestimate non-linearity. Our formula also makes it possible to clearly identify how the correction depends on key system parameters, such as the span length and loss.
Energy Technology Data Exchange (ETDEWEB)
Lewellen, W.S.; Sykes, R.I.; Parker, S.F.; Henn, D.S.; Seaman, N.L.; Stauffer, D.R.; Warner, T.T.
1989-02-01
An existing mesoscale model (the Penn State University/National Center for Atmospheric Research mesoscale model) was extended for use with a PUFF-type plume model. By including a fine-mesh 2 km nested grid and the assimilation of 4-dimensional data, horizontally variable hourly-average meteorological conditions can be simulated up to 300 km downwind of stack emissions in complex terrain. In 4 days of tests (32 90-minute periods) against meteorological observations obtained in moderately complex terrain, wind-speed uncertainties are usually less than 3.3 m/s, and direction errors are less than 40° for winds less than 1 m/s. The performance of this model was also compared, on 3 days (20 hours), with a locally homogeneous meteorological data assimilation model when both were coupled to a new second-order closure integrated puff model (SCIPUFF). Use of the new mesoscale model slightly reduced the deviations between simulated and observed concentrations of SF6 tracer, even within 50 km. At distances greater than 50 km (not tested) it is expected that use of the mesoscale model would further improve dispersion simulations. 8 refs., 26 figs., 6 tabs.
Energy Technology Data Exchange (ETDEWEB)
McGrattan, K.B.; Baum, H.R.; Walton, W.D.; Trelles, J.
1997-01-01
The model, ALOFT (A Large Outdoor Fire plume Trajectory), is based on the fundamental conservation equations that govern the introduction of hot gases and particulate matter from a large fire into the atmosphere. Two forms of the Navier-Stokes equations are solved numerically: one describes the plume rise in the first kilometer, the other the plume transport over tens of kilometers of complex terrain. Each form of the governing equations resolves the flow field at a different length scale. Particulate matter, or any non-reacting combustion product, is represented by Lagrangian particles that are advected by the fire-induced flow field. Background atmospheric motion is described in terms of the angular fluctuation of the prevailing wind, and represented by random perturbations to the mean particle paths. Results of the model are compared with three sets of field experiments. Estimates are made of the distances from the fire at which ground-level concentrations of the combustion products fall below regulatory threshold levels.
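The particle-transport idea — particles advected by the mean wind, with the background atmosphere represented by random angular perturbations of the particle paths — can be sketched as a 2-D random walk. The wind speed, angular spread and step counts below are illustrative assumptions, not ALOFT's actual parameterization:

```python
import math
import random

def disperse(n_particles, n_steps, u_mean, sigma_theta, dt=1.0, seed=1):
    """Advect Lagrangian particles with a mean wind u_mean (m/s) along x;
    each step the wind direction is perturbed by a Gaussian angle of standard
    deviation sigma_theta (radians), mimicking the angular fluctuation of the
    prevailing wind. Returns the final (x, y) positions."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.gauss(0.0, sigma_theta)
            x += u_mean * math.cos(theta) * dt
            y += u_mean * math.sin(theta) * dt
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = disperse(200, 100, 5.0, 0.1)
mean_x = sum(xs) / len(xs)      # downwind travel ~ u_mean * n_steps * dt
spread_y = max(ys) - min(ys)    # crosswind spread from angular fluctuations
```

The ensemble travels downwind at essentially the mean wind speed while the crosswind spread grows with the angular fluctuation, which is the mechanism by which the model's ground-level concentrations dilute with distance from the fire.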
Cozzarelli, I. M.; Esaid, H. I.; Bekins, B. A.; Eganhouse, R. P.; Baedecker, M.
2002-05-01
Assessment of natural attenuation as a remedial option requires understanding the long-term fate of contaminant compounds. The development of correct conceptual models of biodegradation requires observations at spatial and temporal scales appropriate for the reactions being measured. For example, the availability of electron acceptors such as solid-phase iron oxides may vary at the cm scale due to aquifer heterogeneities. Characterizing the distribution of these oxides may require small-scale measurements over time scales of tens of years in order to assess their impact on the fate of contaminants. The long-term study of natural attenuation of hydrocarbons in a contaminant plume near Bemidji, MN, provides insight into how natural attenuation of hydrocarbons evolves over time. The sandy glacial-outwash aquifer at this USGS Toxic Substances Hydrology research site was contaminated by crude oil in 1979. During the 16 years over which data have been collected, the shape and extent of the contaminant plume changed as redox reactions, most notably iron reduction, progressed over time. Investigation of the controlling microbial reactions in this system required a systematic, multi-scaled approach. Early indications of plume shrinkage were observed over a time scale of a few years, based on observation-well data. These changes were associated with iron reduction near the crude-oil source. The depletion of Fe(III) oxides near the contaminant source caused the dissolved iron concentrations to increase and spread downgradient at a rate of approximately 3 m/year. The zone of maximum benzene, toluene, ethylbenzene, and xylene (BTEX) concentrations has also spread within the anoxic plume. Subsequent analyses of sediment and water, collected at small cm-scale intervals from cores in the contaminant plume, provided insight into the evolution of redox zones at smaller scales. Contaminants, such as ortho-xylene, that appeared to be contained near the oil source based on the larger
Dubkov, Alexander A.; Kharcheva, Anna A.
2014-05-01
Two generalized Verhulst equations, with non-Gaussian fluctuations of the reproduction rate and of the volume of resources, are investigated analytically. For the first model, using the central limit theorem, we find the asymptotic behavior of the probability distribution of population density for an arbitrary non-Gaussian colored noise with nonzero power spectral density at zero frequency. Specifically, we confirm this result in the case of Markovian dichotomous noise and examine the evolution of the mean population density. For fluctuating resources with a one-sided stable distribution, the transient dynamics of the probability density function and the statistical characteristics of the steady state are obtained. As shown, the scenario of the population's evolution depends on the nonlinearity parameter in the original stochastic equation.
Armand, P. P.; Achim, P.; Taffary, T.
2006-12-01
The monitoring of atmospheric radioactive xenon concentrations is performed for nuclear safety regulatory requirements. It is also planned to be used for the detection of hypothetical nuclear tests in the framework of the Comprehensive nuclear-Test-Ban Treaty (CTBT). In this context, the French Atomic Energy Commission designed a highly sensitive, automated fieldable station, named SPALAX, to measure the activity concentrations of xenon isotopes in the atmosphere. SPALAX stations were set up in Western Europe and have been operated almost continuously for three years or more, detecting principally xenon-133 and more rarely xenon-135, xenon-133m and xenon-131m. There are around 150 nuclear power plants in the European Union, as well as research reactors, reprocessing plants, and medical production and application facilities, releasing radioactive xenon in normal or incidental operations. A numerical study was carried out to explain the SPALAX measurements. The mesoscale atmospheric transport modelling involves the MM5 suite (PSU/NCAR) to predict the wind fields on nested domains, and FLEXPART, a 3D Lagrangian particle dispersion code, used to simulate the backward transport of xenon plumes detected by SPALAX. For every detection event, at least one potential xenon source has a significant emission efficiency. The identified likely sources are located either quite close to the SPALAX stations (some tens of kilometres) or farther away (a few hundred kilometres). A baseline of some mBq per cubic metre of xenon-133 is generated by the nuclear power plants. Peaks of xenon-133 ranging from tens to hundreds of mBq per cubic metre originate from a radioisotope production facility. The calculated xenon source terms required to reproduce the SPALAX measurements are discussed and seem consistent with realistic emissions from the xenon sources in Western Europe.
Modelling of transport and biogeochemical processes in pollution plumes: Vejen landfill, Denmark
DEFF Research Database (Denmark)
Brun, A.; Engesgaard, Peter Knudegaard; Christensen, Thomas Højlund;
2002-01-01
A biogeochemical transport code is used to simulate leachate attenuation, biogeochemical processes, and the development of redox zones in a pollution plume downstream of the Vejen landfill in Denmark. Calibration of the degradation parameters resulted in good agreement with the observed distribution...
Directory of Open Access Journals (Sweden)
N. E. Rosário
2013-03-01
This highlights the need to improve modelling of the regional smoke plume in order to enhance the accuracy of the radiative energy budget. An aerosol optical model based on the mean intensive properties of smoke from the southern part of the Amazon basin produced a radiative flux perturbation efficiency (RFPE) of −158 W m−2/AOD550 nm at noon. This value falls between −154 W m−2/AOD550 nm and −187 W m−2/AOD550 nm, the range obtained when spatially varying optical models were considered. The 24 h average surface radiative flux perturbation over the biomass burning season varied from −55 W m−2 close to smoke sources in the southern part of the Amazon basin and cerrado to −10 W m−2 in remote regions of the southeast Brazilian coast.
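The quoted efficiency couples linearly to the aerosol load: to first order, the radiative flux perturbation is the RFPE multiplied by the aerosol optical depth at 550 nm. The snippet below simply reworks the abstract's numbers under that linear approximation:

```python
def radiative_flux_perturbation(rfpe, aod550):
    """Flux perturbation (W m^-2) from an efficiency defined per unit
    AOD at 550 nm -- a first-order linear approximation."""
    return rfpe * aod550

# e.g. the basin-mean noon efficiency of -158 W m^-2/AOD550 at AOD550 = 0.5
# corresponds to a perturbation of -79 W m^-2.
```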
Coastal river plumes: Collisions and coalescence
Warrick, Jonathan A.; Farnsworth, Katherine L.
2017-02-01
Plumes of buoyant river water spread in the ocean from river mouths, and these plumes influence water quality, sediment dispersal, primary productivity, and circulation along the world's coasts. Most investigations of river plumes have focused on large rivers in a coastal region, for which the physical spreading of the plume is assumed to be independent from the influence of other buoyant plumes. Here we provide new understanding of the spreading patterns of multiple plumes interacting along simplified coastal settings by investigating: (i) the relative likelihood of plume-to-plume interactions at different settings using geophysical scaling, (ii) the diversity of plume frontal collision types and the effects of these collisions on spreading patterns of plume waters using a two-dimensional hydrodynamic model, and (iii) the fundamental differences in plume spreading patterns between coasts with single and multiple rivers using a three-dimensional hydrodynamic model. Geophysical scaling suggests that coastal margins with numerous small rivers (watershed areas 100,000 km2). When two plume fronts meet, several types of collision attributes were found, including reflection, subduction and occlusion. We found that the relative differences in pre-collision plume densities and thicknesses strongly influenced the resulting collision types. The three-dimensional spreading of buoyant plumes was found to be influenced by the presence of additional rivers for all modeled scenarios, including those with and without Coriolis and wind. Combined, these results suggest that plume-to-plume interactions are common phenomena for coastal regions offshore of the world's smaller rivers and for coastal settings with multiple river mouths in close proximity, and that the spreading and fate of river waters in these settings will be strongly influenced by these interactions. We conclude that new investigations are needed to characterize how plumes interact offshore of river mouths to better
Greenberg, Michael; Lioy, Paul; Ozbas, Birnur; Mantell, Nancy; Isukapalli, Sastry; Lahr, Michael; Altiok, Tayfur; Bober, Joseph; Lacy, Clifton; Lowrie, Karen; Mayer, Henry; Rovito, Jennifer
2013-11-01
We built three simulation models that can assist rail transit planners and operators to evaluate high and low probability rail-centered hazard events that could lead to serious consequences for rail-centered networks and their surrounding regions. Our key objective is to provide these models to users who, through planning with these models, can prevent events or more effectively react to them. The first of the three models is an industrial systems simulation tool that closely replicates rail passenger traffic flows between New York Penn Station and Trenton, New Jersey. Second, we built and used a line source plume model to trace chemical plumes released by a slow-moving freight train that could impact rail passengers, as well as people in surrounding areas. Third, we crafted an economic simulation model that estimates the regional economic consequences of a variety of rail-related hazard events through the year 2020. Each model can work independently of the others. However, used together they help provide a coherent story about what could happen and set the stage for planning that should make rail-centered transport systems more resistant and resilient to hazard events. We highlight the limitations and opportunities presented by using these models individually or in sequence.
Directory of Open Access Journals (Sweden)
Carlo Baldassi
In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variables (amino acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at http://areeweb.polito.it/ricerca/cmp/code.
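The core of the Gaussian variant is that, once the variables are continuous, the couplings are just entries of the inverse covariance matrix. A toy version (continuous columns instead of the authors' one-hot amino-acid encoding, and a plain ridge term instead of their regularization) can be sketched as:

```python
import numpy as np

def gaussian_dca_scores(msa, pseudocount=0.5):
    """Toy multivariate-Gaussian direct-coupling analysis: treat each
    alignment column as a continuous variable, regularize the empirical
    covariance, and score column pairs by the magnitude of the
    corresponding precision-matrix (inverse covariance) entries.
    `msa` is an (n_sequences, n_columns) numeric array."""
    msa = np.asarray(msa, dtype=float)
    L = msa.shape[1]
    cov = np.cov(msa, rowvar=False)
    cov += pseudocount * np.eye(L)   # ridge regularization
    J = np.linalg.inv(cov)           # couplings of the Gaussian model
    scores = np.abs(J)
    np.fill_diagonal(scores, 0.0)    # self-couplings are not contacts
    return scores
```

Directly coupled column pairs receive large scores even when indirect correlations are present, which is the point of inverting the covariance rather than thresholding it.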
Lin, Guoxing
2017-02-01
The pulsed field gradient (PFG) technique is a noninvasive tool that has been increasingly employed to study anomalous diffusion in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). However, the analysis of PFG anomalous diffusion is much more complicated than that of normal diffusion. In this paper, a modified Gaussian phase distribution method based on a fractal derivative model is proposed to describe PFG anomalous diffusion. By using the phase distribution obtained from the effective phase shift diffusion method based on fractal derivatives, and employing some of the traditional Gaussian phase distribution approximation techniques, a general signal attenuation expression for free fractional diffusion is derived. This expression describes a stretched-exponential attenuation, which is distinct from both the exponential attenuation for normal diffusion obtained from the conventional Gaussian phase distribution approximation and the Mittag-Leffler-function-based attenuation for anomalous diffusion obtained from the fractional derivative. The obtained signal attenuation expression can account for the finite gradient pulse width (FGPW) effect. Additionally, it can generally be applied to all three types of PFG fractional diffusion classified according to the time derivative order α and the space derivative order β. These three types of fractional diffusion include time-fractional diffusion with { 0 reported results based on the effective phase shift diffusion equation method and the instantaneous signal attenuation method. This method provides a new, convenient approximation formalism for analyzing PFG anomalous diffusion experiments. An expression that can simultaneously describe general fractional diffusion and the FGPW effect could be especially important in PFG MRI, where the narrow gradient pulse limit cannot be satisfied.
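Schematically, a stretched-exponential attenuation of the kind discussed above can be written E(q, t) = exp(−D qᵝ tᵅ); the prefactors and the exact definition of the gradient variable in the paper's derivation differ, so this is a qualitative illustration only:

```python
import math

def pfg_attenuation(q, t, D=1.0, alpha=1.0, beta=2.0):
    """Schematic PFG signal attenuation in the narrow-pulse picture:
    a stretched exponential E(q, t) = exp(-D * q**beta * t**alpha).
    alpha = 1, beta = 2 recovers the normal-diffusion result
    E = exp(-D * q**2 * t); other exponents mimic the fractional
    (anomalous) cases.  Prefactors are illustrative, not the paper's."""
    return math.exp(-D * q ** beta * t ** alpha)
```

Subdiffusion (alpha < 1) attenuates the signal more slowly at long times than normal diffusion does, which is the experimentally visible signature.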
Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.
2012-01-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive
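BMA's predictive distribution is a weighted mixture of the member models' predictive densities. A minimal sketch with Gaussian members follows; the weights are assumed to have been trained already (typically by EM on past forecast-observation pairs), which is the part this sketch omits:

```python
import math

def bma_predictive_density(y, means, sigmas, weights):
    """Bayesian model averaging: the combined predictive density at y is
    the weighted sum of the member models' (here Gaussian) predictive
    densities.  `weights` should be nonnegative and sum to one."""
    def normal_pdf(y, m, s):
        return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return sum(w * normal_pdf(y, m, s)
               for w, m, s in zip(weights, means, sigmas))
```

Because the result is a mixture, the BMA density can be wider (and multimodal) than any single member's density, which is how it improves the spread-skill relationship of the ensemble.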
Atmospheric chemistry in volcanic plumes.
von Glasow, Roland
2010-04-13
Recent field observations have shown that the atmospheric plumes of quiescently degassing volcanoes are chemically very active, pointing to the role of chemical cycles involving halogen species and heterogeneous reactions on aerosol particles that had previously been unexplored for this type of volcanic plume. Key features of these measurements can be reproduced by numerical models such as the one employed in this study. The model shows sustained high levels of reactive bromine in the plume, leading to extensive ozone destruction that, depending on plume dispersal, can be maintained for several days. The very high concentrations of sulfur dioxide in the volcanic plume drastically reduce the lifetime of the OH radical, so that it is virtually absent in the plume. This would imply an increased lifetime of methane in volcanic plumes, unless reactive chlorine chemistry in the plume is strong enough to offset the lack of OH chemistry. A further effect of bromine chemistry, in addition to the ozone destruction shown by the model studies presented here, is the oxidation of mercury. This applies to mercury co-emitted with bromine from the volcano, but also to background atmospheric mercury. The rapid oxidation of mercury implies a drastically reduced atmospheric lifetime, so that the contribution of volcanic mercury to the atmospheric background might be smaller than previously thought. However, the implications, especially health and environmental effects due to deposition, might be substantial and warrant further studies, especially field measurements to test this hypothesis.
A Monte Carlo simulation model for stationary non-Gaussian processes
DEFF Research Database (Denmark)
Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.
2003-01-01
A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite-dimensional distributions consisting of mixtures of finite-dimensional distributions of translation processes. The class includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions; moreover, these processes can match a broader range of second-moment properties. Examples illustrate the proposed Monte Carlo algorithm and compare features of translation processes and mixtures of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes.
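A translation process itself (the building block of the mixtures above) is straightforward to sample: map a correlated Gaussian process through the standard normal CDF and then through the inverse CDF of the desired marginal. A minimal sketch with an AR(1) Gaussian core and exponential marginals; the marginal choice and all parameters are illustrative:

```python
import math
import random

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_translation_process(n=10000, rho=0.8, rate=1.0, seed=0):
    """Sample a translation process X(t) = F^{-1}(Phi(G(t))): an AR(1)
    Gaussian process G is pushed through the normal CDF Phi and then
    the inverse CDF of the target marginal (exponential with the given
    rate).  The result is stationary and non-Gaussian, with exponential
    marginals and correlation inherited from G."""
    rng = random.Random(seed)
    g, xs = 0.0, []
    c = math.sqrt(1.0 - rho * rho)   # keeps G stationary with unit variance
    for _ in range(n):
        g = rho * g + c * rng.gauss(0.0, 1.0)
        u = min(max(normal_cdf(g), 1e-12), 1.0 - 1e-12)
        xs.append(-math.log(1.0 - u) / rate)  # exponential inverse CDF
    return xs
```

A mixture of translation processes would draw each path (or block) from one of several such constructions according to mixing weights; this sketch shows only a single component.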
Leroy-Cancellieri, V.; Augustin, P.; Filippi, J. B.; Mari, C.; Fourmentin, M.; Bosseur, F.; Morandini, F.; Delbarre, H.
2014-03-01
Vegetation fires emit large amounts of gases and aerosols which are detrimental to human health. Smoke exposure near and downwind of fires depends on the fire propagation, the atmospheric circulation and the burnt vegetation. Better knowledge of the interaction between wildfire and atmosphere is a primary requirement for investigating fire smoke and particle transport. The purpose of this paper is to highlight the usefulness of a UV scanning lidar to characterise the fire smoke plume and consequently validate fire-atmosphere model simulations. An instrumented burn was conducted in a Mediterranean area of low, dense shrubs typical of those frequently subject to wildfires. Using lidar measurements positioned near the experimental site, the fire smoke plume was thoroughly characterised by its optical properties, edges and dynamics. These parameters were obtained by combining methods based on a lidar inversion technique, wavelet edge detection and a backscatter barycentre technique. The smoke plume displacement was determined using a digital video camera coupled with the lidar. The simulation was performed using a mesoscale atmospheric model in a large eddy simulation configuration (Meso-NH) coupled to a fire propagation physical model (ForeFire), taking into account the effects of wind, slope and fuel properties. A passive numerical scalar tracer was injected into the model at the fire location to mimic the smoke plume. The simulated fire smoke plume width remained within the smoke plume edges obtained from lidar measurements. The maximum smoke injection derived from lidar backscatter coefficients and from the simulated passive tracer was around 200 m. The vertical position of the simulated plume barycentre was systematically below the barycentre derived from the lidar backscatter coefficients, due to the oversimplified properties of the passive tracer compared to real aerosol particles. Simulated speed and horizontal location of the plume compared well with the observations derived from
Leroy-Cancellieri, V.; Augustin, P.; Filippi, J. B.; Mari, C.; Fourmentin, M.; Bosseur, F.; Morandini, F.; Delbarre, H.
2013-08-01
Vegetation fires emit large amounts of gases and aerosols which are detrimental to human health. Smoke exposure near and downwind of fires depends on the fire propagation, the atmospheric circulation and the burnt vegetation. Better knowledge of the interaction between wildfire and atmosphere is a primary requirement for investigating fire smoke and particle transport. The purpose of this paper is to highlight the usefulness of a UV scanning lidar to characterize the fire smoke plume and consequently validate fire-atmosphere model simulations. An instrumented burn was conducted in a Mediterranean area of low, dense shrubs typical of those frequently subject to wildfires. Using lidar measurements positioned near the experimental site, the fire smoke plume was thoroughly characterized by its optical properties, edges and dynamics. These parameters were obtained by combining methods based on a lidar inversion technique, wavelet edge detection and a backscatter barycenter technique. The smoke plume displacement was determined using a digital video camera coupled with the lidar. The simulation was performed using a mesoscale atmospheric model in a large eddy simulation configuration (Meso-NH) coupled to a fire propagation physical model (ForeFire), taking into account the effects of wind, slope and fuel properties. A passive numerical scalar tracer was injected into the model at the fire location to mimic the smoke plume. The simulated fire smoke plume width remained within the smoke plume edges obtained from lidar measurements. The maximum smoke injection derived from lidar backscatter coefficients and from the simulated passive tracer was around 200 m. The vertical position of the simulated plume barycenter was systematically below the barycenter derived from the lidar backscatter coefficients, due to the oversimplified properties of the passive tracer compared to real aerosol particles. Simulated speed and horizontal location of the plume compared well with the observations derived from
Energy Technology Data Exchange (ETDEWEB)
Vesselinov, Velimir V.; Broxton, David; Birdsell, Kay; Reneau, Steven; Harp, Dylan; Mishra, Phoolendra [Computational Earth Science - EES-16, Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos NM 87545 (United States); Katzman, Danny; Goering, Tim [Environmental Programs (ADEP), Los Alamos National Laboratory, Los Alamos NM 87545 (United States); Vaniman, David; Longmire, Pat; Fabryka-Martin, June; Heikoop, Jeff; Ding, Mei; Hickmott, Don; Jacobs, Elaine [Earth Systems Observations - EES-14, Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos NM 87545 (United States)
2013-07-01
A series of site investigations and decision-support analyses have been performed related to a chromium plume in the regional aquifer beneath Los Alamos National Laboratory (LANL). Based on the collected data and site information, alternative conceptual and numerical models representing the governing subsurface processes with different complexity and resolution have been developed. The current conceptual model is supported by multiple lines of evidence based on comprehensive analyses of the available data and modeling results. The model is applied to decision-support analyses related to the estimation of contaminant-arrival locations and the chromium mass flux reaching the regional aquifer, and to the optimization of a site monitoring-well network. Plume characterization is a challenging and non-unique problem because multiple models and contamination scenarios are consistent with the site data and conceptual knowledge. To solve this complex problem, an advanced methodology based on model calibration and uncertainty quantification has been developed within the computational framework MADS (http://mads.lanl.gov). This work implements high-performance computing and novel, efficient and robust model-analysis techniques for optimization and uncertainty quantification (ABAGUS, Squads, multi-try (multi-start) techniques), which allow for solving problems with many degrees of freedom. (authors)
Zhang, Ruoqiao; Pal, Debashish; Thibault, Jean-Baptiste; Sauer, Ken D; Bouman, Charles A
2016-01-01
Markov random fields (MRFs) have been widely used as prior models in various inverse problems such as tomographic reconstruction. While MRFs provide a simple and often effective way to model the spatial dependencies in images, they suffer from the fact that parameter estimation is difficult. In practice, this means that MRFs typically have very simple structure that cannot completely capture the subtle characteristics of complex images. In this paper, we present a novel Gaussian mixture Markov random field model (GM-MRF) that can be used as a very expressive prior model for inverse problems such as denoising and reconstruction. The GM-MRF forms a global image model by merging together individual Gaussian-mixture models (GMMs) for image patches. In addition, we present a novel analytical framework for computing MAP estimates using the GM-MRF prior model through the construction of surrogate functions that result in a sequence of quadratic optimizations. We also introduce a simple but effective method to adjust...
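The surrogate-function idea can be illustrated on a single scalar with a two-component Gaussian-mixture prior: at each iterate the mixture log-prior is majorized by a single quadratic (weighted by the component responsibilities at the current point), so every MAP update is a closed-form weighted average. This is a toy analogue of the optimization strategy, not the paper's patch-based GM-MRF; all parameters are invented for illustration:

```python
import math

def gmm_map_denoise(y, noise_var=0.25,
                    pis=(0.5, 0.5), mus=(-1.0, 1.0), vars_=(0.1, 0.1),
                    iters=50):
    """MAP estimate of a scalar x from noisy observation y, with a
    Gaussian likelihood (variance noise_var) and a two-component GMM
    prior.  Each iteration majorizes the mixture log-prior by a
    quadratic via the component responsibilities, giving a closed-form
    weighted least-squares update (a majorize-minimize scheme)."""
    x = y
    for _ in range(iters):
        # responsibilities of each mixture component at the current x
        ws = [p * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(v)
              for p, m, v in zip(pis, mus, vars_)]
        z = sum(ws)
        ws = [w / z for w in ws]
        # quadratic surrogate -> weighted average of data and prior means
        num = y / noise_var + sum(w * m / v for w, m, v in zip(ws, mus, vars_))
        den = 1.0 / noise_var + sum(w / v for w, v in zip(ws, vars_))
        x = num / den
    return x
```

An observation near one mixture mode is pulled toward that mode, while the other mode's influence vanishes through its near-zero responsibility; in the full GM-MRF the same mechanics apply per image patch.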
Capabilities of 3-D wavelet transforms to detect plume-like structures from seismic tomography
Bergeron, Stephen Y.; Yuen, David A.; Vincent, Alain P.
2000-10-01
The wavelet transform method has been applied to viewing 3-D seismic tomography by casting the transformed quantities into two proxy distributions: E-max, the maximum of the magnitude of the local spectra about a point, and the associated local wavenumber, k-max. Using stochastic background noise, we test the capability of this procedure to pick out the coherent structures of upper-mantle plumes. Plumes with a Gaussian shape and a characteristic width of up to 2250 km have been tested for various signal-to-noise ratios (SNR). We have found that plumes can be picked out for an SNR as low as 0.08 dB and that the optimal plume width for detection is around 1500 km. For plume widths between 700 km and 2000 km, the SNR can be lower than 1 dB. This length scale falls within the range for plume detection based on the signal-to-noise levels associated with current global tomographic models.
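In one dimension, the E-max proxy reduces to taking, at each location, the maximum magnitude of the continuous wavelet transform over a set of scales. A small self-contained sketch with a Mexican-hat wavelet and a synthetic "plume" anomaly buried in noise (the paper works in 3-D on seismic tomography, so this is only a one-dimensional analogue with invented parameters):

```python
import math
import random

def mexican_hat(t, s):
    """Mexican-hat (Ricker) wavelet evaluated at offset t, scale s."""
    u = t / s
    return (1.0 - u * u) * math.exp(-0.5 * u * u)

def emax_profile(values, scales):
    """At each sample, the maximum magnitude over scales of a direct-
    convolution continuous wavelet transform -- a 1-D toy analogue of
    the E-max proxy distribution."""
    n = len(values)
    emax = [0.0] * n
    for s in scales:
        half = int(4 * s)  # wavelet support is effectively a few scales
        for i in range(n):
            c = sum(values[j] * mexican_hat(j - i, s)
                    for j in range(max(0, i - half), min(n, i + half + 1)))
            emax[i] = max(emax[i], abs(c) / math.sqrt(s))
    return emax

# Synthetic "plume": a Gaussian anomaly centred at index 100, in noise.
random.seed(2)
values = [math.exp(-((i - 100) / 10.0) ** 2) + random.gauss(0.0, 0.2)
          for i in range(200)]
profile = emax_profile(values, scales=[5.0, 10.0, 20.0])
```

The location of the E-max peak recovers the anomaly's position even though individual samples are noise-dominated, which is the essence of the detection diagnostic.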
Wang, Junming; Hiscox, April L; Miller, David R; Meyer, Thomas H; Sammis, Ted W
2009-11-01
A Lagrangian particle model has been adapted to examine human exposures to particulate matter. Friction velocity, Monin-Obukhov length, and wind direction were measured at 1-second resolution with a three-axis sonic anemometer at a single point in the field (at 1.5-m height). The Lagrangian model of Wang et al. predicted the near-field concentrations of dust plumes emitted from a field disking operation with an ov