Low Complexity Bayesian Single Channel Source Separation
DEFF Research Database (Denmark)
Beierholm, Thomas; Pedersen, Brian Dam; Winther, Ole
2004-01-01
can be estimated quite precisely using ML-II, but the estimation is quite sensitive to the accuracy of the priors as opposed to the source separation quality for known mixing coefficients, which is quite insensitive to the accuracy of the priors. Finally, we discuss how to improve our approach while...
Bayesian mixture models for source separation in MEG
International Nuclear Information System (INIS)
Calvetti, Daniela; Homa, Laura; Somersalo, Erkki
2011-01-01
This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)
Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav
2015-01-01
Roč. 12, č. 4 (2015), s. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf
Bayesian component separation: The Planck experience
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Sparse Bayesian Learning for Nonstationary Data Sources
Fujimaki, Ryohei; Yairi, Takehisa; Machida, Kazuo
This paper proposes an online Sparse Bayesian Learning (SBL) algorithm for modeling nonstationary data sources. Although most learning algorithms implicitly assume that a data source does not change over time (stationary), real-world sources usually do change, owing to factors such as dynamically changing environments, device degradation, and sudden failures (nonstationary). The proposed algorithm reduces to stationary online SBL when its time-decay parameters are set to zero, and it can therefore be interpreted as a single unified framework for online SBL with both stationary and nonstationary data sources. Tests on four types of benchmark problems and on actual stock price data show that it performs well.
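The time-decay idea in this abstract can be illustrated with a deliberately simplified sketch: a recursive Bayesian linear-regression update in which a forgetting factor discounts old evidence. This illustrates the general mechanism only, not the paper's SBL algorithm; all names and numbers below are invented.

```python
import numpy as np

def forgetful_update(mean, prec, x, y, noise_prec=1.0, decay=0.05):
    """One recursive Bayesian linear-regression step with a forgetting
    factor; decay=0 recovers the usual stationary online update."""
    prec = (1.0 - decay) * prec                       # discount old evidence
    prec_new = prec + noise_prec * np.outer(x, x)     # add the new observation
    mean_new = np.linalg.solve(prec_new, prec @ mean + noise_prec * y * x)
    return mean_new, prec_new

# drifting 1-D source: the true slope flips halfway through the stream
rng = np.random.default_rng(0)
mean, prec = np.zeros(1), np.eye(1)
for t in range(400):
    w_true = 2.0 if t < 200 else -1.0
    x = rng.normal(size=1)
    y = w_true * x[0] + 0.1 * rng.normal()
    mean, prec = forgetful_update(mean, prec, x, y)
print(round(float(mean[0]), 2))  # tracks the post-change slope near -1
```

With `decay=0.05` the effective memory is roughly 20 samples, so the estimate follows the changed slope instead of averaging over the whole stream.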
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua Michael
We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
Convolutive Blind Source Separation Methods
DEFF Research Database (Denmark)
Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik
2008-01-01
During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from the recorded mixtures, or at least to segregate a particular source. Furthermore, it may be useful to identify the mixing process itself to reveal information about the physical mixing system. In some simple mixing models each recording consists of a sum of differently weighted source signals. However, in many real-world applications, such as in acoustics, the mixing process is more complex. In such systems, the mixtures are weighted and delayed, and each source contributes to the sum with multiple delays corresponding to the multiple paths by which an acoustic signal propagates to a microphone…
Neuronal integration of dynamic sources: Bayesian learning and Bayesian inference.
Siegelmann, Hava T; Holzman, Lars E
2010-09-01
One of the brain's most basic functions is integrating sensory data from diverse sources. This ability causes us to question whether the neural system is computationally capable of intelligently integrating data, not only when sources have known, fixed relative dependencies but also when it must determine such relative weightings based on dynamic conditions, and then use these learned weightings to accurately infer information about the world. We suggest that the brain is, in fact, fully capable of computing this parallel task in a single network and describe a neural inspired circuit with this property. Our implementation suggests the possibility that evidence learning requires a more complex organization of the network than was previously assumed, where neurons have different specialties, whose emergence brings the desired adaptivity seen in human online inference.
Bayesian statistics in radionuclide metrology: measurement of a decaying source
International Nuclear Information System (INIS)
Bochud, F. O.; Bailat, C.J.; Laedermann, J.P.
2007-01-01
The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is performed through the example of an yttrium-90 source typically encountered in environmental surveillance measurement. Because of the very low activity of this kind of source and the short half-life of the radionuclide, this measurement takes several days, during which the source decays significantly. Several methods are proposed to compute simultaneously the number of unstable nuclei at a given reference time, the decay constant and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals in a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation. (authors)
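The estimation problem described above can be sketched with a toy grid posterior for a decaying source measured by daily Poisson counts. For simplicity the decay constant and background are held fixed and only the initial number of unstable nuclei is inferred; all numbers are illustrative stand-ins, not the paper's yttrium-90 data.

```python
import numpy as np

# Toy setup (invented numbers): daily counts from a decaying source with
# known decay constant lam (1/day) and known background b (counts/day);
# only N0, the number of unstable nuclei at the reference time, is inferred.
lam, b, days = np.log(2) / 2.67, 5.0, np.arange(10)   # Y-90 half-life ~2.67 d
N0_true = 400.0
rng = np.random.default_rng(1)

def expected(N0):
    # decays registered during day i, plus background
    return N0 * (np.exp(-lam * days) - np.exp(-lam * (days + 1))) + b

counts = rng.poisson(expected(N0_true))

# grid posterior over N0 under a flat prior and a Poisson likelihood
grid = np.linspace(100.0, 800.0, 1401)
loglike = np.array([np.sum(counts * np.log(expected(N0)) - expected(N0))
                    for N0 in grid])
post = np.exp(loglike - loglike.max())
post /= post.sum()
N0_map = grid[post.argmax()]
print(round(float(N0_map)))  # posterior mode, close to the true 400
```

The normalized `post` array also yields credible intervals directly, which is the kind of coherent interval estimate the abstract contrasts with the frequentist treatment.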
Blind source separation dependent component analysis
Xiang, Yong; Yang, Zuyuan
2015-01-01
This book provides readers with a complete and self-contained treatment of dependent source separation, including the latest developments in the field. It opens with an overview of blind source separation, presenting three promising blind separation techniques that can tackle mutually correlated sources. The book then focuses on non-negativity-based methods, time-frequency-analysis-based methods, and pre-coding-based methods, respectively.
Efficient Bayesian experimental design for contaminant source identification
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
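For a linear-Gaussian toy problem the expected relative entropy (expected information gain) of a candidate sampling location has a closed form, which makes the selection step easy to illustrate. The sensitivity profile and all parameters below are invented stand-ins for the contaminant-transport surrogate.

```python
import numpy as np

# Scalar stand-in for the design problem: a measurement at candidate
# location d gives y = g(d) * theta + noise. Names and numbers invented.
sigma_theta, sigma_eps = 2.0, 0.5

def g(d):
    return np.exp(-(d - 3.0) ** 2)        # assumed sensitivity profile

def expected_info_gain(d):
    # closed-form expected KL divergence for a linear-Gaussian model
    return 0.5 * np.log(1.0 + (g(d) * sigma_theta / sigma_eps) ** 2)

candidates = np.linspace(0.0, 6.0, 61)
gains = [expected_info_gain(d) for d in candidates]
best = candidates[int(np.argmax(gains))]
print(float(best))  # the most informative location; d = 3.0 here
```

In the nonlinear transport setting of the paper this expectation has no closed form and must be estimated by Monte Carlo over the surrogate model; the selection logic is the same.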
Removal of micropollutants in source separated sanitation
Butkovskyi, A.
2015-01-01
Source separated sanitation is an innovative sanitation method designed for minimizing use of energy and clean drinking water, and maximizing reuse of water, organics and nutrients from waste water. This approach is based on separate collection and treatment of toilet wastewater (black water) and the rest of the domestic wastewater (grey water).
Nitrate source apportionment in a subtropical watershed using Bayesian model
Energy Technology Data Exchange (ETDEWEB)
Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Shi, Jiachun, E-mail: jcshi@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Wu, Laosheng, E-mail: laowu@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Jiang, Yonghai [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012 (China)
2013-10-01
Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeded the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet seasons: AD and M and S contributed more in December than in May, whereas SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future work. With the assessment of temporal variation and sources of NO₃⁻, better agricultural management practices and sewage disposal programs can be implemented to sustain water quality in subtropical watersheds.
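The SIAR-style mixing idea can be sketched in miniature: with two sources and two tracers, the posterior over the mixing fraction can be evaluated on a grid. The source signatures and samples below are illustrative, not the study's data, and a flat prior stands in for SIAR's full Dirichlet machinery.

```python
import numpy as np

# Two sources with assumed (d15N, d18O) signatures; each observed sample is
# modeled as f*src_A + (1-f)*src_B + Gaussian noise. Illustrative values only.
src_A = np.array([12.0, 1.0])             # e.g. a manure-and-sewage-like source
src_B = np.array([4.0, 8.0])              # e.g. an atmospheric-deposition-like source
sigma = 1.0
samples = np.array([[9.5, 3.2], [10.1, 2.8], [8.9, 3.9]])

# grid posterior over the mixing fraction f under a flat prior on [0, 1]
f_grid = np.linspace(0.0, 1.0, 501)
loglike = np.empty_like(f_grid)
for i, f in enumerate(f_grid):
    mu = f * src_A + (1.0 - f) * src_B
    loglike[i] = -np.sum((samples - mu) ** 2) / (2.0 * sigma ** 2)
post = np.exp(loglike - loglike.max())
post /= post.sum()
f_map = f_grid[post.argmax()]
print(round(float(f_map), 2))  # posterior-mode contribution of source A
```

SIAR generalizes this to more sources and samples with Dirichlet priors on the fraction vector and uncertainty on the source signatures themselves.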
Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
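The Hamiltonian Monte Carlo building block that the approach above starts from can be sketched in a few lines for a one-dimensional Gaussian posterior. This omits the polymer reinterpretation and multiple-time-scale integration that are the paper's actual contribution.

```python
import numpy as np

def hmc_step(q, logp_grad, rng, eps=0.1, n_leap=20):
    """One HMC step for a 1-D target; here U(q) = -log p(q) = q**2 / 2."""
    p = rng.normal()                          # resample momentum
    q_new, p_new = q, p
    p_new += 0.5 * eps * logp_grad(q_new)     # leapfrog: initial half-kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * logp_grad(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * logp_grad(q_new)     # leapfrog: final half-kick
    dH = (q_new ** 2 + p_new ** 2 - q ** 2 - p ** 2) / 2.0
    return q_new if rng.random() < np.exp(-dH) else q  # Metropolis accept

rng = np.random.default_rng(2)
grad = lambda q: -q                           # gradient of log N(0, 1)
draws, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q, grad, rng)
    draws.append(q)
print(round(float(np.std(draws)), 1))  # sample std, near 1 for N(0, 1)
```

The leapfrog integrator is what the paper splits across multiple time scales when the number of measurement or simulation points grows.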
Fast Bayesian Optimal Experimental Design for Seismic Source Inversion
Long, Quan; Motamed, Mohammad; Tempone, Raul
2016-01-01
We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.
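The claimed scaling, a Hessian proportional to the number of receivers, implies that the Laplace-approximate posterior standard deviation shrinks like 1/√N. The toy check below uses identical receivers with invented sensitivity and noise values.

```python
import numpy as np

# One scalar source parameter observed by N identical receivers:
# per-receiver sensitivity h, noise std sigma (both invented values).
# The Gauss-Newton Hessian of the misfit is a sum of N identical terms,
# so the Laplace-approximate posterior std is sigma / (h * sqrt(N)).
h, sigma = 1.7, 0.3
stds = []
for N in (4, 16, 64):
    hessian = N * h ** 2 / sigma ** 2
    stds.append(1.0 / np.sqrt(hessian))
    print(N, round(stds[-1], 4))
# quadrupling the number of receivers halves the posterior std
```

This concentration of the posterior is precisely what justifies the Laplace approximation used to accelerate the expected-information-gain estimate.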
Fast Bayesian optimal experimental design for seismic source inversion
Long, Quan
2015-07-01
We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.
Bayesian source term determination with unknown covariance of measurements
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with estimation of the source term in the conventional linear inverse problem y = Mx, where the vector of observations y is related to the unknown source term x through the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the minimization of (y − Mx)ᵀR⁻¹(y − Mx) + xᵀB⁻¹x. The first term penalizes the misfit to the measurements, with covariance matrix R, and the second term is a regularization of the source term. Different choices of the matrices R and B yield different types of regularization; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
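For fixed R and B the quadratic objective above has a closed-form minimizer, which is a useful baseline before the variational treatment of unknown covariances. The data below are random stand-ins, not the ETEX setup.

```python
import numpy as np

# Synthetic stand-in for y = M x: 30 observations, 8 source components.
rng = np.random.default_rng(3)
M = rng.normal(size=(30, 8))                  # stand-in SRS matrix
x_true = np.abs(rng.normal(size=8))           # non-negative release magnitudes
y = M @ x_true + 0.05 * rng.normal(size=30)

# fixed covariances: i.i.d. noise (R = I) and a weak Tikhonov-style prior
R_inv = np.eye(30)
B_inv = 0.01 * np.eye(8)

# minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x
x_hat = np.linalg.solve(M.T @ R_inv @ M + B_inv, M.T @ R_inv @ y)
err = float(np.max(np.abs(x_hat - x_true)))
print(round(err, 3))  # small reconstruction error on this well-posed toy
```

The variational Bayes approach in the abstract iterates between updates of this form and updates of the (then unknown) diagonal entries of R and B.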
Blind source separation theory and applications
Yu, Xianchuan; Xu, Jindong
2013-01-01
A systematic exploration of both classic and contemporary algorithms in blind source separation with practical case studies The book presents an overview of Blind Source Separation, a relatively new signal processing method. Due to the multidisciplinary nature of the subject, the book has been written so as to appeal to an audience from very different backgrounds. Basic mathematical skills (e.g. on matrix algebra and foundations of probability theory) are essential in order to understand the algorithms, although the book is written in an introductory, accessible style. This book offers
Direction-of-Arrival Estimation for Coherent Sources via Sparse Bayesian Learning
Directory of Open Access Journals (Sweden)
Zhang-Meng Liu
2014-01-01
A spatial filtering-based relevance vector machine (RVM) is proposed in this paper to separate coherent sources and estimate their directions-of-arrival (DOA), with the filter parameters and DOA estimates initialized and refined via sparse Bayesian learning. The RVM is used to exploit the spatial sparsity of the incident signals and to gain improved adaptability to demanding scenarios such as low signal-to-noise ratio (SNR), limited snapshots, and spatially adjacent sources, while the spatial filters are introduced to enhance the global convergence of the original RVM in the case of coherent sources. The proposed method adapts to arbitrary array geometry, and simulation results show that it surpasses existing methods in DOA estimation performance.
Bayesian Source Attribution of Salmonellosis in South Australia.
Glass, K; Fearnley, E; Hocking, H; Raupach, J; Veitch, M; Ford, L; Kirk, M D
2016-03-01
Salmonellosis is a significant cause of foodborne gastroenteritis in Australia, and rates of illness have increased over recent years. We adopt a Bayesian source attribution model to estimate the contribution of different animal reservoirs to illness due to Salmonella spp. in South Australia between 2000 and 2010, together with 95% credible intervals (CrI). We excluded known travel associated cases and those of rare subtypes (fewer than 20 human cases or fewer than 10 isolates from included sources over the 11-year period), and the remaining 76% of cases were classified as sporadic or outbreak associated. Source-related parameters were included to allow for different handling and consumption practices. We attributed 35% (95% CrI: 20-49) of sporadic cases to chicken meat and 37% (95% CrI: 23-53) of sporadic cases to eggs. Of outbreak-related cases, 33% (95% CrI: 20-62) were attributed to chicken meat and 59% (95% CrI: 29-75) to eggs. A comparison of alternative model assumptions indicated that biases due to possible clustering of samples from sources had relatively minor effects on these estimates. Analysis of source-related parameters showed higher risk of illness from contaminated eggs than from contaminated chicken meat, suggesting that consumption and handling practices potentially play a bigger role in illness due to eggs, considering low Salmonella prevalence on eggs. Our results strengthen the evidence that eggs and chicken meat are important vehicles for salmonellosis in South Australia. © 2015 Society for Risk Analysis.
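The credible intervals quoted above can be illustrated with a drastically simplified attribution step: if each case were already assigned to a reservoir, the posterior over source shares under a flat Dirichlet prior is again Dirichlet. The counts below are illustrative only; the actual model attributes cases probabilistically via subtype frequencies and source-related parameters.

```python
import numpy as np

# Illustrative attributed case counts per reservoir (NOT the study's data).
counts = np.array([350, 370, 180, 100])       # e.g. chicken, eggs, pork, other
rng = np.random.default_rng(4)

# Dirichlet(1, ..., 1) prior + multinomial counts -> Dirichlet posterior
draws = rng.dirichlet(1 + counts, size=20000)

share = draws[:, 1]                           # posterior share of "eggs"
mean = float(share.mean())
lo, hi = (float(v) for v in np.percentile(share, [2.5, 97.5]))
print(round(mean, 2), round(lo, 2), round(hi, 2))  # mean and 95% CrI
```

Reporting the 2.5th and 97.5th percentiles of the posterior draws is exactly how 95% credible intervals like those in the abstract are obtained.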
Joint Matrices Decompositions and Blind Source Separation
Czech Academy of Sciences Publication Activity Database
Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.
2014-01-01
Roč. 31, č. 3 (2014), s. 34-43 ISSN 1053-5888 R&D Projects: GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : joint matrices decomposition * tensor decomposition * blind source separation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 5.852, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf
Blind source separation problem in GPS time series
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2016-04-01
A critical point in the analysis of ground displacement time series, as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered as a part of data-driven methods. A widely used technique is the principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while maintaining most of the variance of the dataset explained. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ²), looking for a new Euclidean space where the projected data are uncorrelated. The independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition
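The PCA-versus-ICA point can be demonstrated with a minimal numpy sketch: PCA whitening leaves the sources mixed, while a kurtosis-based FastICA iteration recovers them. This is plain symmetric FastICA, not the vbICA variant, and all signals are synthetic.

```python
import numpy as np

# Two independent synthetic sources, linearly mixed.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 8.0 * np.pi, 2000)
S = np.vstack([np.sign(np.sin(1.3 * t)),      # square-ish wave
               rng.uniform(-1.0, 1.0, t.size)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # mixing matrix
X = A @ S

# PCA whitening: the components are uncorrelated but still mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X

# symmetric FastICA iteration with the kurtosis nonlinearity g(u) = u**3
W = rng.normal(size=(2, 2))
for _ in range(200):
    U = W @ Z
    W = (U ** 3) @ Z.T / Z.shape[1] - 3.0 * W
    u_, _, vt_ = np.linalg.svd(W)
    W = u_ @ vt_                              # symmetric decorrelation

S_hat = W @ Z
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
print(np.round(corr.max(axis=1), 2))  # each true source matches one output
```

The whitening step is exactly what PCA provides; the extra fixed-point iteration is what pushes the components from mere decorrelation toward independence.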
International Nuclear Information System (INIS)
Xu, Y; Meng, Y X; Xu, W W
2008-01-01
A toy detector has been designed to simulate central detectors in reactor neutrino experiments. Samples of neutrino events and of the three major backgrounds from the Monte Carlo simulation of the toy detector are generated in the signal region. Bayesian neural networks (BNN) are applied to separate neutrino events from backgrounds in reactor neutrino experiments. As a result, most neutrino events and uncorrelated background events in the signal region can be identified with the BNN, and a fraction of the fast-neutron and ⁸He/⁹Li background events in the signal region can also be identified. The signal-to-noise ratio in the signal region is thereby enhanced. The neutrino discrimination increases with the neutrino rate in the training sample, whereas the background discriminations decrease as the background rate in the training sample decreases.
Directory of Open Access Journals (Sweden)
Zhujie Chu
2016-02-01
Full Text Available Municipal household solid waste (MHSW) has become a serious problem in China over the last two decades, resulting in significant side effects on the environment. Effective management of MHSW has therefore attracted wide attention from both researchers and practitioners. Separate collection, the first and most crucial step in solving the MHSW problem, has, however, not been thoroughly studied to date. In this study, an empirical survey was conducted among 387 households in Harbin, China. We use a Bayesian belief network model to determine the factors influencing separate collection. Four types of factors are identified: political, economic, socio-cultural, and technological, based on the PEST (political, economic, social and technological) analytical method. In addition, we further analyze the influence of the different factors, based on the network structure and probability changes obtained with the Netica software. Results indicate that the technological dimension has the greatest impact on MHSW separate collection, followed by the political and economic dimensions; the socio-cultural dimension has the least impact.
Blind Source Separation For Ion Mobility Spectra
International Nuclear Information System (INIS)
Marco, S.; Pomareda, V.; Pardo, A.; Kessler, M.; Goebel, J.; Mueller, G.
2009-01-01
Miniaturization is a powerful trend for smart chemical instrumentation in a diversity of applications. It is known that miniaturization in IMS leads to a degradation of the system characteristics. For the present work, we are interested in signal processing solutions to mitigate the limitations introduced by limited drift tube length, which basically involve a loss of chemical selectivity. While blind source separation (BSS) techniques are popular in other domains, their application to smart chemical instrumentation is limited. However, under some conditions, basically linearity, BSS may fully recover the concentration time evolution and the pure spectra with few underlying hypotheses. This is extremely helpful in conditions where unexpected chemical interferents may appear, or unwanted perturbations may pollute the spectra. SIMPLISMA has been advocated by Harrington et al. in several papers. However, more modern BSS methods for bilinear decomposition under a positivity constraint have appeared in the last decade. In order to explore and compare the performance of these methods, a series of experiments was performed.
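A common positivity-constrained bilinear decomposition of the kind mentioned above is non-negative matrix factorization (NMF). The sketch below is a hedged illustration on invented data (the peak positions, concentration profiles and noise level are assumptions, not IMS measurements): it factors a matrix of mixed spectra into non-negative concentration profiles and pure spectra.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
channels = np.arange(200)

def peak(center, width):
    """Gaussian-shaped spectral peak on the channel axis."""
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# Invented pure spectra (rows) and concentration time profiles (columns)
pure = np.vstack([peak(60, 8), peak(130, 12)])              # (2, 200)
conc = np.c_[np.linspace(1, 0, 50), np.linspace(0, 1, 50)]  # (50, 2)
data = conc @ pure + 0.01 * rng.random((50, 200))           # observed mixed spectra

model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
C_est = model.fit_transform(data)  # estimated concentration profiles
S_est = model.components_          # estimated pure spectra

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Each recovered spectrum should closely match one of the true pure spectra
sims = [max(cosine(S_est[i], pure[j]) for j in range(2)) for i in range(2)]
print([round(s, 3) for s in sims])
```

The non-negativity constraint alone resolves the rotational ambiguity well here because the two components have distinct peaks and opposing concentration trends; heavily overlapped spectra would need further constraints.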
Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G
2014-11-01
Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
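A minimal, non-Bayesian baseline for the apportionment task helps fix ideas: given tracer signatures for three sources, mixture proportions can be estimated by non-negative least squares with a sum-to-one constraint. The tracer values and the "true" proportions below are invented for illustration and are not from the River Blackwater dataset.

```python
import numpy as np
from scipy.optimize import nnls

# Invented tracer means for three sources (rows) over four tracers (columns)
sources = np.array([
    [12.0, 3.5, 0.8, 40.0],  # arable topsoil
    [18.0, 1.2, 2.1, 25.0],  # road verge
    [ 6.0, 5.0, 0.3, 55.0],  # subsurface material
])
true_p = np.array([0.15, 0.10, 0.75])  # assumed "true" proportions
mixture = true_p @ sources             # tracer signature of the SPM mixture

# Non-negative least squares, with a heavily weighted sum-to-one row appended
w = 100.0
A = np.vstack([sources.T, w * np.ones(3)])
b = np.append(mixture, w)
p_est, _ = nnls(A, b)
print(np.round(p_est, 3))
```

A Bayesian version replaces this point estimate with a posterior distribution over the proportions, which is exactly where the prior and error-structure choices discussed in the abstract start to matter.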
Energy Technology Data Exchange (ETDEWEB)
Zhang, Le; Timbie, Peter T. [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Bunn, Emory F. [Physics Department, University of Richmond, Richmond, VA 23173 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, P. M. [Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Wandelt, Benjamin D., E-mail: lzhang263@wisc.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 W Green Street, Urbana, IL 61801 (United States)
2016-01-15
In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
Source Signals Separation and Reconstruction Following Principal Component Analysis
Directory of Open Access Journals (Sweden)
WANG Cheng
2014-02-01
Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical meaning of the blind source separation (BSS) model and of independent component analysis (ICA) is not very clear, and the solution is not unique. To address these disadvantages, a new linear instantaneous mixing model and a novel PCA-based method for separating and reconstructing source signals from observed signals are put forward. The assumption of the new model is that the source signals are statistically uncorrelated, rather than independent, which differs from the traditional BSS model. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compounding matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of the source signals. Based on this theoretical link, the source signal separation and reconstruction problem is reduced to PCA of the observed signals. Theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both the waveform and the amplitude information of uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the mixing matrix is column-orthogonal but not normalized; and the source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal.
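The abstract's central claim, that PCA recovers uncorrelated sources when the mixing matrix is column-orthogonal and normalized, is easy to check numerically. The sketch below uses an invented rotation matrix as the mixing matrix and two synthetic (nearly) uncorrelated sources with distinct variances; these values are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
k = np.arange(n)

# Two zero-mean, (nearly) uncorrelated sources with distinct variances
S = np.c_[np.sin(2 * np.pi * 0.01 * k),
          0.4 * np.sign(np.sin(2 * np.pi * 0.003 * k))]

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # column-orthogonal, normalized
X = S @ A.T + 0.01 * rng.standard_normal((n, 2))

# PCA: eigenvectors of the observation covariance give the un-mixing directions,
# since cov(X) = A diag(var) A^T for orthogonal A
_, V = np.linalg.eigh(np.cov(X, rowvar=False))
S_est = X @ V

# |correlation| between each true source (rows) and each principal component
corr = np.abs(np.corrcoef(np.c_[S, S_est], rowvar=False)[:2, 2:])
print(np.round(corr, 3))
```

Each principal component correlates almost perfectly with one source (up to sign and ordering); repeating the experiment with a non-orthogonal mixing matrix breaks this recovery, as the abstract states.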
Incorporating Open Source Data for Bayesian Classification of Urban Land Use From VHR Stereo Images
Li, Mengmeng; De Beurs, Kirsten M.; Stein, Alfred; Bijker, Wietske
2017-01-01
This study investigates the incorporation of open source data into a Bayesian classification of urban land use from very high resolution (VHR) stereo satellite images. The adopted classification framework starts from urban land cover classification, proceeds to building-type characterization, and
Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei
2017-11-01
In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method with a blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was applied to the engine to isolate interfering noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones simulating the human ears were used to measure the radiated noise signals 1 m away from the diesel engine. First, the binaural sound localization method is adopted to separate noise sources located in different places. Then, for noise sources in the same place, the blind source separation method is used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine are combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine; the combustion noise and the piston slap noise are concentrated around 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
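The coherence-function step of the identification stage can be sketched with scipy.signal.coherence. The two "microphone" signals below are synthetic, a tone near the reported 4350 Hz combustion band plus independent noise at each microphone, not the diesel-engine recordings of the paper.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs = 10000.0
t = np.arange(0, 2, 1 / fs)

# Hypothetical tone near the reported combustion band, plus independent noise
tone = np.sin(2 * np.pi * 4350 * t)
mic1 = tone + 0.5 * rng.standard_normal(t.size)
mic2 = 0.8 * tone + 0.5 * rng.standard_normal(t.size)

# Coherence is near 1 only at frequencies where both microphones share a source
f, Cxy = coherence(mic1, mic2, fs=fs, nperseg=1024)
print(f[np.argmax(Cxy)])  # peak coherence frequency, close to 4350 Hz
```

High coherence at a frequency band indicates a common underlying source at both sensors, which is how coherence helps attribute separated components to physical mechanisms.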
Bayesian Blind Separation and Deconvolution of Dynamic Image Sequences Using Sparsity Priors
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav
2015-01-01
Roč. 34, č. 1 (2015), s. 258-266 ISSN 0278-0062 R&D Projects: GA ČR GA13-29225S Keywords : Functional imaging * Blind source separation * Computer-aided detection and diagnosis * Probabilistic and statistical methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.756, year: 2015 http://library.utia.cas.cz/separaty/2014/AS/tichy-0431090.pdf
Bayesian spatial filters for source signal extraction: a study in the peripheral nerve.
Tang, Y; Wodlinger, B; Durand, D M
2014-03-01
The ability to extract physiological source signals to control various prosthetics offers tremendous therapeutic potential to improve the quality of life of patients suffering from motor disabilities. Regardless of the modality, recordings of physiological source signals are contaminated with noise and interference, along with crosstalk between the sources. These impediments make the task of isolating potential physiological source signals for control difficult. In this paper, a novel Bayesian Source Filter for signal Extraction (BSFE) algorithm for extracting physiological source signals for control is presented. The BSFE algorithm is based on the source localization method Champagne and constructs spatial filters using Bayesian methods that simultaneously maximize the signal-to-noise ratio of the recovered source signal of interest while minimizing crosstalk interference between sources. When evaluated on peripheral nerve recordings obtained in vivo, the algorithm achieved the highest signal-to-noise-interference ratio (7.00 ± 3.45 dB) among the methodologies compared, with an average correlation between the extracted source signal and the original source signal of R = 0.93. The results support the efficacy of the BSFE algorithm for extracting source signals from the peripheral nerve.
Gradient Flow Convolutive Blind Source Separation
DEFF Research Database (Denmark)
Pedersen, Michael Syskind; Nielsen, Chinton Møller
2004-01-01
Experiments have shown that the performance of instantaneous gradient flow beamforming by Cauwenberghs et al. is reduced significantly in reverberant conditions. By expanding the gradient flow principle to convolutive mixtures, separation in a reverberant environment is possible. By use...... of a circular four-microphone array with a radius of 5 mm, and applying convolutive gradient flow instead of just instantaneous gradient flow, experimental results show that an improvement of up to around 14 dB can be achieved for simulated impulse responses and up to around 10 dB for a hearing aid...
Bayesian image processing of data from fuzzy pattern sources
International Nuclear Information System (INIS)
Liang, Z.; Hart, H.
1986-01-01
In some radioisotopic organ imaging applications, a priori or supplementary source information may exist and can be characterized in terms of probability density functions P(φ) of the source elements φ = {φ_j} (where φ_j, j = 1, 2, …, α, is the estimated average photon emission in voxel j per unit time at t = 0). For example, in cardiac imaging studies it is possible to evaluate the radioisotope concentration of the blood filling the cardiac chambers independently as a function of time by peripheral measurement. The blood concentration information in effect serves to limit amplitude uncertainty to the chamber boundary voxels and thus reduces the extent of amplitude ambiguities in the overall cardiac imaging reconstruction. The a priori or supplementary information may more generally be spatial, amplitude-dependent probability distributions P(φ): fuzzy patterns superimposed upon a background
Concepts and Criteria for Blind Quantum Source Separation
Deville, Alain; Deville, Yannick
2016-01-01
Blind Source Separation (BSS) is an active domain of classical information processing. The development of quantum information processing has made possible the appearance of Blind Quantum Source Separation (BQSS). This article discusses some consequences of the existence of the entanglement phenomenon, and of the probabilistic aspect of quantum measurements, for BQSS solutions. It focuses on a pair of spins initially prepared separately in a pure state, and then with an undesired coupling bet...
Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano
2015-04-01
A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow us to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of principal components, but it also shows some deficiencies, since PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to provide a physical meaning to the possible different sources. PCA fails in the BSS problem because it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough, and it has been proven that the BSS problem can be tackled by imposing independence on the components. Independent component analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources
Underdetermined Blind Source Separation in Echoic Environments Using DESPRIT
Directory of Open Access Journals (Sweden)
Melia Thomas
2007-01-01
Full Text Available The DUET blind source separation algorithm can demix an arbitrary number of speech signals using anechoic mixtures of the signals. DUET, however, is limited in that it relies upon source signals that are mixed in an anechoic environment and that are sufficiently sparse that only one source can be assumed active at any given time-frequency point. The DUET-ESPRIT (DESPRIT) blind source separation algorithm extends DUET to situations where sparsely echoic mixtures of an arbitrary number of sources overlap in time-frequency. This paper outlines the development of the DESPRIT method and demonstrates its properties through various experiments conducted on synthetic and real-world mixtures.
Blind separation of more sources than sensors in convolutive mixtures
DEFF Research Database (Denmark)
Olsson, Rasmus Kongsgaard; Hansen, Lars Kai
2006-01-01
We demonstrate that blind separation of more sources than sensors can be performed based solely on the second-order statistics of the observed mixtures. This is a generalization of well-known robust algorithms that are suited for an equal number of sources and sensors. It is assumed that the sources...
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A
2017-01-01
The estimation of EEG generating sources constitutes an inverse problem (IP) in neuroscience. This problem is ill posed due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake electrophysiology source imaging. Structured sparsity priors can be attained through combinations of L1-norm-based and L2-norm-based constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it uses the computationally intensive Monte Carlo/Expectation-Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a structured sparse Bayesian learning algorithm that combines empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and the hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than the classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods
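The ENET penalty itself, an L1 sparsity term plus an L2 smoothing term, can be illustrated with scikit-learn's ElasticNet on an invented underdetermined "lead field". This is a plain penalized-regression sketch, not the authors' structured sparse Bayesian learning algorithm; the problem sizes and regularization values are assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(4)
n_sensors, n_sources = 60, 200

# Invented "lead field" and a sparse source vector with two nonzero patches
L = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[40:45] = 1.0
x_true[120:125] = -0.8
y = L @ x_true + 0.05 * rng.standard_normal(n_sensors)

# ENET: the L1 term promotes sparsity, the L2 term stabilizes correlated patches
enet = ElasticNet(alpha=0.01, l1_ratio=0.7, max_iter=50000)
enet.fit(L, y)
x_est = enet.coef_

support = np.flatnonzero(np.abs(x_est) > 0.1)
print(support)
```

Despite having far fewer sensors than candidate sources, the recovered support concentrates on the two true patches; the Bayesian formulations discussed above aim to learn alpha and l1_ratio from the data instead of fixing them by hand.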
Single channel blind source separation based on ICA feature extraction
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
A new technique is proposed to solve the blind source separation (BSS) problem given only a single-channel observation. The basis functions and the densities of the coefficients of the source signals learned by ICA are used as prior knowledge. Based on the learned prior information, learning rules for single-channel BSS are presented that maximize the joint log likelihood of the mixed sources to obtain the source signals from the single observation, in which the posterior density of the given measurements is maximized. The experimental results exhibit successful separation for mixtures of speech and music signals.
International Nuclear Information System (INIS)
Kopka, P; Wawrzynczak, A; Borysiewicz, M
2015-01-01
In many areas of application, a central problem is the solution of an inverse problem, especially the precise estimation of the unknown model parameters governing the underlying dynamics of a physical system. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the sought parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamical systems, and sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, and start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where model parameters best fitting the observed data are sought. (paper)
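The core ABC idea, draw parameters from the prior, simulate, and keep draws whose simulated data fall within a tolerance ε of the observations, can be sketched for a one-dimensional toy source-location problem. The dispersion model, sensor layout and tolerance below are invented, and this is plain rejection ABC rather than the sequential S-ABC of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(x_src, sensors, strength=10.0):
    """Toy dispersion model: concentration decays with distance to the source."""
    return strength / (1.0 + (sensors - x_src) ** 2)

sensors = np.linspace(0, 10, 8)
true_x = 3.4
observed = simulate(true_x, sensors) + 0.1 * rng.standard_normal(sensors.size)

# Rejection ABC: sample the prior, keep draws whose simulated concentrations
# lie within epsilon of the observed ones
draws = rng.uniform(0, 10, 100_000)
dist = np.linalg.norm(simulate(draws[:, None], sensors) - observed, axis=1)
posterior = draws[dist < 0.5]

print(round(posterior.mean(), 2), posterior.size)
```

The accepted draws approximate the posterior over the source location; sequential variants tighten ε over several rounds while reusing accepted samples, which greatly reduces the number of wasted simulations.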
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, often considering geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high, and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in a single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org) and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we
Comparison of methods for separating vibration sources in rotating machinery
Klein, Renata
2017-12-01
Vibro-acoustic signatures are widely used for diagnostics of rotating machinery. Vibration-based automatic diagnostic systems need to achieve good separation between signals generated by different sources. The separation task may be challenging, since the effects of the different vibration sources often overlap. In particular, there is a need to separate signals related to the natural frequencies of the structure from signals resulting from the rotating components (signal whitening), as well as a need to separate signals generated by asynchronous components, like bearings, from signals generated by cyclo-stationary components, like gears. Several methods have been proposed to achieve the above separation tasks. The present study compares some of these methods. The paper also presents a new whitening method, Adaptive Clutter Separation, as well as a new efficient algorithm, dephase, which separates asynchronous from cyclo-stationary signals. For whitening, the study compares liftering of the high quefrencies with adaptive clutter separation. For separating the asynchronous from the cyclo-stationary signals, the study compares liftering in the quefrency domain with dephase. The methods are compared using both simulated signals and real data.
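Liftering in the quefrency domain, one of the approaches compared above, can be sketched as follows. The signal is an invented toy, periodic impulses (a rotating component) filtered through a structural resonance, not machine data: the smooth resonance envelope lives at low quefrencies, while the repetition period shows up as rahmonic peaks that a lifter can retain or remove.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 8192
n = fs
t = np.arange(n) / fs

# Toy signal: impulses every 256 samples (a rotating component at 32 Hz)
# passed through a structural resonance near 900 Hz
impulses = np.zeros(n)
impulses[::256] = 1.0
resonance = np.exp(-200 * t) * np.sin(2 * np.pi * 900 * t)
x = np.convolve(impulses, resonance)[:n] + 0.01 * rng.standard_normal(n)

# Real cepstrum: low quefrencies carry the smooth resonance envelope,
# rahmonic peaks at multiples of 256 samples carry the periodic excitation
cep = np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) + 1e-12))
q_peak = np.argmax(cep[50:n // 2]) + 50
print(q_peak)  # a multiple of 256 samples, i.e. of the 1/32 s repetition period
```

Zeroing the low-quefrency coefficients before transforming back flattens the resonance envelope (whitening), while the rahmonics preserve the rotating-component information; which part is liftered away depends on which separation task is being performed.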
Source separation of household waste: A case study in China
International Nuclear Information System (INIS)
Zhuang Ying; Wu Songwei; Wang Yunlong; Wu Weixiang; Chen Yingxu
2008-01-01
A pilot program concerning source separation of household waste was launched in Hangzhou, capital city of Zhejiang province, China. Detailed investigations of the composition and properties of household waste in the experimental communities revealed that high water content and a high percentage of food waste are the main limiting factors in the recovery of recyclables, especially paper, from household waste, and the main contributors to the high cost and low efficiency of waste disposal. On the basis of this investigation, a novel source separation method, in which household waste is classified as food waste, dry waste and harmful waste, was proposed and implemented in four selected communities. In addition, a corresponding household waste management system involving all stakeholders, a recovery system, and a mechanical dehydration system for food waste were established to promote the source separation activity. Performance and questionnaire survey results showed that the active support and investment of a real estate company and a community residential committee play important roles in enhancing public participation and awareness of the importance of waste source separation. In comparison with the conventional mixed collection and transportation system for household waste, the established source separation and management system is cost-effective. It could be extended to the entire city and used by other cities in China as a source of reference
Extended nonnegative tensor factorisation models for musical sound source separation.
FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene
2008-01-01
Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
A Bayesian approach to quantify the contribution of animal-food sources to human salmonellosis
DEFF Research Database (Denmark)
Hald, Tine; Vose, D.; Wegener, Henrik Caspar
2004-01-01
Based on the data from the integrated Danish Salmonella surveillance in 1999, we developed a mathematical model for quantifying the contribution of each of the major animal-food sources to human salmonellosis. The model was set up to calculate the number of domestic and sporadic cases caused...... salmonellosis was also included. The joint posterior distribution was estimated by fitting the model to the reported number of domestic and sporadic cases per Salmonella type in a Bayesian framework using Markov Chain Monte Carlo simulation. The number of domestic and sporadic cases was obtained by subtracting.......8-10.4%) of the cases, respectively. Taken together, imported foods were estimated to account for 11.8% (95% CI: 5.0-19.0%) of the cases. Other food sources considered had only a minor impact, whereas 25% of the cases could not be associated with any source. This approach of quantifying the contribution of the various...
Application of hierarchical Bayesian unmixing models in river sediment source apportionment
Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice
2016-04-01
Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between the ecological 'prey' and 'consumer' concepts and river basin sediment 'sources' and sediment 'mixtures', to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined the advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice, with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling
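Stripped of its hierarchy and uncertainty handling, the unmixing step at the heart of such models reduces to finding source proportions whose tracer mixture best matches the sediment sample. A toy two-source sketch with invented tracer concentrations; a real model such as MixSIAR would also propagate source variability and fit hierarchical effects:

```python
# Toy sediment unmixing: grid search over the proportion f of source 1
# (and 1-f of source 2) minimising squared tracer mismatch.

def unmix(source_a, source_b, mixture, step=0.01):
    """Return the proportion of source_a that best explains the mixture."""
    best_f, best_err = 0.0, float("inf")
    for i in range(int(round(1 / step)) + 1):
        f = i * step
        err = sum((f * a + (1 - f) * b - m) ** 2
                  for a, b, m in zip(source_a, source_b, mixture))
        if err < best_err:
            best_f, best_err = f, err
    return best_f

# Mean concentrations of two geochemical tracers per source (illustrative)
topsoil = [10.0, 4.0]
channel = [2.0, 12.0]
sample  = [6.0, 8.0]   # here, exactly a 50:50 mixture

f = unmix(topsoil, channel, sample)
```

A Bayesian treatment replaces this point estimate with a posterior distribution over f, which is what allows credible intervals on source apportionment.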
Bayesian model selection of template forward models for EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-06-01
Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.
Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1
International Nuclear Information System (INIS)
Knochenhauer, M.; Swaling, V.H.; Alfheim, P.
2012-09-01
The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)
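A toy illustration of the BBN mechanics described above: observations update the probabilities of hidden accident states, which in turn weight pre-calculated source terms. The network, states and probabilities below are invented; a real RASTEP model has many more nodes and a full source term module.

```python
# Tiny Bayesian belief network evaluated by enumeration:
# hidden accident state -> observable symptom, and state -> source term.

p_state = {"core_intact": 0.7, "core_damage": 0.3}          # prior
p_obs = {"core_intact": {"sprays_on": 0.9, "sprays_off": 0.1},
         "core_damage": {"sprays_on": 0.4, "sprays_off": 0.6}}
p_release = {"core_intact": {"minor": 0.99, "large": 0.01},
             "core_damage": {"minor": 0.60, "large": 0.40}}

def posterior_release(observation):
    """P(source term | observation), marginalising over the hidden state."""
    joint = {s: p_state[s] * p_obs[s][observation] for s in p_state}
    z = sum(joint.values())
    post_state = {s: v / z for s, v in joint.items()}
    return {r: sum(post_state[s] * p_release[s][r] for s in p_state)
            for r in ("minor", "large")}

post = posterior_release("sprays_off")
```

The output is exactly the kind of object the abstract describes: a set of possible source terms with associated probabilities, updated as plant observations arrive.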
Carbon footprint of urban source separation for nutrient recovery.
Kjerstadius, H; Bernstad Saraiva, A; Spångberg, J; Davidsson, Å
2017-07-15
Source separation systems for the management of domestic wastewater and food waste have been suggested as more sustainable sanitation systems for urban areas. The present study used an attributional life cycle assessment to investigate the carbon footprint and potential for nutrient recovery of two sanitation systems for a hypothetical urban area in Southern Sweden. The systems represented a typical Swedish conventional system and a possible source separation system with increased nutrient recovery. The assessment included the management chain from household collection, transport, treatment and final return of nutrients to agriculture or disposal of the residuals. The results for carbon footprint and nutrient recovery (phosphorus and nitrogen) showed that the source separation system could increase nutrient recovery (0.30-0.38 kg P capita⁻¹ year⁻¹ and 3.10-3.28 kg N capita⁻¹ year⁻¹), while decreasing the carbon footprint (-24 to -58 kg CO₂-eq. capita⁻¹ year⁻¹), compared to the conventional system. The nutrient recovery was increased by the use of struvite precipitation and ammonium stripping at the wastewater treatment plant. The carbon footprint decreased mainly due to the increased biogas production, increased replacement of mineral fertilizer in agriculture and lower emissions of nitrous oxide from wastewater treatment. In conclusion, the study showed that source separation systems could potentially be used to increase nutrient recovery from urban areas, while decreasing the climate impact. Copyright © 2017 Elsevier Ltd. All rights reserved.
Underdetermined Blind Audio Source Separation Using Modal Decomposition
Directory of Open Access Journals (Sweden)
Abdeldjalil Aïssa-El-Bey
2007-03-01
This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source) using vector clustering. For the signal analysis, two existing algorithms are considered and compared: namely, the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using the ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.
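The synthesis step of such a modal approach can be sketched as follows, assuming the analysis stage (EMD or ESPRIT) has already supplied the modal frequencies and dampings, so that only the amplitudes remain to be estimated by least squares before components are grouped into sources. All signal parameters are invented for illustration.

```python
import math

# Two damped-sinusoid "modes" with known frequency and damping (as if
# extracted by the analysis stage); recover their amplitudes from the
# mixture by solving the 2x2 least-squares normal equations.

N, fs = 400, 1000.0
def atom(freq, damp):
    return [math.exp(-damp * n / fs) * math.cos(2 * math.pi * freq * n / fs)
            for n in range(N)]

a1, a2 = atom(50.0, 8.0), atom(180.0, 15.0)
mix = [2.0 * x + 1.5 * y for x, y in zip(a1, a2)]   # observed mixture

g11 = sum(x * x for x in a1); g22 = sum(y * y for y in a2)
g12 = sum(x * y for x, y in zip(a1, a2))
b1 = sum(x * m for x, m in zip(a1, mix))
b2 = sum(y * m for y, m in zip(a2, mix))
det = g11 * g22 - g12 * g12
c1 = (b1 * g22 - b2 * g12) / det   # amplitude of mode 1
c2 = (g11 * b2 - g12 * b1) / det   # amplitude of mode 2

# The paper's vector clustering degenerates here to grouping by frequency:
# the low-frequency mode forms source A, the high-frequency mode source B.
source_a = [c1 * x for x in a1]
source_b = [c2 * y for y in a2]
```

With many overlapping modes per source, the grouping step is where the real work lies; this sketch only shows why separating "more sources than sensors" is possible once the signal is expressed on modal atoms.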
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
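The search described above can be caricatured in a homogeneous medium, where the slowness vector observed at the array points along the source-to-array direction; a Gaussian error model then scores each candidate location, and the best-scoring grid node approximates the maximum likelihood source. Geometry, velocity and the "observed" slowness below are invented, and real applications use layered velocity models and noisy frequency-slowness estimates.

```python
import math

V = 2000.0                 # m/s, homogeneous velocity (assumption)
ARRAY = (0.0, 0.0, 0.0)    # array reference point

def predicted_slowness(src):
    """Slowness vector at the array for a source at src (straight rays)."""
    dx, dy, dz = (ARRAY[i] - src[i] for i in range(3))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / (V * r), dy / (V * r), dz / (V * r))

def log_likelihood(s_pred, s_obs, sigma=2e-5):
    """Gaussian error model on the slowness-vector mismatch."""
    return -sum((a - b) ** 2 for a, b in zip(s_pred, s_obs)) / (2 * sigma ** 2)

true_src = (300, 400, -150)                 # metres; z < 0 is depth
s_obs = predicted_slowness(true_src)        # noise-free "observation"

best = max(((x, y, z)
            for x in range(50, 601, 50)
            for y in range(50, 601, 50)
            for z in range(-300, -49, 50)),
           key=lambda c: log_likelihood(predicted_slowness(c), s_obs))
```

With noise added to s_obs, the set of grid nodes whose likelihood exceeds a threshold traces out a confidence region analogous to the 100-300 m depth volume reported in the study.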
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
Energy Technology Data Exchange (ETDEWEB)
Klumpp, John [Colorado State University, Department of Environmental and Radiological Health Sciences, Molecular and Radiological Biosciences Building, Colorado State University, Fort Collins, Colorado, 80523 (United States)
2013-07-01
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
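The two operating modes can be sketched with Poisson likelihoods: "learning mode" estimates a background rate from past counting intervals, and "detection mode" compares the likelihood of a new measurement under background alone versus background plus a hypothesised source. This is a deliberately simple stand-in for the proposed nonparametric, multi-channel machinery; all rates and the prior are illustrative.

```python
import math, random

random.seed(1)

# Learning mode: accumulate background counts per counting interval
# (random stand-in data here; a real system would use measurements).
background_counts = [random.randint(8, 14) for _ in range(500)]
bg_rate = sum(background_counts) / len(background_counts)

def log_poisson(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def p_source_present(observed, source_rate, prior=0.01):
    """Posterior probability that a source adding source_rate is present."""
    log_b = log_poisson(observed, bg_rate)
    log_s = log_poisson(observed, bg_rate + source_rate)
    odds = (prior / (1 - prior)) * math.exp(log_s - log_b)
    return odds / (1 + odds)

p_quiet = p_source_present(observed=11, source_rate=10.0)  # near background
p_alarm = p_source_present(observed=35, source_rate=10.0)  # clear excess
```

The abstract's key point is that bg_rate need not be a single number: keeping the full empirical distribution, per energy channel and time of day, is what lets the decision threshold adapt to the detector's actual environment.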
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how they are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Evaluating source separation of plastic waste using conjoint analysis.
Nakatani, Jun; Aramaki, Toshiya; Hanaki, Keisuke
2008-11-01
Using conjoint analysis, we estimated the willingness to pay (WTP) of households for source separation of plastic waste and the improvement of related environmental impacts, the residents' loss of life expectancy (LLE), the landfill capacity, and the CO2 emissions. Unreliable respondents were identified and removed from the sample based on their answers to follow-up questions. It was found that the utility associated with reducing LLE and with the landfill capacity were both well expressed by logarithmic functions, but that residents were indifferent to the level of CO2 emissions even though they approved of CO2 reduction. In addition, residents derived utility from the act of separating plastic waste, irrespective of its environmental impacts; that is, they were willing to practice the separation of plastic waste at home in anticipation of its "invisible effects", such as the improvement of citizens' attitudes toward solid waste issues.
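Conjoint analysis of this kind typically rests on a discrete-choice (logit) model: the probability of choosing one waste-management alternative over another follows from the difference in their utilities, and WTP is the fee change that exactly offsets an attribute change. The coefficients below are invented, not the study's estimates.

```python
import math

B_FEE = -0.004   # utility per unit of monthly fee (assumed coefficient)
B_SEP = 0.8      # utility of having plastic source separation (assumed)

def utility(fee, separation):
    return B_FEE * fee + B_SEP * (1 if separation else 0)

def p_choose(option_a, option_b):
    """Logit probability of choosing option A over option B."""
    du = utility(*option_a) - utility(*option_b)
    return 1 / (1 + math.exp(-du))

# WTP for an attribute = its utility / marginal utility of money
wtp = -B_SEP / B_FEE

# A costlier plan with separation vs a cheaper one without
p = p_choose((500, True), (400, False))
```

Logarithmic utilities for continuous attributes, as the study found for life-expectancy loss and landfill capacity, would replace the linear terms above with a * ln(x) terms; the WTP calculation then depends on the attribute's current level.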
Blind source separation advances in theory, algorithms and applications
Wang, Wenwu
2014-01-01
Blind Source Separation reports new results from research on blind source separation (BSS). The book collects novel research ideas and tutorial material on BSS, independent component analysis (ICA), artificial intelligence and signal processing applications. Furthermore, research results previously scattered across many journals and conferences worldwide are methodically edited and presented in a unified form. The book is likely to be of interest to university researchers, R&D engineers and graduate students in computer science and electronics who wish to learn the core principles, methods, algorithms, and applications of BSS. Dr. Ganesh R. Naik works at the University of Technology, Sydney, Australia; Dr. Wenwu Wang works at the University of Surrey, UK.
Energy Technology Data Exchange (ETDEWEB)
Blidholm, O; Wiklund, S E [AaF-Energikonsult (Sweden); Bauer, A C [Energikonsult A. Bauer (Sweden)
1997-02-01
The basic idea of this project is to study the possibilities of using source-separated combustible material for energy conversion in conventional solid-fuel boilers (i.e. not municipal waste incineration plants). The project was carried out in three phases. During phases 1 and 2, a number of fuel analyses of different fractions were carried out. During phase 3, two combustion tests were performed: (1) in a grate-fired boiler equipped with a cyclone, an electrostatic precipitator and a flue gas condenser, and (2) in a bubbling fluidized bed boiler with an electrostatic precipitator and a flue gas condenser. In the tests, source-separated paper and plastic packaging was co-fired with biomass fuels, with a packaging share of approximately 15%. This study reports the results of phase 3 and the conclusions of the whole project. The technical conditions for using packaging as fuel are good: the technology is available for shredding both paper and plastic packaging, and the material can be co-fired with biomass. The economics of using source-separated packaging for energy conversion can be very favourable, but can also present obstacles; the outcome depends largely on how the fuel is collected, transported, reduced in size and handled at the combustion plant. The combustion tests show that the environmental conditions for using source-separated packaging for energy conversion are good: emissions of heavy metals into the atmosphere are very low, well below the emission standards for waste incineration plants. 35 figs, 13 tabs, 8 appendices
Roostaee, M.; Deng, Z.
2017-12-01
State environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairment. Spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, due to limited monetary resources and the large number of waterbodies, monitoring stations are typically sparse, with intermittent periods of data collection. Scarcity of water quality data is hence a major obstacle in addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators, such as Dissolved Oxygen (DO) concentration, for Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been on the 303(d) list for DO impairment in Louisiana Water Quality Inventory Reports since 2014, due to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis by incorporating the space/time distribution and the uncertainty in available measured soft and hard data. The model is used to estimate DO concentration at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers of the watershed were then used to determine the practices and changes that led to low DO concentration in the identified CSAs. Preliminary results revealed that cultivation of corn and soybean, as well as urban runoff, are the main contributing sources of low dissolved oxygen in Turkey Creek Watershed.
Shaw, Simon C.; Goldstein, Michael
2017-01-01
We explore the effect of finite population sampling in design problems with many variables cross-classified in many ways. In particular, we investigate designs where we wish to sample individuals belonging to different groups for which the underlying covariance matrices are separable between groups and variables. We exploit the generalised conditional independence structure of the model to show how the analysis of the full model can be reduced to an interpretable series of lower dimensional p...
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
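The interferometric core of the VRS method is cross-correlation between receiver recordings: the correlation peak recovers the inter-receiver traveltime, from which virtual-source recordings are built. A single-mode, noise-free sketch follows (so no spurious cross-mode terms arise, which is precisely the situation mode separation is meant to restore); wavelet and timing parameters are invented.

```python
import math

def ricker(t, f0=25.0):
    """Ricker wavelet, a common synthetic seismic pulse."""
    a = (math.pi * f0 * t) ** 2
    return (1 - 2 * a) * math.exp(-a)

dt, n = 0.002, 500
trace1 = [ricker((i - 100) * dt) for i in range(n)]   # arrival at sample 100
trace2 = [ricker((i - 160) * dt) for i in range(n)]   # arrives 60 samples later

def crosscorr_lag(x, y, max_lag):
    """Lag (in samples) at which y best matches a shifted copy of x."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        v = sum(x[i] * y[i + lag] for i in range(len(x))
                if 0 <= i + lag < len(y))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

lag = crosscorr_lag(trace1, trace2, max_lag=100)   # inter-receiver delay
```

If each trace contained two surface-wave modes with different velocities, the correlation would also show peaks at the cross-mode delays; applying the correlation per mode, as the proposed method does, removes those spurious events.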
Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report
International Nuclear Information System (INIS)
Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K.
2013-10-01
The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, composition, timing, and release path of released radionuclides). The output is a set of possible source terms with associated probabilities. One major issue has been the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method for including experts' beliefs in a systematic way when defining the conditional probability tables (CPTs) of the BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN. (Author)
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2014-12-01
A critical point in the analysis of ground displacement time series is the development of data-driven methods that make it possible to discern and characterize the different sources generating the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies. Indeed, PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. The uncorrelatedness condition is usually not strong enough, and it has been shown that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series. First, we use vbICA on synthetic data that simulate a seismic cycle
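The gap between decorrelation (PCA) and independence (ICA) described above can be seen in a textbook sketch: for an orthogonal mixture of two independent sub-Gaussian sources, scanning the demixing angle for extremal kurtosis recovers the sources, while any rotation would leave them merely uncorrelated. The study's vbICA models source pdfs far more flexibly than this; the sketch also assumes an orthogonal mixing matrix so the whitening step can be skipped.

```python
import math, random

random.seed(0)
n = 8000
s1 = [random.uniform(-1, 1) for _ in range(n)]   # independent sources,
s2 = [random.uniform(-1, 1) for _ in range(n)]   # sub-Gaussian (kurt < 0)

theta = 0.5   # mixing rotation angle (orthogonal mixing assumed)
x1 = [math.cos(theta) * a - math.sin(theta) * b for a, b in zip(s1, s2)]
x2 = [math.sin(theta) * a + math.cos(theta) * b for a, b in zip(s1, s2)]

def kurt(u):
    """Excess kurtosis (sources are zero-mean by construction)."""
    m2 = sum(v * v for v in u) / len(u)
    m4 = sum(v ** 4 for v in u) / len(u)
    return m4 / (m2 * m2) - 3.0

def demix_score(phi):
    y = [math.cos(phi) * a + math.sin(phi) * b for a, b in zip(x1, x2)]
    return abs(kurt(y))

# |kurtosis| is extremal when the projection is a pure source, so the
# best demixing angle over a grid should land near theta.
phi_best = max((i * math.pi / 360 for i in range(180)), key=demix_score)
```

PCA applied to x1, x2 would return some decorrelated rotation of the data with no preference for phi near theta; it is the non-Gaussianity criterion that pins down the sources.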
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be distinguished from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels, and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.
2015-05-01
The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
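The two-end-member core of such a BMC mixing model can be sketched in a few lines: draw end-member compositions from their uncertainty distributions, solve the mass balance for the mixing fraction, and keep the physically meaningful solutions. The δ18O values below are invented for illustration and are not the Athabasca, Greenland, or Hawaiian data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical end-member d18O signatures (mean, sd in per mil) and one
# mixed-sample value -- illustrative numbers only.
snow_mu, snow_sd = -22.0, 1.5
ice_mu, ice_sd = -17.0, 1.0
sample = -19.0

# Monte Carlo over end-member uncertainty: draw compositions, solve the
# two-source mass balance f*snow + (1 - f)*ice = sample for f, and keep
# fractions in [0, 1]. The spread of f quantifies the error induced by
# end-member variability.
a = rng.normal(snow_mu, snow_sd, 100_000)
b = rng.normal(ice_mu, ice_sd, 100_000)
f = (sample - b) / (a - b)
f = f[(f >= 0.0) & (f <= 1.0)]

lo, hi = np.percentile(f, [2.5, 97.5])
print(f"snow fraction: {f.mean():.2f} (95% interval {lo:.2f}-{hi:.2f})")
```

The interval around the point estimate is exactly the information a deterministic mixing calculation would miss.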
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
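The reason Gaussian components defeat higher-order-statistics ICA can be seen directly from sample kurtosis: non-Gaussian sources have nonzero excess kurtosis, while a Gaussian source is invisible to this statistic, which is why the abstract's model hands the Gaussian subspace to PCA instead. The draws below are synthetic and unrelated to the iris or craniometric data sets.

```python
import numpy as np
from scipy.stats import kurtosis  # Fisher definition: 0 for a Gaussian

rng = np.random.default_rng(1)
n = 500_000

laplace = rng.laplace(size=n)      # super-Gaussian, excess kurtosis ~ 3
uniform = rng.uniform(-1, 1, n)    # sub-Gaussian, excess kurtosis ~ -1.2
gauss = rng.normal(size=n)         # Gaussian, excess kurtosis ~ 0

# Kurtosis-based contrasts can separate the first two source types but
# give no rotation information within a Gaussian subspace.
for name, s in [("laplace", laplace), ("uniform", uniform), ("gauss", gauss)]:
    print(name, round(kurtosis(s), 2))
```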
Blind separation of positive sources by globally convergent gradient search.
Oja, Erkki; Plumbley, Mark
2004-09-01
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
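A minimal sketch of the projected-gradient idea follows, under two stated assumptions: whitening is computed from the covariance but applied to the uncentered data (so an orthogonal rotation can yield nonnegative outputs), and a fixed step size with SVD re-orthogonalization stands in for the paper's Stiefel-manifold flow. The 2x2 mixing matrix and exponential sources are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Nonnegative, well-grounded sources (nonzero density at zero).
s = rng.exponential(1.0, (2, n))
A = np.array([[1.0, 0.4], [0.3, 1.0]])   # toy mixing matrix
x = A @ s

# Whiten to unit covariance, keeping the mean (assumption of this sketch).
d, E = np.linalg.eigh(np.cov(x))
V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
z = V @ x

# Gradient descent on J(W) = E||min(Wz, 0)||^2, whose minimum coincides
# with nonnegative outputs; each step projects W back onto the set of
# orthogonal matrices via SVD.
def cost(W):
    return float(np.mean(np.minimum(W @ z, 0.0) ** 2))

W = np.eye(2)
j0 = cost(W)
for _ in range(500):
    grad = 2.0 * np.minimum(W @ z, 0.0) @ z.T / n
    U, _, Vt = np.linalg.svd(W - 0.2 * grad)
    W = U @ Vt                           # orthogonality constraint
j1 = cost(W)
print(j1 < j0)
```

Driving the negative-output power toward zero rotates the whitened data into a positive permutation of the sources, which is the letter's central claim.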
International Nuclear Information System (INIS)
George, J.S.; Schmidt, D.M.; Wood, C.C.
1999-01-01
We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface, within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for the given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented
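The sample-the-posterior strategy can be illustrated with a one-parameter toy forward model and random-walk Metropolis-Hastings; the Gaussian "source" below is invented and is far simpler than the variable-region cortical model described, but the mechanics are the same: propose, score against the data, accept or reject, and draw inferences from the resulting sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: a source at location t produces a Gaussian bump of
# "field" values at 20 sensors (hypothetical geometry).
def g(t):
    return np.exp(-0.5 * (np.linspace(-1, 1, 20) - t) ** 2 / 0.1)

true_t = 0.3
data = g(true_t) + rng.normal(0, 0.05, 20)

def log_post(t):
    if not -1.0 <= t <= 1.0:                  # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * np.sum((data - g(t)) ** 2) / 0.05 ** 2

# Random-walk Metropolis-Hastings over the source location.
samples, t, lp = [], 0.0, log_post(0.0)
for _ in range(20_000):
    prop = t + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        t, lp = prop, lp_prop
    samples.append(t)

post = np.array(samples[5_000:])              # drop burn-in
print(f"posterior mean location: {post.mean():.2f}")
```

The sample approximates the full posterior, so "probable features across solutions" are simply statistics of `post`.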
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.
López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
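Evidence-based comparison of source priors can be illustrated with a toy linear-Gaussian model, where the log marginal likelihood is available in closed form. The lead field, noise level, and the two candidate prior covariances below are invented, and this closed-form score is only a stand-in for SPM's variational Free energy approximation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model y = L j + noise with Gaussian priors on the sources j.
n_sens, n_src = 10, 4
L = rng.normal(size=(n_sens, n_src))          # hypothetical lead field
j_true = np.array([2.0, 0.0, 0.0, 0.0])       # one active "source"
y = L @ j_true + rng.normal(0, 0.1, n_sens)

def log_evidence(prior_cov):
    # Marginal likelihood of y: zero-mean Gaussian with covariance
    # L*C*L' + noise covariance (exact for linear-Gaussian models).
    S = L @ prior_cov @ L.T + 0.1 ** 2 * np.eye(n_sens)
    _, logdet = np.linalg.slogdet(2 * np.pi * S)
    return -0.5 * (logdet + y @ np.linalg.solve(S, y))

broad = np.eye(n_src)                          # minimum-norm-like prior
sparse = np.diag([1.0, 1e-4, 1e-4, 1e-4])      # prior matching the sparsity
print(log_evidence(sparse) > log_evidence(broad))
```

The sparse prior wins because the broad prior pays an Occam penalty (a larger covariance determinant) without fitting the data any better, which is the logic behind scoring MSP against Minimum Norm and LORETA by Free energy.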
Vermicomposting of source-separated human faeces for nutrient recycling.
Yadav, Kunwar D; Tare, Vinod; Ahammed, M Mansoor
2010-01-01
The present study examined the suitability of vermicomposting technology for processing source-separated human faeces. Since the earthworm species Eisenia fetida could not survive in fresh faeces, modification in the physical characteristics of faeces was necessary before earthworms could be introduced to faeces. A preliminary study with six different combinations of faeces, soil and bulking material (vermicompost) in different layers was conducted to find out the best condition for biomass growth and reproduction of earthworms. The results indicated that SVFV combination (soil, vermicompost, faeces and vermicompost - bottom to top layers) was the best for earthworm biomass growth indicating the positive role of soil layer in earthworm biomass growth. Further studies with SVFV and VFV combinations, however, showed that soil layer did not enhance vermicompost production rate. Year-long study conducted with VFV combination to assess the quality and quantity of vermicompost produced showed an average vermicompost production rate of 0.30 kg-cast/kg-worm/day. The vermicompost produced was mature as indicated by low dissolved organic carbon (2.4 +/- 0.43 mg/g) and low oxygen uptake rate (0.15 +/- 0.09 mg O(2)/g VS/h). Complete inactivation of total coliforms was noted during the study, which is one of the important objectives of human faeces processing. Results of the study thus indicated the potential of vermicomposting for processing of source-separated human faeces.
International Nuclear Information System (INIS)
Kim, Joo Yeon; Jang, Han Ki; Lee, Jai Ki
2005-01-01
Bayesian methodology is appropriate for use in PRA because subjective knowledge as well as objective data can be applied in the assessment. In this study, radiological risk based on Bayesian methodology is assessed for the loss of a source in field radiography. The exposure scenario for the lost source presented by the U.S. NRC is reconstructed by considering the domestic situation, and Bayes' theorem is applied to updating the failure probabilities of safety functions. In the updating of failure probabilities, the 5% Bayes credible intervals using the Jeffreys prior distribution are lower than those using a vague prior distribution, indicating that the Jeffreys prior distribution is appropriate in risk assessment for systems having very low failure probabilities. The mean of the expected annual dose for the public based on Bayesian methodology is higher than the dose based on classical methodology because the means of the updated probabilities are higher than the classical probabilities. Databases for radiological risk assessment are sparse domestically. In summary, Bayesian methodology can be applied as a useful alternative for risk assessment, and this study will contribute to risk-informed regulation in the field of radiation safety
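The conjugate form of this kind of update can be sketched with a Beta-Binomial model, contrasting the Jeffreys prior Beta(0.5, 0.5) with a vague uniform Beta(1, 1); the failure counts below are illustrative, not the study's field-radiography data.

```python
from scipy.stats import beta

# Observed (hypothetical) safety-function performance: 1 failure in 100
# demands. The Beta prior is conjugate to this Binomial likelihood, so
# the posterior is Beta(a0 + failures, b0 + successes).
failures, trials = 1, 100

bounds = {}
for name, (a0, b0) in {"jeffreys": (0.5, 0.5), "vague": (1.0, 1.0)}.items():
    a, b = a0 + failures, b0 + trials - failures   # posterior parameters
    bounds[name] = beta.ppf(0.05, a, b)            # 5% credible bound
    print(name, round(bounds[name], 4))
```

Consistent with the abstract, the Jeffreys posterior puts more mass near zero, so its 5% credible bound falls below the vague-prior bound for rare failures.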
Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.
2012-12-01
Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
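The realization-based idea can be sketched (without the geostatistical conditioning on measured data) by Monte Carlo integration of mass flux over a control plane: each equally likely realization draws cell-wise conductivity and concentration plus a gradient, and the spread across realizations quantifies the discharge uncertainty. All distributions below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Control plane discretized into cells; one row per realization.
n_real, n_cells, area = 10_000, 50, 0.5            # cell area in m^2
logK = rng.normal(-4.0, 0.5, (n_real, n_cells))    # log10 K [m/s]
grad = rng.normal(0.01, 0.002, (n_real, 1))        # hydraulic gradient [-]
conc = rng.lognormal(1.0, 0.8, (n_real, n_cells))  # concentration [g/m^3]

# Darcy flux times concentration times area, summed over the plane.
flux = (10.0 ** logK) * grad * conc * area         # mass flux per cell [g/s]
discharge = flux.sum(axis=1)                       # one discharge per realization

lo, med, hi = np.percentile(discharge, [2.5, 50.0, 97.5])
print(f"median {med:.2e} g/s, 95% band [{lo:.2e}, {hi:.2e}]")
```

Conditioning each realization on conductivity, head, and concentration measurements, as the abstract describes, would narrow this band while honouring the data.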
Blind Source Separation of Event-Related EEG/MEG.
Metsomaa, Johanna; Sarvas, Jukka; Ilmoniemi, Risto Juhani
2017-09-01
Blind source separation (BSS) can be used to decompose complex electroencephalography (EEG) or magnetoencephalography data into simpler components based on statistical assumptions without using a physical model. Applications include brain-computer interfaces, artifact removal, and identifying parallel neural processes. We wish to address the issue of applying BSS to event-related responses, which is challenging because of nonstationary data. We introduce a new BSS approach called momentary-uncorrelated component analysis (MUCA), which is tailored for event-related multitrial data. The method is based on approximate joint diagonalization of multiple covariance matrices estimated from the data at separate latencies. We further show how to extend the methodology for autocovariance matrices and how to apply BSS methods suitable for piecewise stationary data to event-related responses. We compared several BSS approaches by using simulated EEG as well as measured somatosensory and transcranial magnetic stimulation (TMS) evoked EEG. Among the compared methods, MUCA was the most tolerant one to noise, TMS artifacts, and other challenges in the data. With measured somatosensory data, over half of the estimated components were found to be similar by MUCA and independent component analysis. MUCA was also stable when tested with several input datasets. MUCA is based on simple assumptions, and the results suggest that MUCA is robust with nonideal data. Event-related responses and BSS are valuable and popular tools in neuroscience. Correctly designed BSS is an efficient way of identifying artifactual and neural processes from nonstationary event-related data.
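The covariance-diagonalization idea behind such methods can be illustrated in the exactly solvable two-matrix case, where a generalized eigendecomposition jointly diagonalizes covariances from two latencies that share the same mixing matrix. This is a toy stand-in for the approximate multi-matrix joint diagonalization MUCA uses; the mixing matrix and source powers are invented.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)

# Shared mixing matrix A, with source powers that differ between two
# latencies -- the nonstationarity that makes the problem solvable.
A = rng.normal(size=(3, 3))
p1 = np.array([1.0, 2.0, 3.0])   # source powers at latency 1
p2 = np.array([3.0, 1.0, 0.5])   # source powers at latency 2
C1 = A @ np.diag(p1) @ A.T
C2 = A @ np.diag(p2) @ A.T

# Generalized eigenvectors of (C1, C2) give an unmixing matrix that
# diagonalizes both covariances simultaneously.
_, V = eigh(C1, C2)
W = V.T

D1, D2 = W @ C1 @ W.T, W @ C2 @ W.T
off = lambda M: np.abs(M - np.diag(np.diag(M))).max()
print(off(D1) < 1e-8, off(D2) < 1e-8)
```

With more than two covariance matrices estimated from noisy trials, no exact solution exists, which is why MUCA resorts to approximate joint diagonalization.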
Ransom, Katherine M; Grote, Mark N.; Deinhart, Amanda; Eppich, Gary; Kendall, Carol; Sanborn, Matthew E.; Sounders, A. Kate; Wimpenny, Joshua; Yin, Qing-zhu; Young, Megan B.; Harter, Thomas
2016-01-01
Groundwater quality is a concern in alluvial aquifers that underlie agricultural areas, such as in the San Joaquin Valley of California. Shallow domestic wells (less than 150 m deep) in agricultural areas are often contaminated by nitrate. Agricultural and rural nitrate sources include dairy manure, synthetic fertilizers, and septic waste. Knowledge of the relative proportion that each of these sources contributes to nitrate concentration in individual wells can aid future regulatory and land management decisions. We show that nitrogen and oxygen isotopes of nitrate, boron isotopes, and iodine concentrations are a useful, novel combination of groundwater tracers to differentiate between manure, fertilizers, septic waste, and natural sources of nitrate. Furthermore, in this work, we develop a new Bayesian mixing model in which these isotopic and elemental tracers were used to estimate the probability distribution of the fractional contributions of manure, fertilizers, septic waste, and natural sources to the nitrate concentration found in an individual well. The approach was applied to 56 nitrate-impacted private domestic wells located in the San Joaquin Valley. Model analysis found that some domestic wells were clearly dominated by the manure source and suggests evidence for majority contributions from either the septic or fertilizer source for other wells. But, predictions of fractional contributions for septic and fertilizer sources were often of similar magnitude, perhaps because modeled uncertainty about the fraction of each was large. For validation of the Bayesian model, fractional estimates were compared to surrounding land use and estimated source contributions were broadly consistent with nearby land use types.
Rosenheim, B. E.; Firesinger, D.; Roberts, M. L.; Burton, J. R.; Khan, N.; Moyer, R. P.
2016-12-01
Radiocarbon (14C) sediment core chronologies benefit from a high density of dates, even when precision of individual dates is sacrificed. This is demonstrated by a combined approach of rapid 14C analysis of CO2 gas generated from carbonates and organic material coupled with Bayesian statistical modeling. Analysis of 14C is facilitated by the gas ion source on the Continuous Flow Accelerator Mass Spectrometry (CFAMS) system at the Woods Hole Oceanographic Institution's National Ocean Sciences Accelerator Mass Spectrometry facility. This instrument is capable of producing a 14C determination of +/- 100 14C y precision every 4-5 minutes, with limited sample handling (dissolution of carbonates and/or combustion of organic carbon in evacuated containers). Rapid analysis allows over-preparation of samples to include replicates at each depth and/or comparison of different sample types at particular depths in a sediment or peat core. Analysis priority is given to depths that have the least chronologic precision as determined by Bayesian modeling of the chronology of calibrated ages. Use of such a statistical approach to determine the order in which samples are run ensures that the chronology constantly improves so long as material is available for the analysis of chronologic weak points. Ultimately, accuracy of the chronology is determined by the material that is actually being dated, and our combined approach allows testing of different constituents of the organic carbon pool and the carbonate minerals within a core. We will present preliminary results from a deep-sea sediment core abundant in deep-sea foraminifera as well as coastal wetland peat cores to demonstrate statistical improvements in sediment- and peat-core chronologies obtained by increasing the quantity and decreasing the quality of individual dates.
International Nuclear Information System (INIS)
Byun, Hyunsuk; Lee, Chul-Yong
2017-01-01
Generally, consumers use electricity without considering the source the electricity was generated from. Since different energy sources exert varying effects on society, it is necessary to analyze consumers’ latent preference for electricity generation sources. The present study estimates Korean consumers’ marginal utility and an appropriate generation mix is derived using the hierarchical Bayesian logit model in a discrete choice experiment. The results show that consumers consider the danger posed by the source of electricity as the most important factor among the effects of electricity generation sources. Additionally, Korean consumers wish to reduce the contribution of nuclear power from the existing 32% to 11%, and increase that of renewable energy from the existing 4% to 32%. - Highlights: • We derive an electricity mix reflecting Korean consumers’ latent preferences. • We use the discrete choice experiment and hierarchical Bayesian logit model. • The danger posed by the generation source is the most important attribute. • The consumers wish to increase the renewable energy proportion from 4.3% to 32.8%. • Korea's cost-oriented energy supply policy and consumers’ preference differ markedly.
Reverse osmosis brine for phosphorus recovery from source separated urine.
Tian, Xiujun; Wang, Guotian; Guan, Detian; Li, Jiuyi; Wang, Aimin; Li, Jin; Yu, Zhe; Chen, Yong; Zhang, Zhongguo
2016-12-01
Phosphorus (P) recovery from waste streams has recently been recognized as a key step in the sustainable supply of this indispensable and non-renewable resource. The feasibility of using brine from a reverse osmosis (RO) membrane unit treating cooling water as a precipitant for P recovery from source separated urine was evaluated in the present study. P removal efficiency, process parameters and precipitate properties were investigated in batch and continuous flow experiments. More than 90% P removal was obtained from both undiluted fresh and hydrolyzed urines by mixing with RO brine (1:1, v/v) at a pH over 9.0. Around 2.58 and 1.24 kg of precipitates could be recovered from 1 m3 of hydrolyzed and fresh urine, respectively, and the precipitated solids contained 8.1-19.0% P, 10.3-15.2% Ca, 3.7-5.0% Mg and 0.1-3.5% ammonium nitrogen. Satisfactory P removal performance was also achieved in a continuous flow precipitation reactor with a hydraulic retention time of 3-6 h. RO brine could be used as urinal and toilet flush water despite a marginally higher precipitation tendency than tap water. This study provides a widely available, low-cost and efficient precipitant for P recovery in urban areas, which will make P recovery from urine more economically attractive. Copyright © 2016 Elsevier Ltd. All rights reserved.
Decentralized modal identification using sparse blind source separation
International Nuclear Information System (INIS)
Sadhu, A; Hazra, B; Narasimhan, S; Pandey, M D
2011-01-01
Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level, instead. The information at the individual sensor level can then be concatenated to obtain the global structure characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structure mode information using measurements obtained using a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation invoking transformations of measurements to the time–frequency domain resulting in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure
Decentralized modal identification using sparse blind source separation
Sadhu, A.; Hazra, B.; Narasimhan, S.; Pandey, M. D.
2011-12-01
Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level, instead. The information at the individual sensor level can then be concatenated to obtain the global structure characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structure mode information using measurements obtained using a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation invoking transformations of measurements to the time-frequency domain resulting in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure.
The Effects of Environmental Management Systems on Source Separation in the Work and Home Settings
Directory of Open Access Journals (Sweden)
Chris von Borgstede
2012-06-01
Full Text Available Measures that challenge the generation of waste are needed to address the global problem of the increasing volumes of waste that are generated in both private homes and workplaces. Source separation at the workplace is commonly implemented by environmental management systems (EMS). In the present study, the relationship between source separation at work and at home was investigated. A questionnaire that maps psychological and behavioural predictors of source separation was distributed to employees at different workplaces. The results show that respondents with awareness of EMS report higher levels of source separation at work, stronger environmental concern, personal and social norms, and perceive source separation to be less difficult. Furthermore, the results support the notion that after the adoption of EMS at the workplace, source separation at work spills over into source separation in the household. The potential implications for environmental management systems are discussed.
Xia, Yongqiu; Li, Yuefei; Zhang, Xinyu; Yan, Xiaoyuan
2017-01-01
Nitrate (NO3-) pollution is a serious problem worldwide, particularly in countries with intensive agricultural and population activities. Previous studies have used δ15N-NO3- and δ18O-NO3- to determine the NO3- sources in rivers. However, this approach is subject to substantial uncertainties and limitations because of the numerous NO3- sources, the wide isotopic ranges, and the existing isotopic fractionations. In this study, we outline a combined procedure for improving the determination of NO3- sources in a paddy agriculture-urban gradient watershed in eastern China. First, the main sources of NO3- in the Qinhuai River were examined by the dual-isotope biplot approach, in which we narrowed the isotope ranges using site-specific isotopic results. Next, the bacterial groups and chemical properties of the river water were analyzed to verify these sources. Finally, we introduced a Bayesian model to apportion the spatiotemporal variations of the NO3- sources. Denitrification was first incorporated into the Bayesian model because denitrification plays an important role in the nitrogen pathway. The results showed that fertilizer contributed large amounts of NO3- to the surface water in traditional agricultural regions, whereas manure effluents were the dominant NO3- source in intensified agricultural regions, especially during the wet seasons. Sewage effluents were important in all three land uses and exhibited great differences between the dry season and the wet season. This combined analysis quantitatively delineates the proportion of NO3- sources from paddy agriculture to urban river water for both dry and wet seasons and incorporates isotopic fractionation and uncertainties in the source compositions.
International Nuclear Information System (INIS)
Xue Dongmei; De Baets, Bernard; Van Cleemput, Oswald; Hennessy, Carmel; Berglund, Michael; Boeckx, Pascal
2012-01-01
To identify different NO3− sources in surface water and to estimate their proportional contribution to the nitrate mixture in surface water, a dual isotope and a Bayesian isotope mixing model have been applied for six different surface waters affected by agriculture, greenhouses in an agricultural area, and households. Annual mean δ15N–NO3− values were between 8.0 and 19.4‰, while annual mean δ18O–NO3− values were between 4.5 and 30.7‰. SIAR was used to estimate the proportional contribution of five potential NO3− sources (NO3− in precipitation, NO3− fertilizer, NH4+ in fertilizer and rain, soil N, and manure and sewage). SIAR showed that “manure and sewage” contributed most, “soil N”, “NO3− fertilizer” and “NH4+ in fertilizer and rain” contributed moderately, and “NO3− in precipitation” contributed least. The SIAR output can be considered as a “fingerprint” of the NO3− source contributions. However, the wide range of isotope values observed in the surface waters and in the NO3− sources limits its applicability. - Highlights: ► The dual isotope approach (δ15N– and δ18O–NO3−) identifies dominant nitrate sources in 6 surface waters. ► The SIAR model estimates proportional contributions for 5 nitrate sources. ► SIAR is a reliable approach to assess temporal and spatial variations of different NO3− sources. ► The wide range of isotope values observed in surface water and in the nitrate sources limits its applicability. - This paper successfully applied a dual isotope approach and Bayesian isotopic mixing model to identify and quantify 5 potential nitrate sources in surface water.
Back-trajectory modeling of high time-resolution air measurement data to separate nearby sources
Strategies to isolate air pollution contributions from sources is of interest as voluntary or regulatory measures are undertaken to reduce air pollution. When different sources are located in close proximity to one another and have similar emissions, separating source emissions ...
DEFF Research Database (Denmark)
Pires, Sara Monteiro; Hald, Tine
2010-01-01
Salmonella is a major cause of human gastroenteritis worldwide. To prioritize interventions and assess the effectiveness of efforts to reduce illness, it is important to attribute salmonellosis to the responsible sources. Studies have suggested that some Salmonella subtypes have a higher health impact than others. Likewise, some food sources appear to have a higher impact than others. Knowledge of variability in the impact of subtypes and sources may provide valuable added information for research, risk management, and public health strategies. We developed a Bayesian model that attributes illness to specific sources and allows for a better estimation of the differences in the ability of Salmonella subtypes and food types to result in reported salmonellosis. The model accommodates data for multiple years and is based on the Danish Salmonella surveillance. The number of sporadic cases caused
Separation of source and propagation effects at regional distances
Energy Technology Data Exchange (ETDEWEB)
Goldstein, P.; Jarpe, S.; Mayeda, K. [Lawrence Livermore National Lab., CA (United States)] [and others]
1994-12-31
Improved estimates of the contributions of source and propagation effects to regional seismic signals are needed to explain the performance of existing discriminants and to help develop more robust methods for identifying underground explosions. In this paper, we use close-in, local, and regional estimates of explosion source time functions to remove source effects from regional recordings of the Non-Proliferation Experiment (NPE), a one kiloton chemical explosion in N-tunnel at Rainier Mesa on the Nevada Test Site, and nearby nuclear explosions and earthquakes. Using source corrected regional waveforms, we find that regional Pg and Lg spectra of shallow explosions have significant low frequency (approximately 1 Hz) enhancements when compared to normal depth earthquakes. Data and simulations suggest that such enhancements are most sensitive to source depth, but may also be a function of mechanism, source receiver distance, and regional structure.
Residents’ Household Solid Waste (HSW) Source Separation Activity: A Case Study of Suzhou, China
Directory of Open Access Journals (Sweden)
Hua Zhang
2014-09-01
Full Text Available Though the Suzhou government has promoted household solid waste (HSW) source separation since 2000, the program remains largely ineffective. Between January and March 2014, the authors conducted an intercept survey in five different community groups in Suzhou, and 505 valid surveys were completed. Based on the survey, the authors used an ordered probit regression to study residents’ HSW source separation activities, both for Suzhou as a whole and for the five community groups. Results showed that 43% of the respondents in Suzhou thought they knew how to source-separate HSW, and 29% of them source-separated HSW accurately. The results also indicated that the current HSW source separation pilot program in Suzhou is effective, as both HSW source separation facilities and residents’ separation behavior improved as the program was implemented. The main determinants of residents’ HSW source separation behavior are residents’ age, HSW source separation facilities and government preferential policies. Accessibility of the waste management service is particularly important. Attitudes and willingness do not have significant impacts on residents’ HSW source separation behavior.
Directory of Open Access Journals (Sweden)
Meng Wang
2016-08-01
A high concentration of nitrate (NO3−) in surface water threatens aquatic systems and human health. Revealing nitrate characteristics and identifying its sources are fundamental to making effective water management strategies. However, nitrate sources in multi-tributary and mixed land use watersheds remain unclear. In this study, based on 20 surface water sampling sites monitored for more than two years, from April 2012 to December 2014, water chemistry and dual isotopic approaches (δ15N-NO3− and δ18O-NO3−) were integrated for the first time to evaluate nitrate characteristics and sources in the Huashan watershed, Jianghuai hilly region, China. Nitrate-nitrogen concentrations (ranging from 0.02 to 8.57 mg/L) were spatially heterogeneous, influenced by hydrogeological and land use conditions. Proportional contributions of five potential nitrate sources (i.e., precipitation; manure and sewage, M & S; soil nitrogen, NS; nitrate fertilizer; nitrate derived from ammonia fertilizer and rainfall) were estimated by using a Bayesian isotope mixing model. The results showed that nitrate source contributions varied significantly among different rainfall conditions and land use types. For the whole watershed, M & S (manure and sewage) and NS (soil nitrogen) were the major nitrate sources in both wet and dry seasons (from 28% to 36% for manure and sewage and from 24% to 27% for soil nitrogen, respectively). Overall, combining the dual isotope method with a Bayesian isotope mixing model offered a useful and practical way to qualitatively analyze nitrate sources and transformations as well as quantitatively estimate the contributions of potential nitrate sources in drinking water source watersheds of the Jianghuai hilly region, eastern China.
Synthesis of blind source separation algorithms on reconfigurable FPGA platforms
Du, Hongtao; Qi, Hairong; Szu, Harold H.
2005-03-01
Recent advances in intelligence technology have boosted the development of micro Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and ScanEagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, where jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been addressed by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization to unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the human visual system, in which agreement between the pair of eyes indicates "signal", and disagreement indicates jitter noise. Using this non-statistical method, a deterministic blind source separation (BSS) process can be carried out independently for each pixel, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallel-structured independent component analysis (ICA) algorithm was implemented on both Field Programmable Gate Array (FPGA) and Application
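The ICA step at the core of such BSS pipelines can be illustrated with a compact kurtosis-based FastICA on a synthetic two-channel mixture. This is a generic sketch of ICA separation, not Szu's AR/ANN algorithm or the FPGA implementation; the sources, mixing matrix, and sample counts are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.linspace(0, 1, n)
# Two non-Gaussian (sub-Gaussian) sources: a sawtooth and uniform noise
s1 = 2 * (t * 5 % 1) - 1
s2 = rng.uniform(-1, 1, n)
S = np.vstack([s1, s2])
A = np.array([[0.8, 0.3], [0.4, 0.9]])   # hypothetical mixing matrix
X = A @ S                                # observed mixtures

# Center and whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Kurtosis-based FastICA fixed-point iteration with deflation
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Z
        w_new = (Z * wx ** 3).mean(axis=1) - 3 * w
        for j in range(i):               # deflate against found components
            w_new -= (w_new @ W[j]) * W[j]
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-12
        w = w_new
        if done:
            break
    W[i] = w
S_est = W @ Z                            # recovered sources (up to sign/order)
```

As with all ICA methods, the sources are recovered only up to sign and permutation, which is why recovery is checked by correlation rather than by direct comparison.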
Zhou, X.; Albertson, J. D.
2016-12-01
Natural gas is considered a bridge fuel towards clean energy due to its potentially lower greenhouse gas emissions compared with other fossil fuels. Despite numerous efforts, an efficient and cost-effective approach to monitor fugitive methane emissions along the natural gas production-supply chain has not yet been developed. Recently, mobile methane measurement has been introduced, which applies a Bayesian approach to probabilistically infer methane emission rates and update estimates recursively as new measurements become available. However, the likelihood function, especially the error term which determines the shape of the estimate uncertainty, has not been rigorously defined and evaluated with field data. To address this issue, we performed a series of near-source measurements using a specialized vehicle mounted with fast-response methane analyzers and a GPS unit. Methane concentrations were measured at two different heights along mobile traversals downwind of the sources, and concurrent wind and temperature data were recorded by nearby 3-D sonic anemometers. With known methane release rates, the measurements were used to determine the functional form and the parameterization of the likelihood function in the Bayesian inference scheme under different meteorological conditions.
Source Separation of Heartbeat Sounds for Effective E-Auscultation
Geethu, R. S.; Krishnakumar, M.; Pramod, K. V.; George, Sudhish N.
2016-03-01
This paper proposes a cost-effective solution for improving the effectiveness of e-auscultation. Auscultation is among the most difficult skills for a doctor, since it can be acquired only through experience. The heart sound mixtures are captured by placing four sensors at appropriate auscultation areas on the body. These sound mixtures are separated into their relevant components by independent component analysis, a statistical method. The separated heartbeat sounds can be further processed or stored for future reference. This idea can be used to make a low-cost, easy-to-use portable instrument that will benefit people living in remote areas who are unable to take advantage of advanced diagnostic methods.
Phan, Kevin; Xie, Ashleigh; Kumar, Narendra; Wong, Sophia; Medi, Caroline; La Meir, Mark; Yan, Tristan D
2015-08-01
Simplified maze procedures involving radiofrequency, cryoenergy and microwave energy sources have been increasingly utilized for surgical treatment of atrial fibrillation as an alternative to the traditional cut-and-sew approach. In the absence of direct comparisons, a Bayesian network meta-analysis offers an alternative way to assess the relative effects of different treatments using indirect evidence. A Bayesian meta-analysis of indirect evidence was performed using 16 published randomized trials identified from 6 databases. Rank probability analysis was used to rank each intervention in terms of its probability of having the best outcome. Sinus rhythm prevalence beyond the 12-month follow-up was similar between the cut-and-sew, microwave and radiofrequency approaches, which were all ranked better than cryoablation (39, 36, and 25 vs 1%, respectively). The cut-and-sew maze was ranked worst in terms of mortality outcomes compared with microwave, radiofrequency and cryoenergy (2 vs 19, 34, and 24%, respectively). The cut-and-sew maze procedure was associated with significantly lower stroke rates compared with microwave ablation [odds ratio <0.01; 95% confidence interval 0.00, 0.82], and ranked best in terms of pacemaker requirements compared with microwave, radiofrequency and cryoenergy (81 vs 14, 1 and <0.01%, respectively). Bayesian rank probability analysis shows that the cut-and-sew approach is associated with the best outcomes in terms of sinus rhythm prevalence and stroke outcomes, and remains the gold standard approach for AF treatment. Given the limitations of indirect comparison analysis, these results should be viewed with caution and not over-interpreted. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
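Rank probability analysis of the kind described reduces, given posterior draws of each treatment's effect, to counting how often each arm is best across draws. A minimal sketch follows; the means and standard deviations of the draws are illustrative placeholders, not the meta-analysis results.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10000
# Hypothetical posterior draws of a 12-month sinus-rhythm effect
# (log-odds scale); all means/SDs are invented for illustration.
draws = {
    "cut_and_sew": rng.normal(1.20, 0.30, n_draws),
    "radiofrequency": rng.normal(1.10, 0.30, n_draws),
    "microwave": rng.normal(1.15, 0.40, n_draws),
    "cryoenergy": rng.normal(0.60, 0.35, n_draws),
}
names = list(draws)
mat = np.column_stack([draws[k] for k in names])
best = mat.argmax(axis=1)                  # best arm in each posterior draw
rank_prob = {k: float((best == i).mean()) for i, k in enumerate(names)}
```

`rank_prob[k]` is the probability that intervention `k` has the best outcome; in a full analysis the draws would come from the fitted network meta-analysis model rather than assumed normals.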
Source separation on hyperspectral cube applied to dermatology
Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.
2010-03-01
This paper proposes a method of quantification of the components underlying the human skin that are supposed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength. The method is based on independent component analysis assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is fixed using a polynomial fit and the quantifications associated with it are reestimated. The results produce feasible quantifications of each source component in the examined skin patch.
Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.
2014-12-01
Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained by different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from waveform data in multiple period bands using a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal source rupture behavior of this event in more detail, we introduce a new fault surface model with finer sub-fault size and estimate source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 * 16 km^2. The estimated source models in the multiple period bands show the following source image: (1) A first deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves. (2) A shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long-period (50-100 s) seismic waves. (3) A second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture. (4) Deep
Overcomplete Blind Source Separation by Combining ICA and Binary Time-Frequency Masking
DEFF Research Database (Denmark)
Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan
2005-01-01
a novel method for over-complete blind source separation. Two powerful source separation techniques have been combined, independent component analysis and binary time-frequency masking. Hereby, it is possible to iteratively extract each speech signal from the mixture. By using merely two microphones we...
DEFF Research Database (Denmark)
Oh, Geok Lian
properties such as the elastic wave speeds and soil densities. One processing method is casting the estimation problem into an inverse problem to solve for the unknown material parameters. The forward model for the seismic signals used in the literatures include ray tracing methods that consider only...... density values of the discretized ground medium, which leads to time-consuming computations and instability behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to non-exact forward model that introduces errors. The Bayesian inversion method through...... the first arrivals of the reflected compressional P-waves from the subsurface structures, or 3D elastic wave models that model all the seismic wave components. The ray tracing forward model formulation is linear, whereas the full 3D elastic wave model leads to a nonlinear inversion problem. In this Ph...
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In a sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of any single source. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to each source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative process; the equivalent source strengths of one source of interest are then used to calculate the pressure field generated by that source alone. A numerical simulation of two baffled circular pistons demonstrates that the proposed method is effective in separating the non-stationary pressure generated by each source in both the time and space domains. An experiment with two speakers in a semi-anechoic chamber further demonstrates the effectiveness of the method.
How Many Separable Sources? Model Selection In Independent Components Analysis
DEFF Research Database (Denmark)
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian....
Empirical Study on Factors Influencing Residents' Behavior of Separating Household Wastes at Source
Institute of Scientific and Technical Information of China (English)
Qu Ying; Zhu Qinghua; Murray Haight
2007-01-01
Source separation is the basic premise for making effective use of household wastes. In eight cities of China, however, several pilot projects of source separation ultimately failed because of poor resident participation rates. To solve this problem, identifying the factors that influence residents' source separation behavior becomes crucial. By means of a questionnaire survey, we conducted descriptive analysis and exploratory factor analysis. The results show that trouble-feeling, moral notion, environment protection, public education, environment value and knowledge deficiency are the main factors that influence residents' decisions to separate their household wastes. According to the contribution percentage of these six main factors to the total source separation behavior, their influencing power is also analyzed, which provides suggestions on household waste management for policy makers and decision makers in China.
Single-channel source separation using non-negative matrix factorization
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard
-determined and its solution relies on making appropriate assumptions concerning the sources. This dissertation is concerned with model-based probabilistic single-channel source separation based on non-negative matrix factorization, and consists of two parts: i) three introductory chapters and ii) five published...... papers. The first part introduces the single-channel source separation problem as well as non-negative matrix factorization and provides a comprehensive review of existing approaches, applications, and practical algorithms. This serves to provide context for the second part, the published papers......, in which a number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments and separating different types of tissue in chemical shift imaging....
Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Lei Chen
2014-01-01
The computational cost of blind source separation based on bio-inspired intelligence optimization is high. To solve this problem, we propose an effective blind source separation algorithm based on the artificial bee colony algorithm. In the proposed algorithm, the covariance ratio of the signals is used as the objective function, and the artificial bee colony algorithm is used to optimize it. Each source signal component, once separated out, is removed from the mixtures using the deflation method. All the source signals can be recovered successfully by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm achieves significant improvements in both computational cost and separation quality compared to previous algorithms.
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment, and an acoustic propagation code in the measurement model. Data from the MREA/BP07 sea trials are tested, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over 0.7-1.6 km in range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is not only more accurate but also more efficient. Due to frequency diversity, the processing of modulated signals produces more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core-logged P-wave velocity, and previous inversion results with fixed geometries.
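The sequential filtering idea, a random-walk state model updated by each new transmission, can be sketched for a single parameter with a scalar Kalman filter. The numbers below are hypothetical, and the real filters propagate many parameters through an acoustic propagation code rather than observing the state directly.

```python
import random

def kalman_step(m, P, y, q, r):
    """One predict/update step of a scalar random-walk Kalman filter.
    m, P: prior mean/variance; y: new measurement;
    q: process (random-walk) variance; r: measurement variance."""
    P_pred = P + q                    # predict under x_k = x_{k-1} + w
    K = P_pred / (P_pred + r)         # Kalman gain for y = x + v
    m = m + K * (y - m)
    P = (1.0 - K) * P_pred
    return m, P

random.seed(3)
true_c = 1500.0                       # hypothetical parameter to track (m/s)
m, P = 1450.0, 100.0 ** 2             # diffuse prior
for _ in range(50):                   # repeated noisy "transmissions"
    y = true_c + random.gauss(0.0, 5.0)
    m, P = kalman_step(m, P, y, q=0.01, r=5.0 ** 2)
```

Each update shrinks the posterior variance `P`, which is the practical advantage of the sequential approach over re-running independent global optimizations per transmission.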
A Separation Algorithm for Sources with Temporal Structure Only Using Second-order Statistics
Directory of Open Access Journals (Sweden)
J.G. Wang
2013-09-01
Unlike conventional blind source separation (BSS), which deals with independent identically distributed (i.i.d.) sources, this paper addresses the separation of mixtures of sources with temporal structure, such as linear autocorrelations. Many sequential extraction algorithms have been reported, but the deflation scheme they use inevitably accumulates errors. We propose a robust separation algorithm that recovers the original sources simultaneously, through a joint diagonalizer of several averaged delayed covariance matrices at the optimal time delay and its integer multiples. The proposed algorithm is computationally simple and efficient, since it is based on second-order statistics only. Extensive simulation results confirm the validity and high performance of the algorithm. Compared with related extraction algorithms, its separation signal-to-noise ratio for a desired source can be up to 20 dB higher, and it appears rather insensitive to estimation error in the time delay.
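Second-order-statistics separation of temporally structured sources can be sketched with the classic AMUSE procedure: whiten the mixtures, then eigendecompose a single time-lagged covariance matrix. This single-delay relative of the joint-diagonalization approach described above is a simplified illustration, not the paper's algorithm; the sources and mixing matrix are invented.

```python
import numpy as np

def amuse(X, tau=1):
    """AMUSE: second-order BSS via eigendecomposition of a lagged
    covariance of the whitened mixtures. X: (n_sources, n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    n = X.shape[1]
    # whiten the mixtures
    d, E = np.linalg.eigh(X @ X.T / n)
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    # symmetrized time-lagged covariance; its eigenvectors give the
    # remaining rotation when lagged eigenvalues are distinct
    C = Z[:, :-tau] @ Z[:, tau:].T / (n - tau)
    C = (C + C.T) / 2
    _, U = np.linalg.eigh(C)
    return U.T @ Z

rng = np.random.default_rng(0)
n = 10000
t = np.arange(n)
# temporally structured sources with distinct lag-1 autocorrelations
s1 = np.sin(2 * np.pi * t / 50)
s2 = np.sin(2 * np.pi * t / 13)
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ S
S_est = amuse(X)                      # recovered up to sign/permutation
```

AMUSE fails when two sources share the same autocorrelation at the chosen lag, which is precisely what joint diagonalization over several delays is designed to avoid.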
Separation of radiation from two sources from their known radiated sum field
DEFF Research Database (Denmark)
Laitinen, Tommi; Pivnenko, Sergey
2011-01-01
This paper presents a technique for complete and exact separation of the radiated fields of two sources (at the same frequency) from the knowledge of their radiated sum field. The two sources can be arbitrary but it must be possible to enclose the sources inside their own non-intersecting minimum...
Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin
2015-05-01
Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises to promote source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 Euros tonne(-1)) compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had the least interest and may oppose the start-up of a new recycling system, while small recycling enterprises had the greatest interest in promoting the incentive-based source separation model but the least ability to change the current recycling system. Strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.
Jakkareddy, Pradeep S.; Balaji, C.
2016-09-01
This paper employs the Bayesian-based Metropolis-Hastings Markov chain Monte Carlo algorithm to solve the inverse heat transfer problem of determining the spatially varying heat transfer coefficient on a flat plate with flush-mounted discrete heat sources, from temperatures measured at the bottom of the plate. The Nusselt number is assumed to be of the form Nu = a Re^b (x/l)^c. To input reasonable values of 'a' and 'b' into the inverse problem, limited two-dimensional conjugate convection simulations were first done with Comsol. Guided by these, different values of 'a' and 'b' were input to a computationally less complex problem of conjugate conduction in the flat plate (15 mm thick), and temperature distributions at the bottom of the plate, a more convenient location for measuring temperatures without disturbing the flow, were obtained. Since the goal of this work is to demonstrate the efficacy of the Bayesian approach in accurately retrieving 'a' and 'b', numerically generated temperatures with known values of 'a' and 'b' are treated as 'surrogate' experimental data. The inverse problem is then solved by repeatedly using the forward solutions together with the MH-MCMC approach. To speed up the estimation, the forward model is replaced by an artificial neural network. The mean, maximum a posteriori and standard deviation of the estimated parameters 'a' and 'b' are reported. The robustness of the proposed method is examined by synthetically adding noise to the temperatures.
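The MH-MCMC retrieval of 'a' and 'b' from surrogate data can be illustrated with a stripped-down sketch: a power-law forward model y = a x^b stands in for the Nusselt correlation (with the exponent c dropped), surrogate data are generated with known parameters, and a random-walk Metropolis sampler explores the posterior. All numbers are hypothetical and the real forward model is a conduction solver (or its neural-network surrogate), not a closed-form power law.

```python
import math, random

random.seed(7)
# Surrogate data from a power-law forward model y = a * x**b + noise,
# standing in for Nu = a Re^b (x/l)^c with c dropped; values hypothetical.
a_true, b_true, sigma = 0.6, 0.5, 0.05
xs = [10.0 * (i + 1) for i in range(40)]
ys = [a_true * x ** b_true + random.gauss(0, sigma) for x in xs]

def log_post(a, b):
    """Gaussian likelihood with a flat prior on a box."""
    if not (0.0 < a < 5.0 and 0.0 < b < 2.0):
        return -math.inf
    sse = sum((y - a * x ** b) ** 2 for x, y in zip(xs, ys))
    return -sse / (2.0 * sigma ** 2)

def metropolis_hastings(n_iter=20000):
    a, b = 0.5, 0.45                  # start near a plausible region
    lp = log_post(a, b)
    samples = []
    for _ in range(n_iter):
        a_p = a + random.gauss(0, 0.02)    # random-walk proposals
        b_p = b + random.gauss(0, 0.005)
        lp_p = log_post(a_p, b_p)
        if math.log(random.random()) < lp_p - lp:
            a, b, lp = a_p, b_p, lp_p      # accept
        samples.append((a, b))
    return samples[n_iter // 2:]           # discard burn-in

post = metropolis_hastings()
a_hat = sum(s[0] for s in post) / len(post)
b_hat = sum(s[1] for s in post) / len(post)
```

The retained samples give the posterior mean and spread of the parameters directly, which is how the paper reports mean, MAP and standard deviation of 'a' and 'b'.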
DEFF Research Database (Denmark)
Stahlhut, Carsten; Mørup, Morten; Winther, Ole
2011-01-01
We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...
Stationary plasma source of heavy ions for imitating research at the separator
International Nuclear Information System (INIS)
Yuferov, V.B.; Sharyj, S.V.; Seroshtanov, V.A.
2008-01-01
The choice of an imitation gas mixture for experiments on the demonstration imitation separator has been substantiated. The design of the plasma source has been modified. The operating conditions have been investigated and a comparative analysis of the obtained characteristics has been carried out.
Fate of pharmaceuticals in full-scale source separated sanitation system
Butkovskyi, A.; Hernandez Leal, L.; Rijnaarts, H.H.M.; Zeeman, G.
2015-01-01
Removal of 14 pharmaceuticals and 3 of their transformation products was studied in a full-scale source separated sanitation system with separate collection and treatment of black water and grey water. Black water is treated in an up-flow anaerobic sludge blanket (UASB) reactor followed by
Cohen, M.S.; Gulbinaite, R.
2017-01-01
Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differen...
Dutta, Rishabh
2017-12-20
Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (MW 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 m to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the mainshock increased stress on the fault and brought it closer to failure.
Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes
2018-04-01
Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido
2018-03-01
High-resolution, remotely sensed images of the Earth's surface have proven helpful in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have recently been proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open-source toolbox conceived to produce flood maps from remotely sensed and other ancillary information through a data fusion approach. DAFNE is based on Bayesian Networks and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.
Directory of Open Access Journals (Sweden)
Sujitra Vassanadumrongdee
2018-03-01
Full Text Available Source separation for recycling has been recognized as a way to achieve sustainable municipal solid waste (MSW) management. However, most developing countries, including Thailand, have been facing a lack of recycling facilities and a low level of source separation practice. By employing questionnaire surveys, this study investigated Bangkok residents' source separation intention and willingness to pay (WTP) for improving MSW service and recycling facilities (n = 1076). This research extended the theory of planned behavior to explore the effects of both internal and external factors. The survey highlighted perceived inconvenience and mistrust of MSW collection as major barriers to carrying out source separation in Bangkok. Promoting source separation at the workplace may create a spill-over effect on people's intention to recycle their waste at home. Both subjective norms and knowledge of the MSW situation were found to be positively correlated with Bangkok residents' source separation intention and WTP (p < 0.001). Besides, the average WTP values are higher than the existing rate for waste collection service, which shows that Bangkok residents have a preference for recycling programs. However, the WTP figures are still much lower than the average MSW management cost. These findings suggest that the Bangkok Metropolitan Administration target improving people's knowledge of waste problems that could have an adverse impact on the economy and well-being of Bangkok residents, and improve its MSW collection service, as these factors have a positive influence on residents' WTP.
Darsinos, T.; Satchell, S.E.
2001-01-01
Bayesian statistical methods are naturally oriented towards pooling in a rigorous way information from separate sources. It has been suggested that both historical and implied volatilities convey information about future volatility. However, typically in the literature implied and return volatility series are fed separately into models to provide rival forecasts of volatility or options prices. We develop a formal Bayesian framework where we can merge the backward looking information as r...
Separation of Correlated Astrophysical Sources Using Multiple-Lag Data Covariance Matrices
Directory of Open Access Journals (Sweden)
Baccigalupi C
2005-01-01
Full Text Available This paper proposes a new strategy to separate astrophysical sources that are mutually correlated. This strategy is based on second-order statistics and exploits prior information about the possible structure of the mixing matrix. Unlike ICA blind separation approaches, where the sources are assumed mutually independent and no prior knowledge is assumed about the mixing matrix, our strategy allows the independence assumption to be relaxed and performs the separation of even significantly correlated sources. Besides the mixing matrix, our strategy is also capable of evaluating the source covariance functions at several lags. Moreover, once the mixing parameters have been identified, a simple deconvolution can be used to estimate the probability density functions of the source processes. To benchmark our algorithm, we used a database that simulates the one expected from the instruments that will operate onboard ESA's Planck Surveyor Satellite to measure the CMB anisotropies all over the celestial sphere.
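Second-order separation of this kind rests on lagged covariance matrices. As a hedged illustration of the classical ingredient (not the paper's correlated-source extension, which relaxes the independence assumption), the sketch below implements an AMUSE-style algorithm: whiten with the zero-lag covariance, then diagonalize a symmetrized lagged covariance to recover sources with distinct temporal structure. The mixing matrix and signals are synthetic:

```python
import numpy as np

n = 20000
t = np.arange(n)
# Two zero-mean sources with different temporal structure (distinct spectra).
s1 = np.sin(2 * np.pi * 0.01 * t)
s2 = np.sign(np.sin(2 * np.pi * 0.003 * t))
S = np.vstack([s1, s2])
A = np.array([[0.9, 0.4], [0.3, 0.8]])    # unknown mixing matrix
X = A @ S                                  # observed mixtures

def amuse(X, lag=1):
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening from the zero-lag covariance.
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W @ X
    # Symmetrized lagged covariance; its eigenvectors give the rotation.
    C1 = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    C1 = (C1 + C1.T) / 2
    _, U = np.linalg.eigh(C1)
    return U.T @ Z

Y = amuse(X)
# Each recovered row should match one true source up to sign and scale.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(np.round(corr, 2))
```

Mutually correlated sources break the orthogonality assumption this rotation relies on, which is exactly the gap the multiple-lag strategy in the abstract addresses.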
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of
Directory of Open Access Journals (Sweden)
O. Tichý
2017-10-01
Full Text Available In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most
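The core of such a source term reconstruction is a linear inverse problem: measured concentrations equal an SRS matrix times the emission vector, with emissions constrained to be nonnegative. LS-APC adds an adaptive Bayesian prior; the sketch below substitutes plain nonnegative least squares on synthetic data to show the structure of the inversion (the SRS matrix, segment count and noise level are made up):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
# Hypothetical SRS matrix: 30 measurements x 10 release time segments.
SRS = rng.uniform(0, 1, (30, 10))
x_true = np.zeros(10)
x_true[3:6] = [2.0, 5.0, 1.0]            # activity emitted in three segments
y = SRS @ x_true + rng.normal(0, 0.05, 30)

# Nonnegativity is the key physical constraint on emissions.
x_hat, resid = nnls(SRS, y)
print(np.round(x_hat, 2))
```

Evaluating the release location then amounts to repeating such an inversion for candidate grid cells and comparing them, e.g. by Bayesian model selection as in the abstract.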
DEFF Research Database (Denmark)
Naroznova, Irina; Møller, Jacob; Scheutz, Charlotte
2013-01-01
The environmental performance of two pretreatment technologies for source-separated organic waste was compared using life cycle assessment (LCA). An innovative pulping process where source-separated organic waste is pulped with cold water forming a volatile solid rich biopulp was compared to a more...... including a number of non-toxic and toxic impact categories were assessed. No big difference in the overall performance of the two technologies was observed. The difference for the separate life cycle steps was, however, more pronounced. More efficient material transfer in the scenario with waste pulping...
Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)
DEFF Research Database (Denmark)
Stahlhut, Carsten; Mørup, Morten; Winther, Ole
2009-01-01
In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and ele...
Materials recovery system for source-separated noncombustible rubbish and bulky waste in Nishinomiya
Energy Technology Data Exchange (ETDEWEB)
Adachi, Yoshihiro
1987-01-01
Since 1980, the city of Nishinomiya has been recovering materials from source-separated non-combustible and bulky waste to reduce the amount of final disposal. Materials amounting to 33-39% of the throughput are recovered in the Shredding and Separation Facility, which consists of a manual separation system, a mechanical separation system, a shredder, a pair of shears and incinerators. The facility system is shown in order of processing of the waste. The secondary pollution control, safety equipment, instrumentation, etc., are also described. The recovery percentage and use of revenues are explained in detail.
Prospects of Source-Separation-Based Sanitation Concepts: A Model-Based Study
Directory of Open Access Journals (Sweden)
Cees Buisman
2013-07-01
Full Text Available Separation of different domestic wastewater streams and targeted on-site treatment for resource recovery has been recognized as one of the most promising sanitation concepts to re-establish the balance in carbon, nutrient and water cycles. In this study a model was developed based on literature data to compare energy and water balance, nutrient recovery, chemical use, effluent quality and land area requirement in four different sanitation concepts: (1) centralized; (2) centralized with source-separation of urine; (3) source-separation of black water, kitchen refuse and grey water; and (4) source-separation of urine, feces, kitchen refuse and grey water. The highest primary energy consumption, 914 MJ per capita (cap) per year, was attained within the centralized sanitation concept, and the lowest, 437 MJ/cap/year, was attained with source-separation of urine, feces, kitchen refuse and grey water. Grey water bio-flocculation and subsequent grey water sludge co-digestion decreased the primary energy consumption, but was not energetically favorable to couple with grey water effluent reuse. Source-separation of urine improved the energy balance, nutrient recovery and effluent quality, but required a larger land area and higher chemical use than the centralized concept.
Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.
2017-12-01
Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connections) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated based on hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of the contributions of different water sources to a given site, along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" sources (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results. The HB isotope mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are
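A minimal version of an isotope mixing model can be written down directly: with two end-member sources and one measured value, the posterior over the mixing fraction follows from Bayes' rule on a grid. The end-member values, observation and uncertainty below are invented placeholders, and a real hierarchical model would treat many sites and sources jointly:

```python
import numpy as np

# Hypothetical end-member isotope values (delta-18O, permil) for two sources.
mu_a, mu_b = -16.0, -12.0        # e.g. mountain runoff vs. groundwater (assumed)
obs, sigma = -14.2, 0.3          # measured tap-water value and its uncertainty

# Grid posterior over the fraction f of source A, with a uniform prior.
f = np.linspace(0, 1, 1001)
pred = f * mu_a + (1 - f) * mu_b           # two-source mass balance
log_like = -0.5 * ((obs - pred) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()

f_mean = np.sum(f * post)
f_sd = np.sqrt(np.sum((f - f_mean) ** 2 * post))
print(f"fraction from source A: {f_mean:.2f} +/- {f_sd:.2f}")
```

The posterior spread, here set by the measurement uncertainty relative to the end-member separation, is exactly the "robust estimate of uncertainty" the abstract refers to.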
A particle velocity based method for separating all multi incoherent sound sources
Winkel, J.C.; Yntema, Doekle Reinder; Druyvesteyn, W.F.; de Bree, H.E.
2006-01-01
In this paper we present a method to separate the contributions of different uncorrelated sound sources to the total sound field. When the contribution of each sound source to the total sound field is known, techniques with array-applications like direct sound field measurements or inverse acoustics
Cohen, Michael X
2017-09-27
The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
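The two computational ingredients named above, a time-delay-embedding matrix and a generalized eigendecomposition of covariance matrices, can be sketched on synthetic data. Here a 6 Hz rhythm is buried in noise; embedding a single channel into delayed copies and solving the generalized eigenproblem between "task" and "baseline" covariances yields a temporal filter maximizing their power ratio (all signal parameters are assumptions for the demo):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
fs, n, m = 200, 4000, 20                 # sample rate, samples, embedding dims
t = np.arange(n) / fs
baseline = rng.normal(0, 0.5, n)                         # broadband noise only
task = rng.normal(0, 0.5, n) + np.sin(2 * np.pi * 6 * t)  # noise + 6 Hz rhythm

def delay_embed(x, m):
    # Rows are time-delayed copies of x (the time-delay-embedding matrix).
    return np.array([x[i:len(x) - m + i] for i in range(m)])

def cov(E):
    E = E - E.mean(axis=1, keepdims=True)
    return E @ E.T / E.shape[1]

S, R = cov(delay_embed(task, m)), cov(delay_embed(baseline, m))
evals, evecs = eigh(S, R)                # generalized eigendecomposition
w = evecs[:, -1]                         # temporal filter with max S/R ratio
print(f"top generalized eigenvalue (power ratio): {evals[-1]:.1f}")
```

The filter emerges from the data themselves, illustrating the abstract's point that no sinusoidal narrowband filter needs to be imposed.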
DEFF Research Database (Denmark)
Fakhry, Mahmoud; Svaizer, Piergiorgio; Omologo, Maurizio
2017-01-01
In Gaussian model-based multichannel audio source separation, the likelihood of observed mixtures of source signals is parametrized by source spectral variances and by associated spatial covariance matrices. These parameters are estimated by maximizing the likelihood through an expectation-maximization algorithm and used to separate the signals by means of multichannel Wiener filtering. We propose to estimate these parameters by applying nonnegative factorization based on prior information on source variances. In the nonnegative factorization, spectral basis matrices can be defined as the prior information. The matrices can be either extracted or indirectly made available through a redundant library that is trained in advance. In a separate step, applying nonnegative tensor factorization, two algorithms are proposed in order to either extract or detect the basis matrices that best represent...
Multiple Speech Source Separation Using Inter-Channel Correlation and Relaxed Sparsity
Directory of Open Access Journals (Sweden)
Maoshen Jia
2018-01-01
Full Text Available In this work, a multiple speech source separation method using inter-channel correlation and relaxed sparsity is proposed. A B-format microphone with four spatially located channels is adopted because its compact array size preserves the spatial parameter integrity of the original signal. Specifically, we first measure the proportion of overlapped components among multiple sources and find that there exist many overlapped time-frequency (TF) components as the number of sources increases. Then, considering the relaxed sparsity of speech sources, we propose a dynamic threshold-based separation approach for sparse components, where the threshold is determined by the inter-channel correlation among the recording signals. After conducting a statistical analysis of the number of active sources at each TF instant, a form of relaxed sparsity called the half-K assumption is proposed, under which the number of active sources in a certain TF bin does not exceed half the total number of simultaneously occurring sources. By applying the half-K assumption, the non-sparse components are recovered by using the extracted sparse components as a guide, combined with vector decomposition and matrix factorization. Eventually, the final TF coefficients of each source are recovered by the synthesis of sparse and non-sparse components. The proposed method has been evaluated using up to six simultaneous speech sources under both anechoic and reverberant conditions. Both objective and subjective evaluations validated that the perceptual quality of the speech separated by the proposed approach outperforms that of existing blind source separation (BSS) approaches. The method is also robust across different speech material, producing separated speech signals of similar perceptual quality.
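The sparsity premise underlying such TF-masking methods can be checked numerically: if sources rarely overlap in the time-frequency plane, most active TF bins are dominated by a single source. The sketch below measures this W-disjoint-orthogonality-style statistic for two synthetic gated chirps (stand-ins for speech, not the paper's B-format recordings or half-K procedure):

```python
import numpy as np
from scipy.signal import stft

fs, n = 8000, 8000
t = np.arange(n) / fs
# Two synthetic "speech-like" signals occupying different TF regions:
# gated chirps sweeping 300-700 Hz and 1500-700 Hz respectively.
s1 = np.sin(2 * np.pi * (300 + 200 * t) * t) * (np.sin(2 * np.pi * 3 * t) > 0)
s2 = np.sin(2 * np.pi * (1500 - 400 * t) * t) * (np.sin(2 * np.pi * 5 * t) > 0)

_, _, S1 = stft(s1, fs, nperseg=256)
_, _, S2 = stft(s2, fs, nperseg=256)
P1, P2 = np.abs(S1) ** 2, np.abs(S2) ** 2

# A TF bin is "sparse" if one source carries nearly all of its energy.
total = P1 + P2
active = total > 1e-3 * total.max()
dominance = np.maximum(P1, P2)[active] / total[active]
print(f"fraction of active bins dominated (>90%) by one source: "
      f"{np.mean(dominance > 0.9):.2f}")
```

The abstract's point is that this fraction drops as more simultaneous talkers are added, which is what motivates relaxing strict sparsity to the half-K assumption.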
Separation of zeros for source signature identification under reverberant path conditions.
Hasegawa, Tomomi; Tohyama, Mikio
2011-10-01
This paper presents an approach to distinguishing the zeros representing a sound source from those representing the transfer function on the basis of Lyon's residue-sign model. In machinery noise diagnostics, the source signature must be separated from observation records under reverberant path conditions. In numerical examples and an experimental piano-string vibration analysis, the modal responses could be synthesized by using clustered line-spectrum modeling. The modeling error represented the source signature subject to the source characteristics being given by a finite impulse response. The modeling error can be interpreted as a remainder function necessary for the zeros representing the source signature. © 2011 Acoustical Society of America
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stress of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes, some reported by the Incorporated Research Institutions for Seismology (IRIS) and the KNSN and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations, close to the oil/gas fields in Kuwait. The earthquakes are generally small, and the triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that these local earthquakes most likely occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths of less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to cause damage to local structures built without seismic design criteria.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
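The time-interval idea can be illustrated with a simple sequential Bayesian update: each inter-arrival time multiplies the odds that an elevated-rate source is present by a likelihood ratio of exponential densities. The rates, prior odds and sample count below are arbitrary choices for the demo, not the reference system's parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
lam0, lam1 = 1.0, 2.0        # background vs. elevated count rate (counts/s)
prior_odds = 1.0 / 99.0      # a source being present is a priori unlikely

# Simulate inter-arrival times from a source that IS present (rate lam1).
intervals = rng.exponential(1.0 / lam1, 200)

# Sequential Bayesian update: each observed interval dt contributes a
# log likelihood ratio log(lam1/lam0) - (lam1 - lam0) * dt.
log_odds = np.log(prior_odds)
for dt in intervals:
    log_odds += np.log(lam1 / lam0) - (lam1 - lam0) * dt
post = 1.0 / (1.0 + np.exp(-log_odds))
print(f"posterior probability source present: {post:.3f}")
```

An online detector of the kind described would run one such update per energy region and raise an alarm when any posterior crosses a threshold chosen to balance Type I and Type II error rates.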
Life cycle assessment of a household solid waste source separation programme: a Swedish case study.
Bernstad, Anna; la Cour Jansen, Jes; Aspegren, Henrik
2011-10-01
The environmental impact of an extended property-close source-separation system for solid household waste (i.e., a system for collection of recyclables from domestic properties) is investigated in a residential area in southern Sweden. Since 2001, households have been able to source-separate waste into six fractions of dry recyclables, in addition to food waste sorting. The current system was evaluated using the EASEWASTE life cycle assessment tool. Current status is compared with an ideal scenario in which households display perfect source-separation behaviour and with a scenario without any material recycling. Results show that current recycling provides substantial environmental benefits compared to a non-recycling alternative. The environmental benefit varies greatly between recyclable fractions, and the recyclables currently most frequently source-separated by households are often not the most beneficial from an environmental perspective. With optimal source-separation of all recyclables, the current net contribution to global warming could be turned into a net avoidance, while the current avoidance of nutrient enrichment, acidification and photochemical ozone formation could be doubled. Sensitivity analyses show that the type of energy substituted by incineration of non-recycled waste, as well as the energy used in recycling processes and in the production of materials substituted by waste recycling, is highly relevant to the results attained.
Municipal solid waste source-separated collection in China: A comparative analysis
International Nuclear Information System (INIS)
Tai Jun; Zhang Weiqian; Che Yue; Feng Di
2011-01-01
A pilot program focusing on municipal solid waste (MSW) source-separated collection was launched in eight major cities throughout China in 2000. Detailed investigations were carried out and a comprehensive system was constructed to evaluate the effects of the eight-year implementation in those cities. This paper provides an overview of the different methods of collection, transportation, and treatment of MSW in the eight cities, as well as a comparative analysis of MSW source-separated collection in China. Information about the quantity and composition of MSW shows that the characteristics of MSW are similar: low calorific value, high moisture content and a high proportion of organic matter. Differences that exist among the eight cities in municipal solid waste management (MSWM) are presented in this paper. Only Beijing and Shanghai demonstrated relatively effective results in the implementation of MSW source-separated collection, while the six remaining cities performed poorly. Considering the current status of MSWM, source-separated collection should be a key priority, and a wider range of cities should participate in this program instead of merely the eight pilot cities. It is evident that an integrated MSWM system is urgently needed. Kitchen waste and recyclables are encouraged to be separated at the source. The stakeholders involved play an important role in MSWM, and their responsibilities should be clearly identified. Improvements in legislation, coordination mechanisms and public education are also needed to address the remaining problems.
Wallace, D. J.; Rosenheim, B. E.; Roberts, M. L.; Burton, J. R.; Donnelly, J. P.; Woodruff, J. D.
2014-12-01
Is a small quantity of high-precision ages more robust than a larger quantity of lower-precision ages for sediment core chronologies? AMS radiocarbon ages have been available to researchers for several decades now, and the precision of the technique has continued to improve. Analysis time and cost are high, though, and projects are often limited in the number of dates that can be used to develop a chronology. The Gas Ion Source at the National Ocean Sciences Accelerator Mass Spectrometry Facility (NOSAMS), while providing lower precision (uncertainty of order 100 14C y for a sample), is significantly less expensive and far less time consuming than conventional age dating and offers the unique opportunity for large numbers of ages. Here we couple two approaches, one analytical and one statistical, to investigate the utility of an age model comprised of these lower-precision ages for paleotempestology. We use a gas ion source interfaced to a gas-bench type device to generate radiocarbon dates approximately every 5 minutes, while determining the order of sample analysis using the published Bayesian accumulation histories for deposits (Bacon). During two day-long sessions, several dates were obtained from carbonate shells in living position in a sediment core comprised of sapropel gel from Mangrove Lake, Bermuda. Samples were prepared where large shells were available, and the order of analysis was determined by the depth with the highest uncertainty according to Bacon. We present the results of these analyses as well as a prognosis for a future where such age models can be constructed from many dates that are quickly obtained relative to conventional radiocarbon dates. This technique is currently limited to carbonates, but development of a system for organic material dating is underway. We will demonstrate the extent to which sacrificing some analytical precision in favor of more dates improves age models.
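The sample-ordering rule described (always date the depth where the age model is currently least certain) can be sketched as a greedy loop. The uncertainty profile and the shrinkage factors standing in for a Bacon re-run are invented for illustration:

```python
import numpy as np

# Hypothetical 1-sigma age uncertainties (yr) at 10 core depths, as a
# Bacon-style age-depth model might report before any new dates exist.
depths = np.arange(0, 100, 10)
sigma = np.array([60, 80, 120, 200, 260, 240, 180, 150, 90, 70], float)

order = []
s = sigma.copy()
for _ in range(4):                       # budget: four quick gas-source dates
    i = int(np.argmax(s))                # date where the model is least sure
    order.append(int(depths[i]))
    # Crude stand-in for re-running the age model: a new date collapses
    # uncertainty at that depth and reduces it at the neighbouring depths.
    s[i] *= 0.25
    if i > 0:
        s[i - 1] *= 0.5
    if i < len(s) - 1:
        s[i + 1] *= 0.5
print("dating order (cm):", order)
```

In the study itself, each pick would be followed by an actual Bacon run on the accumulated dates rather than this fixed shrinkage, but the greedy max-uncertainty selection is the same.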
Dutta, Rishabh; Jonsson, Sigurjon; Wang, Teng; Vasyura-Bathke, Hannes
2017-01-01
solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
Albert, Jim
2009-01-01
There has been a dramatic growth in the development and application of Bayesian inferential methods. Some of this growth is due to the availability of powerful simulation-based algorithms to summarize posterior distributions. There has been also a growing interest in the use of the system R for statistical analyses. R's open source nature, free availability, and large number of contributor packages have made R the software of choice for many statisticians in education and industry. Bayesian Computation with R introduces Bayesian modeling by the use of computation using the R language. The earl
Directory of Open Access Journals (Sweden)
Dan Yang
2017-04-01
Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case that the observed signals are under-determined, a novel approach for single channel blind source separation (SCBSS) based on an improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract the multi-fault features from the measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly including unsatisfactory convergence of TSSA in many cases, and the fact that the number of source signals is hard to estimate accurately. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm is applied to process the factor matrix using a first-order optimization approach instead of the original least squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition, singular value decomposition, and Bayesian information criterion) method, instead of the SVD in the conventional TSSA, is introduced. To validate the proposed method, we applied it to the analysis of a numerical simulation signal and multi-fault rolling bearing signals.
Gearbox Fault Diagnosis in a Wind Turbine Using Single Sensor Based Blind Source Separation
Directory of Open Access Journals (Sweden)
Yuning Qian
2016-01-01
This paper presents a single-sensor-based blind source separation approach, namely the wavelet-assisted stationary subspace analysis (WSSA), for gearbox fault diagnosis in a wind turbine. The continuous wavelet transform (CWT) is used as a preprocessing tool to decompose single-sensor measurement data into a set of wavelet coefficients, meeting the multidimensional requirement of stationary subspace analysis (SSA). SSA is a blind source separation technique that can separate multidimensional signals into stationary and nonstationary source components without requiring independence or prior information about the source signals. The separated nonstationary source component with the maximum kurtosis value is then analyzed by envelope spectral analysis to identify potential fault-related characteristic frequencies. Case studies performed on a wind turbine gearbox test system verify the effectiveness of the WSSA approach and indicate that it outperforms independent component analysis (ICA) and empirical mode decomposition (EMD), as well as spectral-kurtosis-based enveloping, for wind turbine gearbox fault diagnosis.
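The selection-then-demodulation step can be sketched as follows. This is a generic kurtosis-plus-envelope-spectrum illustration, not the WSSA pipeline itself: the CWT/SSA separation stage is replaced by two ready-made candidate components, and all signal parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

def kurtosis(x):
    """Normalized fourth moment; higher for impulsive signals."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

def envelope_spectrum(x, fs):
    """Magnitude spectrum of the Hilbert envelope (amplitude demodulation)."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    freqs = np.fft.rfftfreq(len(env), 1/fs)
    return freqs, np.abs(np.fft.rfft(env)) / len(env)

fs = 2000
t = np.arange(0, 1, 1/fs)
# Candidate "separated components": a smooth tone and an amplitude-
# modulated tone whose bursts repeat at a 30 Hz "fault" rate.
smooth = np.sin(2*np.pi*50*t)
bursts = (np.sin(2*np.pi*30*t) > 0).astype(float)   # 30 Hz on/off modulation
faulty = np.sin(2*np.pi*400*t) * (0.2 + bursts)
best = max([smooth, faulty], key=kurtosis)          # most impulsive component
freqs, spec = envelope_spectrum(best, fs)
fault_hz = freqs[np.argmax(spec[1:]) + 1]           # skip the DC bin
```

The envelope spectrum of the selected component peaks at the modulation (fault) rate rather than at the carrier frequency.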
Melesse Eshetu Moges; Daniel Todt; Arve Heistad
2018-01-01
Using a filter medium for organic matter removal and nutrient recovery from blackwater treatment is a novel concept and has not been investigated sufficiently to date. This paper demonstrates a combined blackwater treatment and nutrient-recovery strategy and establishes mechanisms for a more dependable source of plant nutrients aiming at a circular economy. Source-separated blackwater from a student dormitory was used as feedstock for a sludge blanket anaerobic-baffled reactor. The effluent f...
Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle
Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko
2015-01-01
Blind source separation is the computation underlying the cocktail party effect––a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeating training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle. PMID:26690814
Directory of Open Access Journals (Sweden)
Yalin Yuan
2014-12-01
A source separation program for household kitchen waste has been in place in Beijing since 2010. However, the participation rate of residents is far from satisfactory. This study was carried out to identify residents' preferences for an improved management strategy for household kitchen waste source separation. We determine the preferences of residents in an ad hoc sample, by age group, for source separation services, and their marginal willingness to accept compensation for the service attributes. We used a multinomial logit model to analyze data collected from 394 residents in the Haidian and Dongcheng districts of Beijing through a choice experiment. The results show that preferences for the service attributes differ among young, middle-aged, and older residents. Low compensation is not a major factor in persuading young and middle-aged residents to accept the proposed separation services. On average, however, most of them prefer services with frequent, evening, and plastic-bag attributes, and without an instructor. This study indicates that there is potential for local government to improve the current separation services accordingly.
Lockhart, K.; Harter, T.; Grote, M.; Young, M. B.; Eppich, G.; Deinhart, A.; Wimpenny, J.; Yin, Q. Z.
2014-12-01
Groundwater quality is a concern in alluvial aquifers underlying agricultural areas worldwide, an example of which is the San Joaquin Valley, California. Nitrate from land-applied fertilizers or from animal waste can leach to groundwater and contaminate drinking water resources. Dairy manure and synthetic fertilizers are the major sources of nitrate in groundwater in the San Joaquin Valley; however, septic waste can be a major source in some areas. As in other such regions around the world, the rural population in the San Joaquin Valley relies almost exclusively on shallow domestic wells (≤150 m deep), of which many have been affected by nitrate. Consumption of water containing nitrate above the drinking water limit has been linked to major health effects including low blood oxygen in infants and certain cancers. Knowledge of the proportion of each of the three main nitrate sources (manure, synthetic fertilizer, and septic waste) contributing to individual well nitrate can aid future regulatory decisions. Nitrogen, oxygen, and boron isotopes can be used as tracers to differentiate between the three main nitrate sources. Mixing models quantify the proportional contributions of sources to a mixture by using the concentration of conservative tracers within each source as a source signature. Deterministic mixing models are common, but do not allow for variability in the tracer source concentration or overlap of tracer concentrations between sources. Bayesian statistics used in conjunction with mixing models can incorporate variability in the source signature. We developed a Bayesian mixing model on a pilot network of 32 private domestic wells in the San Joaquin Valley for which nitrate as well as nitrogen, oxygen, and boron isotopes were measured. Probability distributions for nitrogen, oxygen, and boron isotope source signatures for manure, fertilizer, and septic waste were compiled from the literature and from a previous groundwater monitoring project on several
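The Bayesian mixing idea can be sketched with a toy two-tracer, three-source model and a random-walk Metropolis sampler. The isotope signatures, measurement error, and priors below are invented placeholders (ordered manure, fertilizer, septic), not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (d15N, d11B) source signature means and sds for
# manure, fertilizer, and septic waste -- illustrative numbers only.
mu = np.array([[12.0, 25.0], [2.0, 5.0], [7.0, 0.0]])
sd = np.array([[2.0, 5.0], [1.5, 3.0], [2.0, 4.0]])
obs = np.array([8.5, 15.0])        # tracer values measured in one well

def log_post(z):
    f = np.exp(z - z.max()); f /= f.sum()   # softmax -> simplex proportions
    m = f @ mu                              # mixture mean per tracer
    v = (f**2) @ (sd**2) + 0.5**2           # signature variance + meas. error
    # Gaussian likelihood per tracer plus a weak N(0,1) prior on z.
    return -0.5 * np.sum((obs - m)**2 / v + np.log(v)) - 0.5 * np.sum(z**2)

z = np.zeros(3); lp = log_post(z); samples = []
for i in range(20000):
    zp = z + 0.3 * rng.standard_normal(3)   # random-walk proposal
    lpp = log_post(zp)
    if np.log(rng.random()) < lpp - lp:
        z, lp = zp, lpp
    if i > 5000:                            # discard burn-in
        f = np.exp(z - z.max()); samples.append(f / f.sum())

f_mean = np.mean(samples, axis=0)  # posterior mean source proportions
```

For these made-up signatures the exact-fit mixture is roughly (0.55, 0.25, 0.20), so the posterior mean should place more weight on the first source than the second, shrunk toward uniform by the prior.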
Survival of enteric bacteria in source-separated human urine used ...
African Journals Online (AJOL)
MAKAYA
Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions
Directory of Open Access Journals (Sweden)
Zhang Yimin
2006-01-01
Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over BSS methods based on second-order statistics when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, two stages commonly required in many BSS methods, to make BSS performance robust to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more of the techniques proposed in this paper, improved blind separation of nonstationary signals can be achieved.
Source Separation and Treatment of Anthropogenic Urine (WERF Report INFR4SG09b)
Abstract: Anthropogenic urine, although only 1% of domestic wastewater flow, is responsible for 50-80% of the nutrients and a substantial portion of the pharmaceuticals and hormones present in the influent to wastewater treatment plants. Source separation and treatment of urine...
Technologies for the treatment of source-separated urine in the ...
African Journals Online (AJOL)
Technologies for the treatment of source-separated urine in the eThekwini ... This practice can lead to environmental pollution, since urine contains high amounts of ... produces only distilled water and a small amount of sludge as by-products.
Basic studies of a gas-jet-coupled ion source for on-line isotope separation
International Nuclear Information System (INIS)
Anderl, R.A.; Novick, V.J.; Greenwood, R.C.
1980-01-01
A hollow-cathode ion source was used in a gas-jet-coupled configuration to produce ion beams of fission products transported to it from a 252Cf fission source. Solid aerosols of NaCl and Ag were used effectively as activity carriers in the gas-jet system. Flat-plate skimmers provided an effective coupling of the ion source to the gas jet. Ge(Li) spectrometric measurements of the activity deposited on an ion-beam collector, relative to that deposited on a pre-skimmer collector, were used to obtain separation efficiencies ranging from 0.1% to >1% for Sr, Y, Tc, Te, Cs, Ba, Ce, Pr, Nd and Sm. The use of CCl4 as a support gas resulted in a significant enhancement of the alkaline-earth and rare-earth separation efficiencies.
Development of the high temperature ion-source for the Grenoble electromagnetic isotope separator
International Nuclear Information System (INIS)
Bouriant, M.
1968-01-01
The production of high-purity stable or radioactive isotopes (≥ 99.99 per cent) by electromagnetic separation requires equipment with a high resolving power. Moreover, in order to collect rare or short half-life isotopes, the efficiency of the ion source must be high (η > 5 to 10 per cent). With this in view, the source built operates at high temperatures (2500-3000 C) and makes use of ionisation by electron bombardment or of thermo-ionisation. The first part of this work summarizes the essential characteristics of isotope-separator ion sources; a schematic diagram of the source built is then given, together with its characteristics. The second part gives the measured resolving power and efficiency of the Grenoble isotope separator fitted with such a source. The resolving power, measured at 10 per cent of the peak height, is of the order of 200. At the first magnetic stage the efficiency is between 1 and 26 per cent for a range of elements evaporating between 200 and 3000 C. Thus equipped, the separator has, for example, given at the first stage 10 mg of 180Hf at (99.69 ± 0.1) per cent, corresponding to an enrichment coefficient of 580; recently 2 mg of 150Nd at (99.996 ± 0.002) per cent, corresponding to an enrichment coefficient of 4.2 × 10⁵, was obtained at the second stage. (author) [fr
Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Xiang, Kui
2016-01-01
Simultaneous-source shooting can help tremendously shorten the acquisition period and improve the quality of seismic data for better subsalt seismic imaging, but at the expense of introducing strong interference (blending noise) into the acquired seismic data. We propose to use a structure-oriented median filter to attenuate the blending noise along the structural direction of seismic profiles. The principle of the proposed approach is to first flatten the seismic record in local spatial windows and then to apply a traditional median filter (MF) along the flattened dimension. The key component of the proposed approach is the estimation of the local slope, which can be calculated by first scanning the NMO velocity and then converting the velocity to local slope. Both synthetic and field data examples show that the proposed approach can successfully separate simultaneous-source data into individual sources. We provide an open-source toy example to better demonstrate the proposed methodology.
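A minimal sketch of the flatten/median/unflatten idea, assuming the local slopes are already known as integer per-trace shifts (the paper estimates them from NMO velocity scanning, and its filter operates on 3D records):

```python
import numpy as np

def so_median_deblend(data, shifts, aperture=5):
    """Median-filter each time sample across neighboring traces after
    flattening with per-trace shifts (a stand-in for the local-slope
    flattening described in the paper)."""
    nt, nx = data.shape
    # Flatten: align the coherent event across traces.
    flat = np.array([np.roll(data[:, i], -shifts[i]) for i in range(nx)]).T
    out = np.empty_like(flat)
    half = aperture // 2
    for i in range(nx):
        lo, hi = max(0, i - half), min(nx, i + half + 1)
        out[:, i] = np.median(flat[:, lo:hi], axis=1)
    # Unflatten: restore the original structural dip.
    return np.array([np.roll(out[:, i], shifts[i]) for i in range(nx)]).T

# A dipping coherent event plus sparse blending-noise spikes.
nt, nx = 200, 30
shifts = np.arange(nx) // 3
clean = np.zeros((nt, nx))
for i in range(nx):
    clean[100 + shifts[i], i] = 1.0
rng = np.random.default_rng(2)
noisy = clean.copy()
noisy[rng.integers(0, nt, 40), rng.integers(0, nx, 40)] += 3.0
denoised = so_median_deblend(noisy, shifts)
```

Because the flattened event is laterally consistent while the blending spikes are not, the lateral median keeps the event and rejects the spikes.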
Iterative algorithm for joint zero diagonalization with application in blind source separation.
Zhang, Wei-Tao; Lou, Shun-Tian
2011-07-01
A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.
Efficient image enhancement using sparse source separation in the Retinex theory
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
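For contrast, the classic single-scale Retinex (a simple physics-based variant, not the sparse-source-separation method discussed here) removes a smooth illumination estimate in the log domain; the scene and parameters below are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=30):
    """Classic single-scale Retinex: subtract a smoothed log-image
    (illumination estimate) from the log-image, keeping reflectance."""
    log_img = np.log1p(img.astype(float))
    illumination = gaussian_filter(log_img, sigma)
    return log_img - illumination

# Synthetic scene: reflectance stripes under a smooth left-to-right
# illumination gradient.
x = np.linspace(0, 1, 256)
illum = np.outer(np.ones(256), 0.2 + 0.8 * x)           # dark -> bright
refl = np.where((np.arange(256) % 64) < 32, 0.3, 0.9)   # stripes
scene = illum * refl[None, :]
out = single_scale_retinex(scene)
```

The output largely cancels the global illumination gradient while keeping the local reflectance edges, which is the color-constancy behavior the Retinex theory models.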
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic...
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Koldovský, Zbyněk
2011-01-01
Roč. 59, č. 3 (2011), s. 1037-1047 ISSN 1053-587X R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords: blind source separation * tensor decomposition * Cramer-Rao lower bound Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.628, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-0356666.pdf
Cohen, Michael X; Gulbinaite, Rasa
2017-02-15
Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results.
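The core of such denoising source separation can be sketched as a generalized eigendecomposition of a narrow-band covariance against a broadband reference covariance. This is a toy reconstruction of the idea, not the authors' Matlab scripts; the channel count, frequencies, and filter are invented.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
fs, n = 500, 10000
t = np.arange(n) / fs
ssep = np.sin(2 * np.pi * 12 * t)        # 12 Hz steady-state response
mix = rng.standard_normal(8)             # projection to 8 channels
X = np.outer(mix, ssep) + rng.standard_normal((8, n))   # plus sensor noise

def bandpass_fft(X, fs, lo, hi):
    """Crude FFT brick-wall band-pass; adequate for this illustration."""
    F = np.fft.rfft(X, axis=1)
    f = np.fft.rfftfreq(X.shape[1], 1 / fs)
    F[:, (f < lo) | (f > hi)] = 0
    return np.fft.irfft(F, n=X.shape[1], axis=1)

# RESS-style spatial filter: generalized eigendecomposition of the
# narrow-band (signal) covariance against the broadband (reference) one.
Xs = bandpass_fft(X, fs, 11, 13)
S = Xs @ Xs.T / n
R = X @ X.T / n
evals, evecs = eigh(S, R)                # ascending eigenvalues
w = evecs[:, -1]                         # filter maximizing the S/R ratio
component = w @ X
```

The resulting single component concentrates the 12 Hz response that is spread across all channels, which is what gives the method its signal-to-noise advantage over picking the best single electrode.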
Ion source development for the on-line isotope separator at GSI
International Nuclear Information System (INIS)
Kirchner, R.; Burkard, K.; Hueller, W.; Klepper, O.
1991-08-01
Progress in the understanding of ion sources for isotope separation on-line and the feasibility of bunched beams of relatively refractory elements are reported. The ultra-high-temperature FEBIAD-H ion source, which facilitates the mounting of catchers and window compared to the earlier F-version, enables bunched beams of elements with adsorption enthalpies up to almost 6 eV, e.g. of Be, Al, Ca, Cr, Fe, Co, Ni, Sr, Pd, Ba, Yb, and Au. In this way chemical selectivity for these elements may also be achieved, at least to some extent, for isotopes with half-lives ≳ 1 minute, including especially the difficult separation of alkaline-earth isotopes from isobaric alkalis. These studies reveal, however, a principal difficulty in the on-line separation of refractory elements, namely their tendency, increasing with ΔH_a, to re-diffuse after release from the catcher into the bulk of the hot source enclosure. (orig.)
Concepts and Criteria for Blind Quantum Source Separation and Blind Quantum Process Tomography
Directory of Open Access Journals (Sweden)
Alain Deville
2017-07-01
Blind Source Separation (BSS) is an active domain of Classical Information Processing, with well-identified methods and applications. The development of Quantum Information Processing has made possible the appearance of Blind Quantum Source Separation (BQSS), with a recent extension towards Blind Quantum Process Tomography (BQPT). This article investigates the use of several fundamental quantum concepts in the BQSS context and establishes properties already used without justification in that context. It mainly considers a pair of electron spins initially prepared separately in pure states and then submitted to an undesired exchange coupling between these spins. Some consequences of the existence of the entanglement phenomenon, and of the probabilistic aspect of quantum measurements, for BQSS solutions are discussed. An unentanglement criterion is established for the state of an arbitrary qubit pair, expressed first with probability amplitudes and secondly with probabilities. The interest of using the concept of a random quantum state in the BQSS context is presented. It is stressed that the concept of statistical independence of the sources, widely used in classical BSS, should be used with care in BQSS, and possibly replaced by some disentanglement principle. It is shown that the coefficients of the expansion of any qubit-pair pure state over the states of an orthonormal basis can be expressed with the probabilities of results in measurements of well-chosen spin components.
An approach for evaluating the effects of source separation on municipal solid waste management
Energy Technology Data Exchange (ETDEWEB)
Tanskanen, J.H. [Finnish Environment Institute, Helsinki (Finland)
2000-07-01
An approach was developed for integrated analysis of recovery rates, waste streams, costs and emissions of municipal solid waste management (MSWM). The approach differs from most earlier models used in the strategic planning of MSWM because of a comprehensive analysis of on-site collection systems of waste materials separated at source for recovery. As a result, the recovery rates and sizes of waste streams can be calculated on the basis of the characteristics of separation strategies instead of giving them as input data. The modelling concept developed can also be applied in other regions, municipalities and districts. This thesis consists of four case studies. Three of these were performed to test the approach developed and to evaluate the effects of separation on MSWM in Finland. In these case studies the approach was applied for modelling: (1) Finland's national separation strategy for municipal solid waste, (2) the effects of separation on MSWM systems in the Helsinki region and (3) the efficiency of various waste collection methods in the Helsinki region. The models developed for these three case studies are static and linear simulation models which were constructed in the format of an Excel spreadsheet. In addition, a new version of the original Swedish MIMES/Waste model was constructed and applied in one of the case studies. The case studies proved that the approach is an applicable tool for various research settings and circumstances in the strategic planning of MSWM. The following main results were obtained from the case studies: A high recovery rate level (around 70 %wt) can be achieved in MSWM without incineration; Central sorting of mixed waste must be included in Finland's national separation strategy in order to reach the recovery rate targets of 50 %wt (year 2000) and 70 %wt (year 2005) adopted for municipal solid waste in the National Waste Plan. The feasible source separation strategies result in recovery rates around 35-40 %wt with the
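The central modeling idea, computing recovery rates from the characteristics of a separation strategy rather than taking them as input, reduces to a mass balance over material fractions. A minimal sketch with invented composition and capture-efficiency figures:

```python
# Recovery rate derived from a separation strategy's characteristics:
# waste-stream composition (mass fractions) times the per-material
# capture efficiency of the chosen collection system. All numbers
# below are illustrative, not from the Finnish case studies.
composition = {"bio": 0.35, "paper": 0.30, "glass": 0.05,
               "metal": 0.03, "other": 0.27}
capture = {"bio": 0.6, "paper": 0.8, "glass": 0.9,
           "metal": 0.7, "other": 0.0}

recovery_rate = sum(composition[m] * capture[m] for m in composition)
```

Varying the capture efficiencies per collection method is then enough to compare strategies against targets such as the 50 %wt and 70 %wt recovery rates mentioned above.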
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, M.; Stohl, A.
2017-01-01
Roč. 17, č. 20 (2017), s. 12677-12696 ISSN 1680-7316 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Bayesian inverse modeling * iodine-131 * consequences of the iodine release Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 5.318, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/tichy-0480506.pdf
Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method
Hultgren, Lennart S.; Arechiga, Rene O.
2016-01-01
Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.
Energy Technology Data Exchange (ETDEWEB)
Pennline, H.W.; Luebke, D.R.; Jones, K.L.; Morsi, B.I. (Univ. of Pittsburgh, PA); Heintz, Y.J. (Univ. of Pittsburgh, PA); Ilconich, J.B. (Parsons)
2007-06-01
The capture/separation step for carbon dioxide (CO2) from large-point sources is a critical one with respect to the technical feasibility and cost of the overall carbon sequestration scenario. For large-point sources, such as those found in power generation, the carbon dioxide capture techniques being investigated by the in-house research area of the National Energy Technology Laboratory possess the potential for improved efficiency and reduced costs as compared to more conventional technologies. The investigated techniques can have wide applications, but the research has focused on capture/separation of carbon dioxide from flue gas (post-combustion from fossil fuel-fired combustors) and from fuel gas (precombustion, such as integrated gasification combined cycle or IGCC). With respect to fuel gas applications, novel concepts are being developed in wet scrubbing with physical absorption; chemical absorption with solid sorbents; and separation by membranes. In one concept, a wet scrubbing technique is being investigated that uses a physical solvent process to remove CO2 from fuel gas of an IGCC system at elevated temperature and pressure. The need to define an ideal solvent has led to the study of the solubility and mass transfer properties of various solvents. Pertaining to another separation technology, fabrication techniques and mechanistic studies for membranes separating CO2 from the fuel gas produced by coal gasification are also being performed. Membranes that consist of CO2-philic ionic liquids encapsulated into a polymeric substrate have been investigated for permeability and selectivity. Finally, dry, regenerable processes based on sorbents are additional techniques for CO2 capture from fuel gas. An overview of these novel techniques is presented along with a research progress status of technologies related to membranes and physical solvents.
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
A chemically selective laser ion source for the on-line isotope separation
International Nuclear Information System (INIS)
Scheerer, F.
1993-03-01
In this thesis a laser ion source is presented. In a hot chamber, the atoms of the elements to be studied are stepwise excited and ionized resonantly by the light of pulsed dye lasers, which are pumped by pulsed copper-vapor lasers with an extremely high pulse repetition rate (ν_rep ∼ 10 kHz). The storage of the atoms in a hot chamber and the high pulse repetition rate of the copper-vapor lasers allow the required high efficiency (ε ∼ 10%) to be reached. First preparatory measurements were performed at the off-line separator at CERN with the rare-earth elements ytterbium and thulium. Building on the results of these measurements, further tests of the laser ion source were performed at the on-line separator with neutron-deficient ytterbium isotopes produced in a thick tantalum target. Using a time-of-flight mass spectrometer in Mainz, an efficient excitation scheme for the resonance ionization of tin was found. This excitation scheme is a prerequisite for an experiment at GSI on the production of the extremely neutron-deficient, short-lived nucleus 102Sn. As a first application of the newly developed laser ion source, an astrophysically relevant experiment on the nuclear spectroscopy of the neutron-rich silver isotopes 124-129Ag is planned at the PSB-ISOLDE at CERN for the summer of 1993. Because conventional ion sources lack the necessary selectivity, this experiment can only be performed by means of the laser ion source presented here. The laser ion source is also to be used at the PSB-ISOLDE in 1993 for the selective ionization of manganese. (orig./HSI) [de
Resonance ionization laser ion sources for on-line isotope separators (invited)
International Nuclear Information System (INIS)
Marsh, B. A.
2014-01-01
A Resonance Ionization Laser Ion Source (RILIS) is today considered an essential component of the majority of Isotope Separator On Line (ISOL) facilities; there are seven laser ion sources currently operational at ISOL facilities worldwide and several more are under development. The ionization mechanism is a highly element selective multi-step resonance photo-absorption process that requires a specifically tailored laser configuration for each chemical element. For some isotopes, isomer selective ionization may even be achieved by exploiting the differences in hyperfine structures of an atomic transition for different nuclear spin states. For many radioactive ion beam experiments, laser resonance ionization is the only means of achieving an acceptable level of beam purity without compromising isotope yield. Furthermore, by performing element selection at the location of the ion source, the propagation of unwanted radioactivity downstream of the target assembly is reduced. Whilst advances in laser technology have improved the performance and reliability of laser ion sources and broadened the range of suitable commercially available laser systems, many recent developments have focused rather on the laser/atom interaction region in the quest for increased selectivity and/or improved spectral resolution. Much of the progress in this area has been achieved by decoupling the laser ionization from competing ionization processes through the use of a laser/atom interaction region that is physically separated from the target chamber. A new application of gas catcher laser ion source technology promises to expand the capabilities of projectile fragmentation facilities through the conversion of otherwise discarded reaction fragments into high-purity low-energy ion beams. A summary of recent RILIS developments and the current status of laser ion sources worldwide is presented
Energy Technology Data Exchange (ETDEWEB)
Biollaz, S; Ludwig, Ch; Stucki, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
A literature search was carried out to determine sources and speciation of heavy metals in MSW. A combination of thermal and mechanical separation techniques is necessary to achieve the required high degrees of metal separation. Metallic goods should be separated mechanically, chemically bound heavy metals by a thermal process. (author) 1 fig., 1 tab., 6 refs.
Generation of dynamo waves by spatially separated sources in the Earth and other celestial bodies
Popova, E.
2017-12-01
The amplitude and spatial configuration of planetary and stellar magnetic fields can change over the years. Celestial bodies can have cyclic, chaotic, or time-independent magnetic activity, which is connected with a dynamo mechanism. This mechanism is based on the joint influence of the alpha-effect and differential rotation. Dynamo sources can be located at different depths (active layers) of the celestial body and can have different intensities. This concept admits different forms of solutions, some of which include waves propagating inside the celestial body. We show analytically that, in the case of spatially separated sources of magnetic field, each source generates a wave whose frequency depends on the physical parameters of that source. We estimate the source parameters required for the generation of nondecaying waves, and we discuss the structure of such sources and the motion of matter (including meridional circulation) in the liquid outer core of the Earth and in the active layers of other celestial bodies.
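The claim that each spatially separated source generates a wave at its own frequency can be illustrated with the classical Parker αΩ-dynamo scaling (a textbook mean-field result, not a formula quoted from this paper): each active layer k, with its own alpha-effect amplitude, angular-velocity contrast, thickness, and diffusivity, sets its own dynamo number and hence its own wave frequency.

```latex
% Standard mean-field scaling for a Parker migratory dynamo wave:
% layer k has dynamo number D_k and wave frequency \omega_k.
D_k = \frac{\alpha_k \,\Delta\Omega_k \, R_k^3}{\eta_k^2},
\qquad
\omega_k \sim \frac{\eta_k}{R_k^2}\,\sqrt{|D_k|}
```

Here \alpha_k is the alpha-effect amplitude, \Delta\Omega_k the differential rotation across the layer, R_k the layer thickness, and \eta_k the turbulent magnetic diffusivity; two layers with different parameter sets therefore emit waves at distinct frequencies.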
Investigation of gas discharge ion sources for on-line mass separation
International Nuclear Information System (INIS)
Kirchner, R.
1976-03-01
The development of efficient gas discharge ion sources with axial beam extraction for on-line mass separation is described. The aim of the investigation was to increase the ion source temperature, lifetime and ionisation yield compared with existing low-pressure arc discharge ion sources, and to reduce the ion current density from the usual 1 to 100 mA/cm². In all ion sources the pressure range below the minimum ignition pressure of the arc discharge was investigated. As a result, an ion source was developed which, with small changes to the geometry and electrical circuit of a Nielsen source, operates stably and without ignition difficulties down to 10⁻⁵ Torr with a high ionization yield (up to 50% for xenon). At a typical pressure of 3 × 10⁻⁵ Torr the ion current and ion current density are about 1 μA and 0.1 mA/cm² respectively, together with a high yield and a large emission aperture (1.2 mm diameter). (orig.) [de]
ZHOU, Lin
1996-01-01
In this paper I consider social choices under uncertainty. I prove that any social choice rule that satisfies independence of irrelevant alternatives, translation invariance, and weak anonymity is consistent with ex post Bayesian utilitarianism
Blind source separation of ex-vivo aorta tissue multispectral images.
Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson
2015-05-01
Blind Source Separation (BSS) methods aim at the decomposition of a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also allow for the recovery of the spectral absorbance of the main tissue components. These spectral signatures were compared against theoretical ones using correlation coefficients, which reach values close to 0.9, a good indicator of the method's performance. The correlation coefficients also lead to the identification of the concentration maps according to the evaluated chromophore. The results suggest that multi/hyperspectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue.
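As a concrete sketch of the second algorithm mentioned above, the following runs non-negative matrix factorization by Lee-Seung multiplicative updates on synthetic two-chromophore data; the dimensions, random data, and update rule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: 2 chromophore spectra (8 bands), 100 pixels
S_true = np.abs(rng.normal(1.0, 0.3, size=(2, 8)))    # source spectra
A_true = np.abs(rng.normal(1.0, 0.5, size=(100, 2)))  # concentration maps
X = A_true @ S_true                                   # observed absorbances

# NMF via Lee-Seung multiplicative updates: X ~ A @ S with A, S >= 0
k = 2
A = np.abs(rng.normal(size=(100, k))) + 1e-3
S = np.abs(rng.normal(size=(k, 8))) + 1e-3
for _ in range(500):
    S *= (A.T @ X) / (A.T @ A @ S + 1e-9)
    A *= (X @ S.T) / (A @ S @ S.T + 1e-9)

# Rows of S estimate chromophore spectra; columns of A, concentration maps
rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
```

Correlating the recovered rows of S against reference spectra, as the authors do, then identifies which component corresponds to which chromophore.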
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.
Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.
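The central point — that standard ICA contrasts are blind to sample order, which PWC-ICA addresses by building complex observations from consecutive samples — can be sketched as follows. The pairing used here is an illustrative guess; the released MATLAB toolbox defines the exact mapping.

```python
import numpy as np

rng = np.random.default_rng(1)
sources = rng.laplace(size=(2, 1000))               # super-Gaussian sources
mixed = np.array([[1.0, 0.5], [0.3, 1.0]]) @ sources

def excess_kurtosis(v):
    """A typical ICA contrast; it depends only on the marginal distribution."""
    v = (v - v.mean()) / v.std()
    return (v ** 4).mean() - 3.0

# Permuting the samples leaves the contrast unchanged: plain ICA is order-blind.
perm = rng.permutation(mixed.shape[1])
order_blind = np.isclose(excess_kurtosis(mixed[0]),
                         excess_kurtosis(mixed[0][perm]))

# PWC-ICA instead forms complex observations from consecutive samples, so
# temporal order enters the decomposition (illustrative pairing):
z = mixed[:, :-1] + 1j * np.diff(mixed, axis=1)
```

A complex-valued ICA would then be run on `z` and the result reinterpreted in the original real observation space, as the abstract describes.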
Carbon dioxide capture and separation techniques for advanced power generation point sources
Energy Technology Data Exchange (ETDEWEB)
Pennline, H.W.; Luebke, D.R.; Morsi, B.I.; Heintz, Y.J.; Jones, K.L.; Ilconich, J.B.
2006-09-01
The capture/separation step for carbon dioxide (CO2) from large-point sources is a critical one with respect to the technical feasibility and cost of the overall carbon sequestration scenario. For large-point sources, such as those found in power generation, the carbon dioxide capture techniques being investigated by the in-house research area of the National Energy Technology Laboratory possess the potential for improved efficiency and costs as compared to more conventional technologies. The investigated techniques can have wide applications, but the research has focused on capture/separation of carbon dioxide from flue gas (postcombustion from fossil fuel-fired combustors) and from fuel gas (precombustion, such as integrated gasification combined cycle – IGCC). With respect to fuel gas applications, novel concepts are being developed in wet scrubbing with physical absorption; chemical absorption with solid sorbents; and separation by membranes. In one concept, a wet scrubbing technique is being investigated that uses a physical solvent process to remove CO2 from fuel gas of an IGCC system at elevated temperature and pressure. The need to define an ideal solvent has led to the study of the solubility and mass transfer properties of various solvents. Fabrication techniques and mechanistic studies for hybrid membranes separating CO2 from the fuel gas produced by coal gasification are also being performed. Membranes that consist of CO2-philic silanes incorporated into an alumina support or ionic liquids encapsulated into a polymeric substrate have been investigated for permeability and selectivity. An overview of two novel techniques is presented along with a research progress status of each technology.
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Bayesian Exponential Smoothing.
Forbes, C.S.; Snyder, R.D.; Shami, R.S.
2000-01-01
In this paper, a Bayesian version of the exponential smoothing method of forecasting is proposed. The approach is based on a state space model containing only a single source of error for each time interval. This model allows us to improve current practices surrounding exponential smoothing by providing both point predictions and measures of the uncertainty surrounding them.
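The single-source-of-error state-space form behind this approach can be sketched in its classical (non-Bayesian) version; the paper's contribution is to place priors on the parameters, but the recursions and the growth of forecast uncertainty with horizon are the same. The code and data below are illustrative, not from the paper.

```python
import numpy as np

def ses_forecast(y, alpha, h):
    """Simple exponential smoothing in single-source-of-error form:
       y_t = l_{t-1} + e_t,  l_t = l_{t-1} + alpha * e_t."""
    level = y[0]
    errors = []
    for obs in y[1:]:
        e = obs - level          # the single error drives both equations
        errors.append(e)
        level += alpha * e
    sigma2 = np.mean(np.square(errors))
    # The h-step point forecast is flat; its variance grows with horizon.
    var_h = sigma2 * (1.0 + (h - 1) * alpha ** 2)
    return level, np.sqrt(var_h)

y = np.array([10.0, 10.5, 9.8, 10.2, 10.4, 9.9, 10.1])
point, sd = ses_forecast(y, alpha=0.3, h=3)
```

The point forecast and standard deviation together give the "point predictions and measures of the uncertainty surrounding them" that the abstract refers to.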
DEFF Research Database (Denmark)
Glover, Kevin A.; Hansen, Michael Møller; Skaala, Oystein
2009-01-01
44 cages located on 26 farms in the Hardangerfjord, western Norway. This fjord represents one of the major salmon farming areas in Norway, with a production of 57,000 t in 2007. Based upon genetic data from 17 microsatellite markers, significant but highly variable differentiation was observed among....... Accuracy of assignment varied greatly among the individual samples. For the Bayesian clustered data set consisting of five genetic groups, overall accuracy of self-assignment was 99%, demonstrating the effectiveness of this strategy to significantly increase accuracy of assignment, albeit at the expense...
DEFF Research Database (Denmark)
Larsen, Anna Warberg; Astrup, Thomas
2011-01-01
variations between emission factors for different incinerators, but the background for these variations has not been thoroughly examined. One important reason may be variations in collection of recyclable materials as source separation alters the composition of the residual waste incinerated. The objective...... routed to incineration. Emission factors ranged from 27 to 40kg CO2/GJ. The results appeared most sensitive towards variations in waste composition and water content. Recycling rates and lower heating values could not be used as simple indicators of the resulting emission factors for residual household...... different studies and when using the values for environmental assessment purposes....
In vivo time-gated diffuse correlation spectroscopy at quasi-null source-detector separation.
Pagliazzi, M; Sekar, S Konugolu Venkata; Di Sieno, L; Colombo, L; Durduran, T; Contini, D; Torricelli, A; Pifferi, A; Mora, A Dalla
2018-06-01
We demonstrate time domain diffuse correlation spectroscopy at quasi-null source-detector separation by using a fast time-gated single-photon avalanche diode without the need of time-tagging electronics. This approach allows for increased photon collection, simplified real-time instrumentation, and reduced probe dimensions. Depth discriminating, quasi-null distance measurement of blood flow in a human subject is presented. We envision the miniaturization and integration of matrices of optical sensors of increased spatial resolution and the enhancement of the contrast of local blood flow changes.
Light mesons and separated J(PC) sources in N-antiN annihilations at rest
International Nuclear Information System (INIS)
Lucaci-Timoce, Angela; Lazanu, I.
2002-01-01
N-antiN annihilations at rest and in flight are among the most productive physics tools for investigating the spectroscopy of light mesons. The annihilations proceed from several sources of initial states with different J(PC) quantum numbers. This makes the interpretation of the experimental data much more difficult than in the case of other annihilation processes, for example e⁺e⁻ annihilation, and introduces ambiguities in the results. In this talk, the possibilities for separating the N-antiN contributions of initial quantum states in different annihilation channels are analyzed. The interference phenomena that appear are also discussed. (authors)
Bagnardi, M.; Hooper, A. J.
2017-12-01
Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform
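A minimal sketch of the sampling engine described here — random-walk Metropolis-Hastings with an adaptive step size — applied to a toy one-dimensional "posterior". GBIS itself samples multi-parameter deformation sources; the target function, tuning constants, and names below are illustrative assumptions.

```python
import numpy as np

def adaptive_mh(logpost, x0, n=5000, target=0.3, seed=0):
    """Random-walk Metropolis-Hastings with a step size adapted toward
    a target acceptance rate (same spirit as GBIS, on a toy problem)."""
    rng = np.random.default_rng(seed)
    x, step, samples, accepted = x0, 1.0, [], 0
    lp = logpost(x)
    for i in range(1, n + 1):
        prop = x + step * rng.normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
            accepted += 1
        samples.append(x)
        if i % 100 == 0:                           # adapt the step size:
            rate = accepted / i                    # grow it when accepting
            step *= np.exp(rate - target)          # too often, else shrink
    return np.array(samples)

# Toy 'posterior' for a single source parameter: Gaussian, mean 4, sd 0.5
s = adaptive_mh(lambda v: -0.5 * ((v - 4.0) / 0.5) ** 2, x0=0.0)
```

The sample histogram approximates the posterior, giving both the best-fitting parameter and its uncertainty, which is the point of the probabilistic approach described above.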
Sources of variability among replicate samples separated by two-dimensional gel electrophoresis.
Bland, Alison M; Janech, Michael G; Almeida, Jonas S; Arthur, John M
2010-04-01
Two-dimensional gel electrophoresis (2DE) offers high-resolution separation for intact proteins. However, variability in the appearance of spots can limit the ability to identify true differences between conditions. Variability can occur at a number of levels. Individual samples can differ because of biological variability. Technical variability can occur during protein extraction, processing, or storage. Another potential source of variability occurs during analysis of the gels and is not a result of any of the causes of variability named above. We performed a study designed to focus only on the variability caused by analysis. We separated three aliquots of rat left ventricle and analyzed differences in protein abundance on the replicate 2D gels. As the samples loaded on each gel were identical, differences in protein abundance are caused by variability in separation or interpretation of the gels. Protein spots were compared across gels by quantile values to determine differences. Fourteen percent of spots had a maximum difference in intensity of 0.4 quantile values or more between replicates. We then looked individually at the spots to determine the cause of differences between the measured intensities. Reasons for differences were: failure to identify a spot (59%), differences in spot boundaries (13%), differences in peak height (6%), and a combination of these factors (21%). This study demonstrates that spot identification and characterization make major contributions to variability seen with 2DE. Methods to highlight why measured protein spot abundance is different could reduce these errors.
Blind Source Separation and Dynamic Fuzzy Neural Network for Fault Diagnosis in Machines
International Nuclear Information System (INIS)
Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli
2015-01-01
Many assessment and detection methods are used to diagnose faults in machines. High accuracy in fault detection and diagnosis can be achieved by using numerical methods with noise-resistant properties. However, to some extent, noise always exists in measured data on real machines, which affects the identification results, especially in the diagnosis of early-stage faults. In view of this situation, a damage assessment method based on blind source separation and dynamic fuzzy neural network (DFNN) is presented to diagnose the early-stage machinery faults in this paper. In the processing of measurement signals, blind source separation is adopted to reduce noise. Then sensitive features of these faults are obtained by extracting low dimensional manifold characteristics from the signals. The model for fault diagnosis is established based on DFNN. Furthermore, on-line computation is accelerated by means of compressed sensing. Numerical vibration signals of ball screw fault modes are processed on the model for mechanical fault diagnosis and the results are in good agreement with the actual condition even at the early stage of fault development. This detection method is very useful in practice and feasible for early-stage fault diagnosis. (paper)
Fate of pharmaceuticals in full-scale source separated sanitation system.
Butkovskyi, A; Hernandez Leal, L; Rijnaarts, H H M; Zeeman, G
2015-11-15
Removal of 14 pharmaceuticals and 3 of their transformation products was studied in a full-scale source separated sanitation system with separate collection and treatment of black water and grey water. Black water is treated in an up-flow anaerobic sludge blanket (UASB) reactor followed by oxygen-limited autotrophic nitrification-denitrification in a rotating biological contactor and struvite precipitation. Grey water is treated in an aerobic activated sludge process. Concentrations of 10 pharmaceuticals and 2 transformation products in black water ranged from low μg/l to low mg/l. Additionally, 5 pharmaceuticals were also present in grey water in the low μg/l range. Pharmaceutical influent loads were distributed over the two streams, i.e. diclofenac was present for 70% in grey water, while the other compounds were predominantly associated with black water. Removal in the UASB reactor fed with black water exceeded 70% for 9 of the 12 pharmaceuticals detected, with only two pharmaceuticals removed by sorption to sludge. Ibuprofen and the transformation product of naproxen, desmethylnaproxen, were removed in the rotating biological contactor. In contrast, only paracetamol removal exceeded 90% in the grey water treatment system, while removal of the other 7 pharmaceuticals was below 40% or even negative. The efficiency of pharmaceutical removal in the source separated sanitation system was compared with removal in conventional sewage treatment plants. Furthermore, effluent concentrations of the black water and grey water treatment systems were compared with predicted no-effect concentrations to assess toxicity of the effluent. Concentrations of diclofenac, ibuprofen and oxazepam in both effluents were higher than predicted no-effect concentrations, indicating the necessity of post-treatment. Ciprofloxacin, metoprolol and propranolol were found in UASB sludge in the μg/g range, while pharmaceutical concentrations in struvite did not exceed the detection limits.
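The removal figures quoted above follow the usual influent/effluent convention; a tiny helper makes that convention, including negative removals, explicit. The concentrations below are invented for illustration, not the study's data.

```python
def removal_pct(c_in, c_out):
    """Removal efficiency in percent; negative when the effluent
    concentration exceeds the influent, as reported here for some
    grey-water compounds."""
    return 100.0 * (c_in - c_out) / c_in

print(removal_pct(10.0, 1.0))   # 90.0  (e.g. paracetamol-like removal)
print(removal_pct(2.0, 3.0))    # -50.0 (effluent above influent)
```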
White, David J.; Congedo, Marco; Ciorciari, Joseph
2014-01-01
A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback.
Surface-ionization ion source designed for in-beam operation with the BEMS-2 isotope separator
International Nuclear Information System (INIS)
Bogdanov, D.D.; Voboril, J.; Demyanov, A.V.; Karnaukhov, V.A.; Petrov, L.A.
1976-01-01
A surface-ionization ion source designed to operate in combination with the BEMS-2 isotope separator in a heavy ion beam is described. The ion source is adjusted for the separation of rare-earth elements. The separation efficiency for ¹⁵⁰Dy is determined to be about 20% at an ionizer temperature of 2600 K. The hold-up times for praseodymium, promethium and dysprosium in the ion source range from 5 to 10 s at ionizer temperatures of 2500–2700 K.
Effects of Sleep on Word Pair Memory in Children – Separating Item and Source Memory Aspects
Directory of Open Access Journals (Sweden)
Jing-Yi Wang
2017-09-01
Word paired-associate learning is a well-established task to demonstrate sleep-dependent memory consolidation in adults as well as children. Sleep has also been proposed to benefit episodic features of memory, i.e., memory for an event (item) bound into the spatiotemporal context it has been experienced in (source). We aimed to explore whether sleep enhances word pair memory in children by strengthening the episodic features of the memory in particular. Sixty-one children (8–12 years) studied two lists of word pairs with 1 h in between. Retrieval testing comprised cued recall of the target word of each word pair (item memory) and recalling which list the word pair had appeared in (source memory). Retrieval was tested either after 1 h (short retention interval) or after 11 h, with the long retention interval covering either nocturnal sleep or daytime wakefulness. Compared with the wake interval, sleep enhanced separate recall of both the word pairs and the lists per se, while recall of the combination of the word pair and the list it had appeared in remained unaffected by sleep. An additional comparison with adult controls (n = 37) suggested that item-source bound memory (combined recall of word pair and list) is generally diminished in children. Our results argue against the view that the sleep-induced enhancement in paired-associate learning in children is a consequence of sleep specifically enhancing the episodic features of the memory representation. On the contrary, sleep in children might strengthen item and source representations in isolation, while leaving the episodic memory representations (item-source binding) unaffected.
Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.
Wang, Avery Li-Chun
This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters
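The "average winding rate" estimator of instantaneous frequency described above has a very compact form: the mean phase increment between consecutive samples of a complex-valued signal. A minimal sketch, with an assumed sample rate and a clean complex sinusoid rather than the thesis's tracked partials:

```python
import numpy as np

fs = 8000.0                                 # assumed sample rate (Hz)
n = np.arange(2048)
f0 = 440.0                                  # true frequency (Hz)
x = np.exp(1j * 2 * np.pi * f0 * n / fs)    # complex-valued sinusoid

# Average winding rate: mean phase increment between consecutive samples.
dphi = np.angle(x[1:] * np.conj(x[:-1]))    # per-sample winding (radians)
f_est = fs * dphi.mean() / (2 * np.pi)      # estimated instantaneous frequency
```

Because each increment is measured modulo 2π, the estimator is unambiguous for frequencies below fs/2, and averaging the increments gives the robustness to noise that the thesis's theorems formalize.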
Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen
2017-07-01
Circular microphone arrays (CMAs) are sufficient in many immersive audio applications because azimuthal angles of sources are considered more important than elevation angles on those occasions. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for some applications which involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively. The direction of arrival (DOA) is estimated from the product of the two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from the two arrays are further processed by the normalized least-mean-square algorithm with internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA was constructed. An objective perceptual evaluation of speech quality test and a subjective listening test were also undertaken.
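The delay-and-sum DOA step can be sketched for a single linear sub-array in the narrowband far field; the full method combines circular and logarithmic-spacing linear arrays and multiplies the two beamformer outputs, whereas the geometry, spacing, and signal below are invented for illustration.

```python
import numpy as np

c, fs, f0 = 343.0, 16000.0, 1000.0
mics = np.arange(8) * 0.04                     # 8 mics, 4 cm spacing (m)
true_doa = np.deg2rad(30.0)

t = np.arange(1024) / fs
delays = mics * np.sin(true_doa) / c           # far-field plane-wave delays
x = np.array([np.cos(2 * np.pi * f0 * (t - d)) for d in delays])

X = np.fft.rfft(x, axis=1)
k = int(round(f0 * x.shape[1] / fs))           # narrowband frequency bin

# Delay-and-sum via phase alignment: scan candidate angles, pick the peak.
angles = np.deg2rad(np.arange(-90.0, 90.5, 1.0))
power = [np.abs(np.exp(2j * np.pi * f0 * mics * np.sin(a) / c) @ X[:, k]) ** 2
         for a in angles]
doa_est = np.rad2deg(angles[int(np.argmax(power))])
```

The beam power peaks when the steering phases cancel the propagation delays, i.e. at the true 30° arrival; the paper applies the same principle to each sub-array and fuses the two scans.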
Energy Technology Data Exchange (ETDEWEB)
Oros-Peusquens, Ana-Maria; Silva, Nuno da [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Weiss, Carolin [Department of Neurosurgery, University Hospital Cologne, 50924 Cologne (Germany); Stoffels, Gabrielle; Herzog, Hans; Langen, Karl J [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Shah, N Jon [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Jülich-Aachen Research Alliance (JARA) - Section JARA-Brain RWTH Aachen University, 52074 Aachen (Germany)
2014-07-29
Denoising of dynamic PET data improves parameter imaging by PET and is gaining momentum. This contribution describes an analysis of dynamic PET data by blind source separation methods and comparison of the results with MR-based brain properties.
Sukholthaman, Pitchayanin; Sharp, Alice
2016-06-01
Municipal solid waste has been considered one of the most immediate and serious problems confronting urban governments in most developing and transitional economies. The performance of solid waste services depends highly on the effectiveness of the waste collection and transportation process. Generally, this process involves a large amount of expenditure and has very complex and dynamic operational problems. Source separation has a major impact on the effectiveness of a waste management system, as it causes significant changes in the quantity and quality of waste reaching final disposal. To evaluate the impact of effective source separation on waste collection and transportation, this study adopts a decision support tool to comprehend cause-and-effect interactions of different variables in the waste management system. A system dynamics model that envisages the relationships between source separation and the effectiveness of waste management in Bangkok, Thailand is presented. Influential factors that affect waste separation attitudes are addressed, and the result of a change in perception on waste separation is explained. The impacts of different separation rates on the effectiveness of the provided collection service are compared in six scenarios. 'Scenario 5' gives the most promising opportunities, as 40% of residents are willing to conduct organic and recyclable waste separation. The results show that better waste collection and transportation service, lower monthly expense, extended landfill life, and a satisfactory efficiency of the provided service at 60.48% will be achieved at the end of the simulation period. Implications for how to get the public involved in source separation are proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
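The stock-and-flow reasoning of such a system dynamics model can be sketched with a toy simulation of one of its outcomes, landfill life under different separation rates. All numbers here are hypothetical, not the Bangkok model's equations:

```python
def landfill_life(separation_rate, capacity=1_000_000.0, daily_waste=1000.0,
                  diversion_efficiency=0.9):
    """Days until landfill capacity is exhausted, assuming a fixed fraction
    of waste is source-separated and successfully diverted (toy
    stock-and-flow model with invented parameters)."""
    stock, days = 0.0, 0
    while stock < capacity:
        # flow into the landfill stock shrinks as separation diverts waste
        stock += daily_waste * (1.0 - separation_rate * diversion_efficiency)
        days += 1
    return days
```

Comparing `landfill_life(0.4)` with `landfill_life(0.0)` reproduces the qualitative finding that a 40% separation rate extends landfill life substantially.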
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method.
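The BSS/ICA-plus-SVM pipeline can be sketched end to end on synthetic data. Channel counts, source frequencies and the single low-frequency-power feature below are illustrative stand-ins, not the paper's setup (which uses AMUSE/JADE/Infomax/FastICA on real EEG):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

fs = 250.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)

def make_trial():
    # two oscillatory "neural" sources and one slow, large EOG-like drift
    s = np.vstack([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 6)),
                   np.sign(np.sin(2 * np.pi * 21 * t + rng.uniform(0, 6))),
                   5.0 * np.sin(2 * np.pi * 0.7 * t + rng.uniform(0, 6))])
    A = rng.standard_normal((4, 3))            # random 4-channel mixing
    return s, A @ s + 0.05 * rng.standard_normal((4, t.size))

def low_freq_ratio(c):
    # fraction of spectral power below 3 Hz: high for ocular artifacts
    spec = np.abs(np.fft.rfft(c)) ** 2
    freqs = np.fft.rfftfreq(c.size, 1 / fs)
    return spec[freqs < 3.0].sum() / spec.sum()

feats, labels = [], []
for _ in range(20):
    s, x = make_trial()
    comps = FastICA(n_components=3, random_state=0,
                    max_iter=1000).fit_transform(x.T).T
    for c in comps:
        r = [abs(np.corrcoef(c, si)[0, 1]) for si in s]
        feats.append([low_freq_ratio(c)])
        labels.append(int(np.argmax(r) == 2))  # component matching the artifact

feats, labels = np.array(feats), np.array(labels)
clf = SVC(kernel="linear").fit(feats[:30], labels[:30])
acc = clf.score(feats[30:], labels[30:])
```

The SVM learns to flag the slow high-amplitude components, which is the role the trained classifier plays in the online artifact filter.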
Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle
2018-05-01
Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
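A transmissibility function between two measured responses can be estimated from spectral densities; the sketch below uses the common cross-over-auto spectral density estimator, whereas the paper builds its own variant relating the PSD of mixed signals to the PSD of single-source signals:

```python
import numpy as np
from scipy.signal import csd, welch

def transmissibility(x_ref, x_out, fs, nperseg=1024):
    """Estimate T(f) between a reference and an output response as the
    ratio of cross- to auto-spectral density (one common estimator;
    illustrative, not the paper's PSD-based definition)."""
    f, s_rr = welch(x_ref, fs=fs, nperseg=nperseg)
    _, s_or = csd(x_out, x_ref, fs=fs, nperseg=nperseg)
    return f, s_or / s_rr

# a gain-2 path driven by a broadband source, plus small sensor noise
rng = np.random.default_rng(2)
src = rng.standard_normal(200_000)
out = 2.0 * src + 0.01 * rng.standard_normal(200_000)
f, T = transmissibility(src, out, fs=1000.0)
```

Because the additive noise is uncorrelated with the source, the cross-spectrum averages it away and |T(f)| recovers the flat gain of 2 across the band.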
Palmgren, M S; Lee, L S
1986-01-01
Two distinct reservoirs of mycotoxins exist in fungal-infected cereal grains--the fungal spores and the spore-free mycelium-substrate matrix. Many fungal spores are of respirable size and the mycelium-substrate matrix can be pulverized to form particles of respirable size during routine handling of grain. In order to determine the contribution of each source to the level of mycotoxin contamination of dust, we developed techniques to harvest and separate mycelium-substrate matrices from spores of fungi. Conventional quantitative chromatographic analyses of separated materials indicated that aflatoxin from Aspergillus parasiticus, norsolorinic acid from a mutant of A. parasiticus, and secalonic acid D from Penicillium oxalicum were concentrated in the mycelium-substrate matrices and not in the spores. In contrast, spores of Aspergillus niger and Aspergillus fumigatus contained significant concentrations of aurasperone C and fumigaclavine C, respectively; only negligible amounts of the toxins were detected in the mycelium-substrate matrices of these two fungi. PMID:3709472
Sadhu, A.; Narasimhan, S.; Antoni, J.
2017-09-01
Output-only modal identification has seen significant activity in recent years, especially in large-scale structures where controlled input force generation is often difficult to achieve. This has led to the development of new system identification methods which do not require controlled input. They often work satisfactorily if they satisfy some general assumptions - not overly restrictive - regarding the stochasticity of the input. Hundreds of papers covering a wide range of applications appear every year related to the extraction of modal properties from output measurement data in more than two dozen mechanical, aerospace and civil engineering journals. In little more than a decade, concepts of blind source separation (BSS) from the field of acoustic signal processing have been adopted by several researchers, who have shown that they can be attractive tools for output-only modal identification. Originally intended to separate distinct audio sources from a mixture of recordings, their mathematical equivalence to problems in linear structural dynamics has since been firmly established. This has enabled many of the developments in the field of BSS to be modified and applied to output-only modal identification problems. This paper reviews over a hundred articles related to the application of BSS and its variants to output-only modal identification. The main contribution of the paper is to present a literature review of the papers which have appeared on the subject. While a brief treatment of the basic ideas is presented where relevant, a comprehensive and critical explanation of their contents is not attempted. Specific issues related to output-only modal identification and the relative advantages and limitations of BSS methods, from both theoretical and application standpoints, are discussed. Gap areas requiring additional work are also summarized, and the paper concludes with possible future trends in this area.
Saline sewage treatment and source separation of urine for more sustainable urban water management.
Ekama, G A; Wilsenach, J A; Chen, G H
2011-01-01
While energy consumption and its associated carbon emission should be minimized in wastewater treatment, it has a much lower priority than human and environmental health, which are both closely related to efficient water quality management. So the conservation of surface water quality and quantity is more important for sustainable development than greenhouse gas (GHG) emissions per se. In this paper, two urban water management strategies to conserve fresh water quality and quantity are considered: (1) source separation of urine for improved water quality and (2) saline (e.g. sea) water toilet flushing for reduced fresh water consumption in coastal and mining cities. The former holds promise for simpler and shorter sludge age activated sludge wastewater treatment plants (no nitrification and denitrification), nutrient (Mg, K, P) recovery and improved effluent quality (reduced endocrine disruptor and environmental oestrogen concentrations), and the latter for significantly reduced fresh water consumption, sludge production and oxygen demand (through using anaerobic bioprocesses) and hence energy consumption. Combining source separation of urine and saline water toilet flushing can reduce sewer crown corrosion and effluent P concentrations. To realize the advantages of these two approaches will require significant urban water management changes in that both need dual (fresh and saline) water distribution and (yellow and grey/brown) wastewater collection systems. While considerable work is still required to evaluate these new approaches and quantify their advantages and disadvantages, it would appear that the investment for dual water distribution and wastewater collection systems may be worth making to unlock their benefits for more sustainable urban development.
Landry, Kelly A; Boyer, Treavor H
2016-11-15
Urine source separation has the potential to reduce pharmaceutical loading to the environment, while enhancing nutrient recovery. The focus of this life cycle assessment (LCA) was to evaluate the environmental impacts and economic costs to manage nonsteroidal anti-inflammatory drugs (NSAIDs) (i.e., diclofenac, ibuprofen, ketoprofen and naproxen) and nutrients in human urine. Urine source separation was compared with centralized wastewater treatment (WWT) (biological or upgraded with ozonation). The current treatment method (i.e., centralized biological WWT) was compared with hypothetical treatment scenarios (i.e., centralized biological WWT upgraded with ozonation, and urine source separation). Alternative urine source separation scenarios included varying collection and handling methods (i.e., collection by vacuum truck, vacuum sewer, or decentralized treatment), pharmaceuticals removal by ion-exchange, and struvite precipitation. Urine source separation scenarios had 90% lower environmental impact (based on the TRACI impact assessment method) compared with the centralized wastewater scenarios due to reduced potable water production for flush water, reduced electricity use at the wastewater treatment plant, and nutrient offsets from struvite precipitation. Despite the greatest reduction of pharmaceutical toxicity, centralized treatment upgraded with ozone had the greatest ecotoxicity impacts due to ozonation operation and infrastructure. Among urine source separation scenarios, decentralized treatment of urine and centralized treatment of urine collected by vacuum truck had negligible cost differences compared with centralized wastewater treatment. Centralized treatment of urine collected by vacuum sewer and centralized treatment with ozone cost 30% more compared with conventional wastewater treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
System identification through nonstationary data using Time-Frequency Blind Source Separation
Guo, Yanlin; Kareem, Ahsan
2016-06-01
Classical output-only system identification (SI) methods are based on the assumption of stationarity of the system response. However, the measured response of buildings and bridges is usually non-stationary due to strong winds (e.g., typhoons and thunderstorms), earthquakes and time-varying vehicle motions. Accordingly, the response data may have time-varying frequency contents and/or overlapping of modal frequencies due to non-stationary colored excitation. This renders traditional methods problematic for modal separation and identification. To address these challenges, a new SI technique based on Time-Frequency Blind Source Separation (TFBSS) is proposed. By selectively utilizing "effective" information in local regions of the time-frequency plane, where only one mode contributes to energy, the proposed technique can successfully identify mode shapes and recover modal responses from the non-stationary response where the traditional SI methods often encounter difficulties. This technique can also handle response with closely spaced modes, which is a well-known challenge for the identification of large-scale structures. Based on the separated modal responses, frequency and damping can be easily identified using SI methods based on a single degree of freedom (SDOF) system. In addition to the exclusive advantage of handling non-stationary data and closely spaced modes, the proposed technique also benefits from the absence of end effects and low sensitivity to noise in modal separation. The efficacy of the proposed technique is demonstrated using several simulation based studies, and compared to the popular Second-Order Blind Identification (SOBI) scheme. It is also noted that even some non-stationary response data can be analyzed by the stationary method SOBI. This paper also delineates non-stationary cases where SOBI and the proposed scheme perform comparably and highlights cases where the proposed approach is more advantageous. Finally, the performance of the
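The "effective time-frequency region" idea can be sketched on a toy 2-DOF response: keep only high-energy STFT bins (where a single mode dominates) and read mode shapes from the per-bin channel directions. The real TFBSS method is considerably more sophisticated; frequencies, damping and thresholds below are invented:

```python
import numpy as np
from scipy.signal import stft

fs = 100.0
t = np.arange(0, 20, 1 / fs)
phi1, phi2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # true mode shapes
q1 = np.exp(-0.05 * t) * np.sin(2 * np.pi * 3.0 * t)      # modal responses
q2 = np.exp(-0.08 * t) * np.sin(2 * np.pi * 8.0 * t)
x = np.outer(phi1, q1) + np.outer(phi2, q2)               # 2-channel record

_, _, Z1 = stft(x[0], fs=fs, nperseg=128)
_, _, Z2 = stft(x[1], fs=fs, nperseg=128)
V = np.vstack([Z1.ravel(), Z2.ravel()])                   # per-bin channel vectors
energy = np.abs(V).sum(axis=0)
keep = energy > 0.1 * energy.max()                        # "effective" bins only
U = V[:, keep] / np.linalg.norm(V[:, keep], axis=0)
ref = U[:, np.argmax(energy[keep])]                       # strongest bin's direction
align = np.abs(ref.conj() @ U)

def mean_shape(cols):
    cols = cols * np.exp(-1j * np.angle(cols[0]))         # remove per-bin phase
    m = cols.mean(axis=1)
    return np.real(m) / np.linalg.norm(m)

shape_a = mean_shape(U[:, align > 0.9])                   # bins of the dominant mode
shape_b = mean_shape(U[:, align < 0.5])                   # bins of the other mode
```

The two averaged directions recover the mode shapes [1, 1] and [1, -1] up to scale and sign, which is exactly the information a subsequent SDOF fit needs.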
Directory of Open Access Journals (Sweden)
Melesse Eshetu Moges
2018-04-01
Full Text Available Using a filter medium for organic matter removal and nutrient recovery from blackwater treatment is a novel concept and has not been investigated sufficiently to date. This paper demonstrates a combined blackwater treatment and nutrient-recovery strategy and establishes mechanisms for a more dependable source of plant nutrients aiming at a circular economy. Source-separated blackwater from a student dormitory was used as feedstock for a sludge blanket anaerobic-baffled reactor. The effluent from the reactor, with 710 mg L−1 NH4–N and 63 mg L−1 PO4–P, was treated in a sequence of upflow and downflow filtration columns using granular activated carbon, Cocos char and polonite as filter media at a flow rate of 600 L m−2 day−1 and organic loading rate of 430 g chemical oxygen demand (COD m−2 day−1. Filtration treatment of the anaerobic effluent with carbon adsorbents removed 80% of the residual organic matter, more than 90% of suspended solids, and turbidity while releasing more than 76% NH4–N and 85% of PO4–P in the liquid phase. The treatment train also removed total coliform bacteria and E. coli in the effluent, achieving concentrations below detection limit after the integration of ultraviolet (UV light. These integrated technological pathways ensure simultaneous nutrient recovery as a nutrient solution, pathogen inactivation, and reduction of active organic substances. The treated nutrient-rich water can be applied as a source of value creation for various end-use options.
Directory of Open Access Journals (Sweden)
Hongyun Han
2016-07-01
Full Text Available This paper examines how and to what degree government policies of garbage fees and voluntary source separation programs, with free indoor containers and garbage bags, can affect the effectiveness of municipal solid waste (MSW management, in the sense of achieving a desirable reduction of per capita MSW generation. Based on city-level panel data for years 1998–2012 in China, our empirical analysis indicates that per capita MSW generated is increasing with per capita disposable income, average household size, education levels of households, and the lagged per capita MSW. While both garbage fees and source separation programs have separately led to reductions in per capita waste generation, the interaction of the two policies has resulted in an increase in per capita waste generation due to the following crowding-out effects: Firstly, the positive effect of income dominates the negative effect of the garbage fee. Secondly, there are crowding-out effects of mandatory charging system and the subsidized voluntary source separation on per capita MSW generation. Thirdly, small subsidies and tax punishments have reduced the intrinsic motivation for voluntary source separation of MSW. Thus, compatible fee charging system, higher levels of subsidies, and well-designed public information and education campaigns are required to promote household waste source separation and reduction.
Lucci, Gina M; Nash, David; McDowell, Richard W; Condron, Leo M
2014-07-01
Many factors affect the magnitude of nutrient losses from dairy farm systems. Bayesian Networks (BNs) are an alternative to conventional modeling that can evaluate complex multifactor problems using forward and backward reasoning. A BN of annual total phosphorus (TP) exports was developed for a hypothetical dairy farm in the south Otago region of New Zealand and was used to investigate and integrate the effects of different management options under contrasting rainfall and drainage regimes. Published literature was consulted to quantify the relationships that underpin the BN, with preference given to data and relationships derived from the Otago region. In its default state, the BN estimated loads of 0.34 ± 0.42 kg TP ha−1 for overland flow and 0.30 ± 0.19 kg TP ha−1 for subsurface flow, which are in line with reported TP losses in overland flow (0-1.1 kg TP ha−1) and in drainage (0.15-2.2 kg TP ha−1). Site attributes that cannot be managed, like annual rainfall and the average slope of the farm, were found to affect the loads of TP lost from dairy farms. The greatest loads (13.4 kg TP ha−1) were predicted to occur with above-average annual rainfall (970 mm), where irrigation of farm dairy effluent was managed poorly, and where Olsen P concentrations were above pasture requirements (60 mg kg−1). Most of this loading was attributed to contributions from overland flow. This study demonstrates the value of using a BN to understand the complex interactions between site variables affecting P loss and their relative importance. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
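The forward and backward reasoning a BN supports can be shown on a toy two-parent network; all probabilities below are hypothetical, not the published values:

```python
# Toy discrete Bayesian network: Rain -> TP_loss <- OlsenP.
# Forward reasoning: P(TP_loss = high).
# Backward reasoning: P(Rain = high | TP_loss = high), by enumeration.
p_rain = {'high': 0.3, 'avg': 0.7}
p_olsen = {'excess': 0.4, 'ok': 0.6}
p_loss_high = {('high', 'excess'): 0.80, ('high', 'ok'): 0.40,
               ('avg', 'excess'): 0.35, ('avg', 'ok'): 0.10}

def joint(r, o, loss_is_high):
    p = p_rain[r] * p_olsen[o]
    return p * (p_loss_high[(r, o)] if loss_is_high else 1.0 - p_loss_high[(r, o)])

p_high = sum(joint(r, o, True) for r in p_rain for o in p_olsen)
post_rain_high = sum(joint('high', o, True) for o in p_olsen) / p_high
```

Observing a high TP loss raises the belief that rainfall was high from its 0.3 prior to roughly 0.55, which is the backward-inference step the abstract refers to.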
Developing methodologies for source attribution. Glass phase separation in Trinitite using NF{sub 3}
Energy Technology Data Exchange (ETDEWEB)
Koeman, Elizabeth C.; Simonetti, Antonio [Notre Dame Univ., IN (United States). Dept. of Civil and Environmental Engineering and Earth Sciences; McNamara, Bruce K.; Smith, Frances N. [Pacific Northwest National Laboratory, Richland, WA (United States). Nuclear Chemistry and Engineering; Burns, Peter C. [Notre Dame Univ., IN (United States). Dept. of Civil and Environmental Engineering and Earth Sciences; Notre Dame Univ., IN (United States). Dept. of Chemistry and Biochemistry
2017-08-01
separation of silica from minerals (i.e. naturally occurring crystalline materials) and glasses (i.e. amorphous materials), leaving behind non-volatile fluorinated species and refractory phases. The results from our investigation clearly indicate that the NF{sub 3} treatment of nuclear materials is a technique that provides effective separation of bomb components from complex matrices (e.g. post-detonation samples), which will aid with rapid and accurate source attribution.
Shah, Syed Awais Wahab; Abed-Meraim, Karim; Al-Naffouri, Tareq Y.
2017-11-24
This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate- or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the algorithms designed using Givens rotations give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference-and-noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
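The role of a Givens rotation after pre-whitening can be illustrated on a 2×2 real mixture: a single rotation angle undoes the mixing, and a multimodulus-style cost is minimal at that angle. The sketch scans the angle on a grid instead of using the paper's closed-form updates; signal sizes and the angle 0.4 rad are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], size=(2, 5000))      # real parts of two 4-QAM streams

def G(th):
    # Givens rotation matrix parameterized by a single angle
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

x = G(0.4) @ s                                   # whitened mixture, angle unknown

def mm_cost(y):
    # multimodulus-style cost: squared distance of outputs from unit modulus
    return np.mean((y ** 2 - 1.0) ** 2)

angles = np.linspace(0.0, np.pi / 2, 901)        # grid scan in place of updates
costs = [mm_cost(G(-th) @ x) for th in angles]
theta_hat = angles[int(np.argmin(costs))]
```

For ±1 sources the cost works out to sin²(2(0.4 − θ)), so the scan locates the true rotation angle up to the grid resolution and the usual permutation/sign ambiguity.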
Feature Selection and Blind Source Separation in an EEG-Based Brain-Computer Interface
Directory of Open Access Journals (Sweden)
Michael H. Thaut
2005-11-01
Full Text Available Most EEG-based BCI systems make use of well-studied patterns of brain activity. However, those systems involve tasks that indirectly map to simple binary commands such as "yes" or "no" or require many weeks of biofeedback training. We hypothesized that signal processing and machine learning methods can be used to discriminate EEG in a direct "yes"/"no" BCI from a single session. Blind source separation (BSS) and spectral transformations of the EEG produced a 180-dimensional feature space. We used a modified genetic algorithm (GA) wrapped around a support vector machine (SVM) classifier to search the space of feature subsets. The GA-based search found feature subsets that outperform full feature sets and random feature subsets. Also, BSS transformations of the EEG outperformed the original time series, particularly in conjunction with a subset search of both spaces. The results suggest that BSS and feature selection can be used to improve the performance of even a "direct," single-session BCI.
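A GA wrapped around an SVM can be sketched in a few lines: individuals are feature-subset bitmasks and fitness is cross-validated accuracy. The data, population size and operators below are illustrative, not the paper's 180-dimensional EEG setup; with `shuffle=False` only the first 3 of 20 synthetic features are informative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((16, 20)) < 0.5                   # random initial subsets
for _ in range(12):                                # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:8]]    # truncation selection
    children = []
    for _ in range(8):
        a, b = parents[rng.integers(8)], parents[rng.integers(8)]
        cut = rng.integers(1, 20)
        child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
        child ^= rng.random(20) < 0.05             # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[int(np.argmax([fitness(m) for m in pop]))]
best_acc = fitness(best)
```

The search reliably retains informative features while pruning noise dimensions, mirroring the paper's finding that selected subsets outperform the full feature set.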
Directory of Open Access Journals (Sweden)
Qiang Guo
2018-01-01
Full Text Available In modern electronic warfare, multiple-input multiple-output (MIMO) radar has become an important tool for electronic reconnaissance and intelligence transmission because of its anti-stealth, high-resolution, low-intercept and anti-destruction characteristics. As a common MIMO radar signal, the discrete frequency coding waveform (DFCW) has a serious overlap in both time and frequency, so it cannot be directly used in current radar signal separation problems. The existing fuzzy clustering algorithms have problems with initial value selection, low convergence rates and local extreme values, which lead to low accuracy in the mixing matrix estimation. Consequently, a novel mixing matrix estimation algorithm based on the data field and improved fuzzy C-means (FCM) clustering is proposed. First of all, the sparsity and linear clustering characteristics of the time–frequency domain MIMO radar signals are enhanced by using the single-source principal value of complex angular detection. Secondly, the data field uses the potential energy information to analyze the particle distribution and thus design a new scheme for selecting the number of clusters. Then the particle swarm optimization algorithm is introduced to improve the iterative clustering process of FCM, finally yielding the estimate of the mixing matrix. The simulation results show that the proposed algorithm improves both the estimation accuracy and the robustness of the mixing matrix estimation.
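The core FCM iteration that the paper improves (with data-field cluster-number selection and PSO-assisted updates) alternates between membership and center updates; here it is shown in plain form on toy 2-D point clusters standing in for time-frequency samples:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy C-means: alternate fuzzy-membership and
    weighted-center updates (plain sketch; the paper's algorithm adds
    data-field initialization and PSO to this loop)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships: columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))         # closer points get higher membership
        U /= U.sum(axis=0)
    return centers, U

rng = np.random.default_rng(5)
true_centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([tc + 0.3 * rng.standard_normal((100, 2)) for tc in true_centers])
centers, U = fuzzy_cmeans(X, 3)
```

On well-separated clusters the fixed-point iteration converges to the cluster centroids, which in the paper's setting correspond to the columns of the estimated mixing matrix.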
Fate of personal care and household products in source separated sanitation.
Butkovskyi, A; Rijnaarts, H H M; Zeeman, G; Hernandez Leal, L
2016-12-15
Removal of twelve micropollutants, namely biocides, fragrances, ultraviolet (UV)-filters and preservatives in source separated grey and black water treatment systems was studied. All compounds were present in influent grey water in μg/l range. Seven compounds were found in influent black water. Their removal in an aerobic activated sludge system treating grey water ranged from 59% for avobenzone to >99% for hexylcinnamaldehyde. High concentrations of hydrophobic micropollutants in sludge of aerobic activated sludge system indicated the importance of sorption for their removal. Six micropollutants were found in sludge of an Up-flow anaerobic sludge blanket (UASB) reactor treating black water, with four of them being present at significantly higher concentrations after addition of grey water sludge to the reactor. Hence, addition of grey water sludge to the UASB reactor is likely to increase micropollutant content in UASB sludge. This approach should not be followed when excess UASB sludge is designed to be reused as soil amendment. Copyright © 2016 Elsevier B.V. All rights reserved.
Insects associated with the composting process of solid urban waste separated at the source
Directory of Open Access Journals (Sweden)
Gladis Estela Morales
2010-01-01
Full Text Available Sarcosaprophagous macroinvertebrates (earthworms, termites and a number of Diptera larvae) enhance changes in the physical and chemical properties of organic matter during degradation and stabilization processes in composting, causing a decrease in the molecular weights of compounds. This activity makes these organisms excellent recyclers of organic matter. This article evaluates the succession of insects associated with the decomposition of solid urban waste separated at the source. The study was carried out in the city of Medellin, Colombia. A total of 11,732 individuals were determined, belonging to the classes Insecta and Arachnida. Species of three orders of Insecta were identified: Diptera, Coleoptera and Hymenoptera. Diptera, corresponding to 98.5% of the total, was the most abundant and diverse group, with 16 families (Calliphoridae, Drosophilidae, Psychodidae, Fanniidae, Muscidae, Milichiidae, Ulidiidae, Scatopsidae, Sepsidae, Sphaeroceridae, Heleomyzidae, Stratiomyidae, Syrphidae, Phoridae, Tephritidae and Curtonotidae), followed by Coleoptera with five families (Carabidae, Staphylinidae, Ptiliidae, Hydrophilidae and Phalacridae). Three stages were observed during the composting process, allowing species associated with each stage to be identified. Other species were also present throughout the whole process. In terms of number of species, Diptera was the most important group observed, particularly Ornidia obesa, considered a highly invasive species, and Hermetia illucens, both reported as beneficial for the decomposition of organic matter.
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
Wang, Rui; Jin, Xin; Wang, Ziyuan; Gu, Wantao; Wei, Zhechao; Huang, Yuanjie; Qiu, Zhuang; Jin, Pengkang
2018-01-01
This paper proposes a new system of multilevel reuse with source separation in printing and dyeing wastewater (PDWW) treatment in order to dramatically improve the water reuse rate to 35%. By analysing the characteristics of the sources and concentrations of pollutants produced in different printing and dyeing processes, special, highly, and less contaminated wastewaters (SCW, HCW, and LCW, respectively) were collected and treated separately. Specifically, a large quantity of LCW was sequentially reused at multiple levels to meet the water quality requirements of different production processes. Based on this concept, a multilevel reuse system with a source separation process was established in a typical printing and dyeing enterprise. The water reuse rate increased dramatically to 62%, and the reclaimed water was reused in different printing and dyeing processes based on the water quality. This study provides promising leads in water management for wastewater reclamation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ishii, Stephanie K L; Boyer, Treavor H
2015-08-01
Alternative approaches to wastewater management including urine source separation have the potential to simultaneously improve multiple aspects of wastewater treatment, including reduced use of potable water for waste conveyance and improved contaminant removal, especially nutrients. In order to pursue such radical changes, system-level evaluations of urine source separation in community contexts are required. The focus of this life cycle assessment (LCA) is managing nutrients from urine produced in a residential setting with urine source separation and struvite precipitation, as compared with a centralized wastewater treatment approach. The life cycle impacts evaluated in this study pertain to construction of the urine source separation system and operation of drinking water treatment, decentralized urine treatment, and centralized wastewater treatment. System boundaries include fertilizer offsets resulting from the production of urine based struvite fertilizer. As calculated by the Tool for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI), urine source separation with MgO addition for subsequent struvite precipitation with high P recovery (Scenario B) has the smallest environmental cost relative to existing centralized wastewater treatment (Scenario A) and urine source separation with MgO and Na3PO4 addition for subsequent struvite precipitation with concurrent high P and N recovery (Scenario C). Preliminary economic evaluations show that the three urine management scenarios are relatively equal on a monetary basis (<13% difference). The impacts of each urine management scenario are most sensitive to the assumed urine composition, the selected urine storage time, and the assumed electricity required to treat influent urine and toilet water used to convey urine at the centralized wastewater treatment plant. The importance of full nutrient recovery from urine in combination with the substantial chemical inputs required for N recovery
Fehr, M
2014-09-01
Business opportunities in the household waste sector in emerging economies still revolve around the activities of bulk collection and tipping with an open material balance. This research, conducted in Brazil, pursued the objective of shifting opportunities from tipping to reverse logistics in order to close the balance. To do this, it illustrated how specific knowledge of sorted waste composition and reverse logistics operations can be used to determine realistic temporal and quantitative landfill diversion targets in an emerging economy context. Experimentation constructed and confirmed the recycling trilogy that consists of source separation, collection infrastructure and reverse logistics. The study on source separation demonstrated the vital difference between raw and sorted waste compositions. Raw waste contained 70% biodegradable and 30% inert matter. Source separation produced 47% biodegradable, 20% inert and 33% mixed material. The study on collection infrastructure developed the necessary receiving facilities. The study on reverse logistics identified private operators capable of collecting and processing all separated inert items. Recycling activities for biodegradable material were scarce and erratic. Only farmers would take the material as animal feed. No composting initiatives existed. The management challenge was identified as stimulating these activities in order to complete the trilogy and divert the 47% source-separated biodegradable discards from the landfills. © The Author(s) 2014.
Estimation of nitrite in source-separated nitrified urine with UV spectrophotometry.
Mašić, Alma; Santos, Ana T L; Etter, Bastian; Udert, Kai M; Villez, Kris
2015-11-15
Monitoring of nitrite is essential for an immediate response and prevention of irreversible failure of decentralized biological urine nitrification reactors. Although a few sensors are available for nitrite measurement, none of them are suitable for applications in which both nitrite and nitrate are present in very high concentrations. Such is the case in collected source-separated urine, stabilized by nitrification for long-term storage. Ultraviolet (UV) spectrophotometry in combination with chemometrics is a promising option for monitoring of nitrite. In this study, an immersible in situ UV sensor is investigated for the first time, in order to establish a relationship between UV absorbance spectra and nitrite concentrations in nitrified urine. The study focuses on the effects of suspended particles and saturation on the absorbance spectra and the chemometric model performance. Detailed analysis indicates that suspended particles in nitrified urine have a negligible effect on nitrite estimation, showing that sample filtration is not necessary as a pretreatment step. In contrast, saturation due to very high concentrations affects the model performance severely, suggesting dilution as an essential sample preparation step. However, this can also be mitigated by simple removal of the saturated, lower end of the UV absorbance spectra, and extraction of information from the secondary, weaker nitrite absorbance peak. This approach allows for estimation of nitrite with a simple chemometric model and without sample dilution. These results are promising for a practical application of the UV sensor for in situ nitrite measurement in a urine nitrification reactor, given the exceptional quality of the nitrite estimates in comparison to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
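The calibration idea behind the chemometric model can be sketched as an ordinary least-squares fit of absorbance against concentration at a single (non-saturated) wavelength; the absorbance and concentration values below are illustrative, not from the study:

```python
# Ordinary least-squares calibration: absorbance (a.u.) -> nitrite (mg-N/L).
# All numbers are hypothetical; the study uses a multivariate chemometric model.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Calibration standards: absorbance at the secondary, weaker nitrite peak.
absorbance = [0.05, 0.10, 0.20, 0.40]
nitrite    = [5.0, 10.0, 20.0, 40.0]   # mg-N/L

slope, intercept = fit_line(absorbance, nitrite)
estimate = slope * 0.25 + intercept     # unknown sample at A = 0.25
```

In practice the model would be fit over many wavelengths (e.g. by partial least squares), but the single-wavelength version shows the calibrate-then-predict structure.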
Complete nutrient recovery from source-separated urine by nitrification and distillation.
Udert, K M; Wächter, M
2012-02-01
In this study we present a method to recover all nutrients from source-separated urine in a dry solid by combining biological nitrification with distillation. In a first process step, a membrane-aerated biofilm reactor was operated stably for more than 12 months, producing a nutrient solution with a pH between 6.2 and 7.0 (depending on the pH set-point), and an ammonium to nitrate ratio between 0.87 and 1.15 g N g N⁻¹. The maximum nitrification rate was 1.8 ± 0.3 g N m⁻² d⁻¹. Process stability was achieved by controlling the pH via the influent. In the second process step, real nitrified urine and synthetic solutions were concentrated in lab-scale distillation reactors. All nutrients were recovered in a dry powder except for some ammonia (less than 3% of total nitrogen). We estimate that the primary energy demand for a simple nitrification/distillation process is four to five times higher than removing nitrogen and phosphorus in a conventional wastewater treatment plant and producing the equivalent amount of phosphorus and nitrogen fertilizers. However, the primary energy demand can be reduced to values very close to conventional treatment, if 80% of the water is removed with reverse osmosis and distillation is operated with vapor compression. The ammonium nitrate content of the solid residue is below the limit at which stringent EU safety regulations for fertilizers come into effect; nevertheless, we propose some additional process steps that will increase the thermal stability of the solid product. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean
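The email-filtering example mentioned above is a classic application of Bayes' rule to uncertain data. A minimal sketch, with made-up probabilities (prior spam rate and word likelihoods are purely illustrative):

```python
# Bayes' rule with uncertain evidence: how likely is a message to be spam
# given that it contains the word "free"? All probabilities are invented.
p_spam = 0.4                      # prior probability a message is spam
p_word_given_spam = 0.6           # P("free" appears | spam)
p_word_given_ham = 0.05           # P("free" appears | not spam)

# Total probability of seeing the word, then Bayes' rule.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
```

Seeing the word raises the spam probability from 0.4 to about 0.89; no Boolean rule ("contains 'free' => spam") can express this graded update.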
Olivares, Ela I; Lage-Castellanos, Agustín; Bobes, María A; Iglesias, Jaime
2018-01-01
We investigated the neural correlates of the access to and retrieval of face structure information, in contrast to those concerning the access to and retrieval of person-related verbal information triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origins. Thus, whereas the N170 of faces was associated predominantly with right fusiform and occipital regions (the so-called "Fusiform Face Area", "FFA", and "Occipital Face Area", "OFA", respectively), the N170 of occupations was associated with very posterior bilateral activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in the OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly the OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.
DEFF Research Database (Denmark)
Oh, Geok Lian; Brunskog, Jonas
2014-01-01
Techniques have been studied for the localization of an underground source with seismic interrogation signals. Much of the work has involved fitting either a P-wave acoustic model or a dispersive surface wave model to the received signal and applying the time-delay processing technique and frequ...... that for field data, inversion for localization is most advantageous when the forward model completely describes all the elastic wave components, as is the case for the FDTD 3D elastic model....
International Nuclear Information System (INIS)
Bernstad, Anna; Cour Jansen, Jes la; Aspegren, Henrik
2011-01-01
Through an agreement with EEE producers, Swedish municipalities are responsible for the collection of hazardous waste and waste electrical and electronic equipment (WEEE). In most Swedish municipalities, collection of these waste fractions is concentrated at waste recycling centres where households can source-separate and deposit hazardous waste and WEEE free of charge. However, the centres are often located on the outskirts of city centres, and in most cases a car is needed to use the facilities. A full-scale experiment was performed in a residential area in southern Sweden to evaluate the effects of a system for property-close source separation of hazardous waste and WEEE. After the system was introduced, results show a clear reduction in the amount of hazardous waste and WEEE disposed of incorrectly amongst residual waste or dry recyclables. The system resulted in a source separation ratio of 70 wt% for hazardous waste and 76 wt% in the case of WEEE. Results show that households in the study area were willing to increase source separation of hazardous waste and WEEE when accessibility was improved, and that this and similar collection systems can play an important role in building up increasingly sustainable solid waste management systems.
DEFF Research Database (Denmark)
Hansen, Trine Lund; Svärd, Å; Angelidaki, Irini
2003-01-01
A research project has investigated the biogas potential of pre-screened source-separated organic waste. Wastes from five Danish cities have been pre-treated by three methods: screw press; disc screen; and shredder and magnet. This paper outlines the sampling procedure used, the chemical...... composition of the wastes and the estimated methane potentials....
Zeeman, G.; Kujawa, K.; Mes, de T.Z.D.; Graaff, de M.S.; Abu-Ghunmi, L.N.A.H.; Mels, A.R.; Meulman, B.; Temmink, B.G.; Buisman, C.J.N.; Lier, van J.B.; Lettinga, G.
2008-01-01
Based on results of pilot scale research with source-separated black water (BW) and grey water (GW), a new sanitation concept is proposed. BW and GW are both treated in a UASB (-septic tank) for recovery of CH4 gas. Kitchen waste is added to the anaerobic BW treatment for doubling the biogas
Pathogens and pharmaceuticals in source-separated urine in eThekwini, South Africa.
Bischel, Heather N; Özel Duygan, Birge D; Strande, Linda; McArdell, Christa S; Udert, Kai M; Kohn, Tamar
2015-11-15
In eThekwini, South Africa, the production of agricultural fertilizers from human urine collected from urine-diverting dry toilets is being evaluated at a municipality scale as a way to help finance a decentralized, dry sanitation system. The present study aimed to assess a range of human and environmental health hazards in source-separated urine, which was presumed to be contaminated with feces, by evaluating the presence of human pathogens, pharmaceuticals, and an antibiotic resistance gene. Composite urine samples from households enrolled in a urine collection trial were obtained from urine storage tanks installed in three regions of eThekwini. Polymerase chain reaction (PCR) assays targeted 9 viral and 10 bacterial human pathogens transmitted by the fecal-oral route. The most frequently detected viral pathogens were JC polyomavirus, rotavirus, and human adenovirus, in 100%, 34% and 31% of samples, respectively. Aeromonas spp. and Shigella spp. were the most frequently detected gram-negative bacteria, found in 94% and 61% of samples, respectively. The gram-positive bacterium Clostridium perfringens, which is known to survive for extended times in urine, was found in 72% of samples. A screening of 41 trace organic compounds in the urine facilitated selection of 12 priority pharmaceuticals for further evaluation. The antibiotics sulfamethoxazole and trimethoprim, which are frequently prescribed as prophylaxis for HIV-positive patients, were detected in 95% and 85% of samples, reaching maximum concentrations of 6800 μg/L and 1280 μg/L, respectively. The antiretroviral drug emtricitabine was also detected in 40% of urine samples. A sulfonamide antibiotic resistance gene (sul1) was detected in 100% of urine samples. By coupling analysis of pathogens and pharmaceuticals in geographically dispersed samples in eThekwini, this study reveals a range of human and environmental health hazards in urine intended for fertilizer production. Collection of urine offers the benefit of
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects in the time-frequency domain is derived. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for auto terms and cross terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate them when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used in separating the rigid scattering structure of underwater objects.
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical model is the Bayesian network. The structural part of a Bayesian graphical model is a graph consisting of nodes...
J Padilla, Alcides; Trujillo, Juan C
2018-04-01
Solid waste management in many cities of developing countries is not environmentally sustainable. People traditionally dispose of their solid waste in unsuitable urban areas like sidewalks and satellite dumpsites. This situation has become a serious public health problem in big Latin American conurbations. Among these densely populated urban spaces, Colombia's capital and main city stands out as a special case. In this study, we aim to identify the factors that shape attitudes towards source-separated recycling among households in Bogotá. Using data from the Colombian Department of Statistics and Bogotá's multi-purpose survey, we estimated a multivariate Probit model. In general, our results show that the higher the household's socioeconomic class, the greater its effort in separating solid wastes. Likewise, our findings also allowed us to characterize household profiles regarding solid waste separation for each socioeconomic class. Among these profiles, we found that in the lower socioeconomic classes, attitudes towards solid waste separation are influenced by the use of the Internet, membership in an environmentalist organization, the level of education of the head of household and homeownership. Hence, increasing education levels within the poorest segment of the population, promoting affordable housing policies and facilitating Internet access for the vulnerable population could reinforce households' attitudes towards a greater source-separated recycling effort. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bayesian networks improve causal environmental ...
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, conventional weight of evidence approaches ignore sources of uncertainty that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improves the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables given the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches to weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
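How a Bayesian network combines lines of evidence can be illustrated with inference by enumeration over a hypothetical three-node chain (Stressor → Exposure → Effect); all conditional probabilities below are invented for illustration:

```python
# Inference by enumeration in a tiny chain-structured Bayesian network.
# Joint factorizes as P(S) * P(E|S) * P(F|E). CPT values are illustrative.
p_s = {True: 0.3, False: 0.7}             # P(Stressor present)
p_e_given_s = {True: 0.8, False: 0.1}     # P(Exposure = T | Stressor)
p_f_given_e = {True: 0.9, False: 0.2}     # P(Effect = T | Exposure)

def p_effect():
    """Marginal P(Effect = T), summing the joint over hidden variables."""
    total = 0.0
    for s in (True, False):
        for e in (True, False):
            pe = p_e_given_s[s] if e else 1.0 - p_e_given_s[s]
            total += p_s[s] * pe * p_f_given_e[e]
    return total

marginal = p_effect()
```

The same enumeration, restricted to consistent assignments and renormalized, yields diagnostic queries such as P(Stressor | Effect), which is the "diagnosing causal relationships" direction mentioned above.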
Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation
DEFF Research Database (Denmark)
Schmidt, Mikkel N.; Mørup, Morten
2006-01-01
We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...
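The paper's model extends plain non-negative matrix factorization (NMF) with 2-D convolution in time and frequency; the standard multiplicative-update NMF it builds on can be sketched as follows (tiny illustrative matrix, not real spectrogram data):

```python
import random

# Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H with all
# entries non-negative. This is the base algorithm, not the 2-D
# deconvolution variant described in the abstract.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=200, seed=0):
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    eps = 1e-9
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        # W <- W * (V H^T) / (W H H^T)
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

V = [[1.0, 2.0, 0.0],
     [2.0, 4.0, 0.0]]          # rank-1 non-negative matrix
W, H = nmf(V, rank=1)
approx = matmul(W, H)
```

In the audio setting the columns of V would be magnitude-spectrogram frames; the 2-D deconvolutive extension additionally shifts components in time and frequency.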
Prospects of Source-Separation-Based Sanitation Concepts: A Model-Based Study
Tervahauta, T.H.; Trang Hoang,; Hernández, L.; Zeeman, G.; Buisman, C.J.N.
2013-01-01
Separation of different domestic wastewater streams and targeted on-site treatment for resource recovery has been recognized as one of the most promising sanitation concepts to re-establish the balance in carbon, nutrient and water cycles. In this study a model was developed based on literature data
DEFF Research Database (Denmark)
Tjernsbekk, M. T.; Tauson, A. H.; Kraugerud, O. F.
2017-01-01
Protein quality was evaluated for mechanically separated chicken meat (MSC) and salmon protein hydrolysate (SPH), and for extruded dog foods where MSC or SPH partially replaced poultry meal (PM). Apparent total tract digestibility (ATTD) of crude protein (CP) and amino acids (AA) in the protein...
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Koldovský, Zbyněk; Yeredor, A.; Gómez-Herrero, G.; Doron, E.
2008-01-01
Vol. 19, No. 3 (2008), pp. 421-430. ISSN 1045-9227. R&D Projects: GA MŠk 1M0572. Grant - others: GA ČR(CZ) GP102/07/P384. Program: GP. Institutional research plan: CEZ:AV0Z10750506. Keywords: blind source separation * independent component analysis. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 3.726, year: 2008
International Nuclear Information System (INIS)
Whipple, R.E.; Grant, P.M.; Daniels, R.J.; Daniels, W.R.; O'Brien, H.A.Jr.
1976-01-01
As the precursor of its ⁸⁸Y daughter, ⁸⁸Zr could be advantageously included in the active component of the ⁸⁸Y-Be photoneutron source for several reasons. A procedure has been developed to separate radiozirconium, produced by the spallation of Mo targets with medium-energy protons at LAMPF, from the target material and various spallogenic impurities. ⁸⁸Zr can consequently be obtained carrier-free and in quantitative yield. (author)
Bayesian flood forecasting methods: A review
Han, Shasha; Coulibaly, Paulin
2017-08-01
Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of uncertainty associated with the hydrologic forecast is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretic framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been
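The core Bayesian step, combining a prior with an uncertain model forecast into a predictive distribution, reduces in the conjugate normal-normal case to a precision-weighted average. A sketch with illustrative numbers for a river-stage forecast (the values and variances are invented, not from any study reviewed above):

```python
# Conjugate normal-normal update: prior belief about river stage (m)
# combined with a model forecast of known error variance.
def posterior(prior_mean, prior_var, obs, obs_var):
    """Return mean and variance of the posterior (predictive) normal."""
    w = prior_var / (prior_var + obs_var)      # weight on the new forecast
    mean = prior_mean + w * (obs - prior_mean)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# Prior: stage ~ N(3.0 m, 0.5^2); model forecast: 3.6 m with sd 0.3 m.
m, v = posterior(prior_mean=3.0, prior_var=0.5 ** 2,
                 obs=3.6, obs_var=0.3 ** 2)
```

The posterior variance is always smaller than either input variance, which is the sense in which the Bayesian forecast "reduces predictive uncertainty" relative to the raw deterministic forecast.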
Method and apparatus for suppressing electron generation in a vapor source for isotope separation
International Nuclear Information System (INIS)
Janes, G.S.
1979-01-01
A system for applying accelerating forces to ionized particles of a vapor in a manner that suppresses the flow of electron current from the vapor source. The accelerating forces are applied as an electric field in a configuration orthogonal to a magnetic field. The electric field is applied between one or more anodes in the plasma and one or more cathodes operated as electron emitting surfaces. The circuit for applying the electric field floats the cathodes with respect to the vapor source, thereby removing the vapor source from the circuit of electron flow through the plasma and suppressing the flow of electrons from the vapor source. The potential of other conducting structures contacting the plasma is controlled at, or permitted to seek, a level which further suppresses the flow of electron currents from the vapor source. Reducing the flow of electrons from the vapor source is particularly useful where the vapor is ionized with isotopic selectivity, because it avoids superenergization of the vapor by the electron current.
Quednau, Philipp; Trommer, Ralph; Schmidt, Lorenz-Peter
2016-03-01
Wireless transmission systems in smart metering networks share the advantage of lower installation costs, since separate wired infrastructure can be dispensed with, but suffer from transmission problems. In this paper the issue of interference of wirelessly transmitted smart meter data with third-party systems and data from other meters is investigated, and an approach for solving the problem is presented. A multi-channel wireless M-Bus receiver was developed to separate the desired data from unwanted interferers by spatial filtering. The corresponding algorithms are presented and the influence of different antenna types on the spatial filtering is investigated. The performance of the spatial filtering is evaluated by extensive measurements in a realistic environment with several hundred active wireless M-Bus transponders. These measurements, carried out in rural and urban areas with smart gas meters equipped with wireless M-Bus transponders installed in almost all surrounding buildings, correspond to the future operating environment for data collectors.
Lavagnolo, Maria Cristina; Malagoli, Mario; Alibardi, Luca; Garbo, Francesco; Pivato, Alberto; Cossu, Raffaello
2017-05-01
Efficient and economic reuse of waste is one of the pillars of modern environmental engineering. In the field of domestic sewage management, source separation of yellow (urine), brown (faecal matter) and grey waters aims to recover the organic substances concentrated in brown water and the nutrients (nitrogen and phosphorus) in the urine, and to ensure an easier treatment and recycling of grey waters. With the objective of emphasizing the potential for recovery of resources from sewage management, a lab-scale research study was carried out at the University of Padova in order to evaluate the performance of oleaginous plants (suitable for biodiesel production) in the phytotreatment of source-separated yellow and grey waters. The plant species used were Brassica napus (rapeseed), Glycine max (soybean) and Helianthus annuus (sunflower). Phytotreatment tests were carried out using 20 L pots. Different testing runs were performed at increasing nitrogen concentrations in the feedstock. The results proved that oleaginous species can conveniently be used for the phytotreatment of grey and yellow waters from source separation of domestic sewage, displaying high removal efficiencies for nutrients and organic substances (nitrogen >80%; phosphorus >90%; COD nearly 90%). No inhibition was registered in the growth of plants irrigated with different mixtures of yellow and grey waters, where the characteristics of the two streams were reciprocally and beneficially integrated. Copyright © 2016. Published by Elsevier B.V.
Introduction to Bayesian statistics
Bolstad, William M
2017-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, the binomial proportion, the Poisson distribution, the normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for the Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...
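One of the introductory topics listed, Bayesian inference for a binomial proportion, reduces to a one-line conjugate update of a Beta prior; a minimal sketch with invented data:

```python
# Beta-binomial conjugate update: Beta(alpha, beta) prior on a proportion,
# observe s successes and f failures, posterior is Beta(alpha+s, beta+f).
def beta_posterior(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

# Uniform prior Beta(1, 1); hypothetical data: 7 successes, 3 failures.
a, b = beta_posterior(1, 1, successes=7, failures=3)
posterior_mean = a / (a + b)    # (alpha + s) / (alpha + beta + n)
```

The posterior mean 8/12 sits between the prior mean (1/2) and the sample proportion (7/10), shrinking toward the prior when data are scarce.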
International Nuclear Information System (INIS)
Vullings, R; Bergmans, J W M; Peters, C H L; Hermans, M J M; Wijn, P F F; Oei, S G
2010-01-01
The use of the non-invasively obtained fetal electrocardiogram (ECG) in fetal monitoring is complicated by the low signal-to-noise ratio (SNR) of ECG signals. Even after removal of the predominant interference (i.e. the maternal ECG), the SNR is generally too low for medical diagnostics, and hence additional signal processing is still required. To this end, several methods for exploiting the spatial correlation of multi-channel fetal ECG recordings from the maternal abdomen have been proposed in the literature, of which principal component analysis (PCA) and independent component analysis (ICA) are the most prominent. Both PCA and ICA, however, suffer from the drawback that they are blind source separation (BSS) techniques and as such suboptimal, in that they do not consider a priori knowledge of the abdominal electrode configuration and fetal heart activity. In this paper we propose a source separation technique that is based on the physiology of the fetal heart and on knowledge of the electrode configuration. This technique operates by calculating the spatial fetal vectorcardiogram (VCG) and approximating the VCG for several overlaid heartbeats by an ellipse. By subsequently projecting the VCG onto the long axis of this ellipse, a source signal of the fetal ECG can be obtained. To evaluate the developed technique, its performance is compared to that of both PCA and ICA, and to that of augmented versions of these techniques (aPCA and aICA; PCA and ICA applied to preprocessed signals), in generating a fetal ECG source signal with enhanced SNR that can be used to detect fetal QRS complexes. The evaluation shows that the developed source separation technique performs slightly better than aPCA and aICA, outperforms PCA and ICA, and has the main advantage that, with respect to aPCA/PCA and aICA/ICA, it performs more robustly. This advantage renders it favorable for employment in automated, real-time fetal monitoring applications.
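The ellipse-approximation step can be illustrated in two dimensions: estimate the principal (long) axis of a point cloud from its covariance matrix and project the points onto it. The data below are synthetic, and the paper works with the full spatial VCG rather than this 2-D toy:

```python
import math

# Fit the long axis of a 2-D point cloud (eigenvector of the 2x2 covariance
# matrix) and project each point onto it to obtain a 1-D source signal.
def long_axis(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Angle of the principal eigenvector: tan(2*theta) = 2*sxy / (sxx - syy).
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)

# Synthetic "VCG loop" degenerated to a line of slope 2 for clarity.
points = [(t, 2 * t) for t in (-2, -1, 0, 1, 2)]
ux, uy = long_axis(points)
signal = [x * ux + y * uy for x, y in points]   # projection onto long axis
```

For real loop-shaped VCG data the projection concentrates the QRS energy into one channel, which is what boosts the SNR for QRS detection.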
Variational Bayesian Learning for Wavelet Independent Component Analysis
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
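A toy version of the ICA idea: two independent (sub-Gaussian) sources mixed by an unknown rotation can be recovered by maximizing the non-Gaussianity, here the absolute excess kurtosis, of the unmixed components. This grid search is only a stand-in for the variational Bayesian machinery in the paper, and all signals are synthetic:

```python
import math, random

# Two independent uniform sources, mixed by an unknown rotation angle phi.
rng = random.Random(1)
n = 4000
s1 = [rng.uniform(-1, 1) for _ in range(n)]
s2 = [rng.uniform(-1, 1) for _ in range(n)]

phi = 0.6                                   # mixing angle (unknown to ICA)
x1 = [math.cos(phi) * a - math.sin(phi) * b for a, b in zip(s1, s2)]
x2 = [math.sin(phi) * a + math.cos(phi) * b for a, b in zip(s1, s2)]

def abs_excess_kurtosis(z):
    m = sum(z) / len(z)
    var = sum((v - m) ** 2 for v in z) / len(z)
    k4 = sum((v - m) ** 4 for v in z) / len(z)
    return abs(k4 / var ** 2 - 3.0)

def score(theta):
    """Total non-Gaussianity after un-rotating by theta."""
    c, s = math.cos(theta), math.sin(theta)
    y1 = [c * a + s * b for a, b in zip(x1, x2)]
    y2 = [-s * a + c * b for a, b in zip(x1, x2)]
    return abs_excess_kurtosis(y1) + abs_excess_kurtosis(y2)

# Grid search over one quadrant (rotation has a 90-degree ambiguity).
best = max((score(t / 100), t / 100) for t in range(158))[1]
```

The recovered angle matches the mixing angle up to sampling noise; real ICA algorithms optimize such contrasts with fixed-point or gradient updates instead of a grid.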
International Nuclear Information System (INIS)
Cuna, Stela; Pendall, Elise; Miller, John B.; Tans, Pieter P.; Dlugokencky, Ed; White, James W.C.
2008-01-01
The Danube Delta-Black Sea region of Romania is an important wetland, and this preliminary study evaluates the significance of this region as a source of atmospheric CH4. Measurements of the mixing ratio and δ13C in CH4 are reported from air and water samples collected at eight sites in the Danube Delta. High mixing ratios of CH4 were found in air (2500-14,000 ppb) and dissolved in water samples (∼1-10 μmol L-1), demonstrating that the Danube Delta is an important natural source of CH4. The intercepts on Keeling plots of about -62 per mille show that the main source of CH4 in this region is microbial, probably resulting primarily from acetate fermentation. Atmospheric CH4 and CO data from the NOAA/ESRL (National Oceanic and Atmospheric Administration/Earth System Research Laboratory) were used to make a preliminary estimate of biogenic CH4 at the Black Sea sampling site at Constanta (BSC). These data were used to calculate ratios of CH4/CO in air samples, and using an assumed CH4/CO anthropogenic emissions ratio of 0.6, fossil fuel emissions at BSC were estimated. Biogenic CH4 emissions were then estimated by a simple mass balance approach. Keeling plots of well-mixed air from the BSC site suggested a stronger wetland source in summer and a stronger fossil fuel source in winter.
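The Keeling-plot step used above, in which the source's δ13C signature is read off as the intercept of δ13C regressed on 1/[CH4], can be illustrated with synthetic numbers; the background values and added mixing ratios below are assumptions chosen only to echo the study's ranges.

```python
import numpy as np

# Hypothetical Keeling-plot illustration: the d13C signature of the source
# is the intercept of a linear fit of d13C against 1/[CH4].
background_ch4 = 1850.0          # ppb, assumed background mixing ratio
background_d13c = -47.0          # per mille, assumed background signature
source_d13c = -62.0              # per mille, microbial source (from the study)

# Synthetic samples: background air plus varying source contributions
added = np.array([200.0, 650.0, 1500.0, 4000.0, 9000.0])   # ppb of source CH4
ch4 = background_ch4 + added
d13c = (background_ch4 * background_d13c + added * source_d13c) / ch4

# Keeling regression: d13C = intercept + slope * (1/[CH4])
slope, intercept = np.polyfit(1.0 / ch4, d13c, 1)
print(round(intercept, 1))   # recovers the source signature, about -62.0
```

Because the two-member mixing model is exactly linear in 1/[CH4], the fitted intercept reproduces the assumed source signature.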
Gaussian process based independent analysis for temporal source separation in fMRI
DEFF Research Database (Denmark)
Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole
2017-01-01
Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain, and opens up for analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up, restricts...... the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clear classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals by failing to incorporate its...
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundations, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: a new chapter on Bayesian network classifiers and a new section on object-oriente
Bayesian artificial intelligence
Korb, Kevin B
2003-01-01
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian net technology and Bayesian net learning technology and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
Time-domain beamforming and blind source separation speech input in the car environment
Bourgeois, Julien
2009-01-01
The development of computer and telecommunication technologies led to a revolution in the way that people work and communicate with each other. One of the results is that large amounts of information will increasingly be held in a form that is natural for users, as speech in natural language. In the presented work, we investigate the speech signal capture problem, which includes the separation of multiple interfering speakers using microphone arrays. Adaptive beamforming is a classical approach which has been developed since the seventies. However, it requires a double-talk detector (DTD) that interrupts th
Separation of blended impulsive sources using an iterative estimation-subtraction algorithm
Doulgeris, P.; Mahdad, A.; Blacquière, G.
2010-01-01
Traditional data acquisition practice dictates the existence of sufficient time intervals between the firing of sequential impulsive sources in the field. However, much attention has been drawn recently to the possibility of shooting in an overlapping fashion. Numerous publications have addressed
Ion-source dependence of the distributions of internuclear separations in 2-MeV HeH+ beams
International Nuclear Information System (INIS)
Kanter, E.P.; Gemmell, D.S.; Plesser, I.; Vager, Z.
1981-01-01
Experiments involving the use of MeV molecular-ion beams have yielded new information on atomic collisions in solids. A central part of the analyses of such experiments is a knowledge of the distribution of internuclear separations contained in the incident beam. In an attempt to determine how these distributions depend on ion-source gas conditions, we have studied foil-induced dissociations of H2+, H3+, HeH+, and OH2+ ions. Although changes of ion-source gas composition and pressure were found to have no measurable influence on the vibrational state populations of the beams reaching our target, for HeH+ we found that beams produced in our rf source were vibrationally hotter than beams produced in a duoplasmatron. This was also seen in studies of neutral fragments and transmitted molecules.
International Nuclear Information System (INIS)
Duesterhoeft, H.; Pippig, R.
1986-01-01
An alkali-metal ion source working without a store of alkali metals is described. The alkali-metal ions are produced by evaporation of alkali salts and ionization in a low-voltage arc discharge stabilized with a noble-gas plasma or, in the case of small alkali-metal ion currents, on the basis of well-known thermal ionization at a hot tungsten wire. The source is very simple in construction and produces a stable ion current of 0.3 μA for more than 100 h. It is possible to change the ion species in a short time. This source is applicable to all SIMS equipment using mass separation for primary ions. (author)
Wolgemuth, D. J.; Gizang-Ginsberg, E.; Engelmyer, E.; Gavin, B. J.; Ponzetto, C.
1985-01-01
The use of a self-contained unit-gravity cell separation apparatus for separation of populations of mouse testicular cells is described. The apparatus, a Celsep (TM), maximizes the unit area over which sedimentation occurs, reduces the amount of separation medium employed, and is quite reproducible. Cells thus isolated have been good sources for isolation of DNA, and notably, high molecular weight RNA.
Directory of Open Access Journals (Sweden)
Kellermann Walter
2007-01-01
Full Text Available We address the problem of underdetermined BSS. While most previous approaches are designed for instantaneous mixtures, we propose a time-frequency-domain algorithm for convolutive mixtures. We adopt a two-step method based on a general maximum a posteriori (MAP) approach. In the first step, we estimate the mixing matrix based on hierarchical clustering, assuming that the source signals are sufficiently sparse. The algorithm works directly on the complex-valued data in the time-frequency domain and shows better convergence than algorithms based on self-organizing maps. The assumption of Laplacian priors for the source signals in the second step leads to an algorithm for estimating the source signals. It involves the ℓ1-norm minimization of complex numbers because of the use of the time-frequency-domain approach. We compare a combinatorial approach initially designed for real numbers with a second-order cone programming (SOCP) approach designed for complex numbers. We found that although the former approach is not theoretically justified for complex numbers, its results are comparable to, or even better than, the SOCP solution. The advantage is a lower computational cost for problems with low input/output dimensions.
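A minimal real-valued sketch of the second step (the combinatorial/ℓ1 side, not the complex-valued SOCP variant): with the mixing matrix known, each source vector is recovered by ℓ1-norm minimization subject to the mixing constraint, cast as a linear program. The matrix and source values below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed real-valued sketch: with mixing matrix A known, recover a sparse
# source vector at one time-frequency point by min ||s||_1 s.t. A s = x
# (the estimator implied by a Laplacian prior on the sources).
A = np.array([[1.0, 0.7, 0.3],
              [0.2, 0.8, 1.0]])      # 2 mixtures, 3 sources (underdetermined)
s_true = np.array([0.0, 1.5, 0.0])   # sparse source vector
x = A @ s_true

# LP formulation: s = u - v with u, v >= 0, minimize sum(u + v)
n = A.shape[1]
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=x,
              bounds=[(0, None)] * (2 * n))
s_hat = res.x[:n] - res.x[n:]
print(np.round(s_hat, 3))   # sparse solution close to s_true
```

The split into nonnegative parts u and v is the standard trick for turning an ℓ1 objective into a linear program.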
Yuan, Ying; MacKinnon, David P.
2009-01-01
This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Non-parametric Bayesian models of response function in dynamic image sequences
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav
2016-01-01
Roč. 151, č. 1 (2016), s. 90-100 ISSN 1077-3142 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Response function * Blind source separation * Dynamic medical imaging * Probabilistic models * Bayesian methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.498, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/tichy-0456983.pdf
Quality assurance of nuclear analytical techniques based on Bayesian characteristic limits
International Nuclear Information System (INIS)
Michel, R.
2000-01-01
Based on Bayesian statistics, characteristic limits such as decision threshold, detection limit and confidence limits can be calculated taking into account all sources of experimental uncertainties. This approach separates the complete evaluation of a measurement according to the ISO Guide to the Expression of Uncertainty in Measurement from the determination of the characteristic limits. Using the principle of maximum entropy the characteristic limits are determined from the complete standard uncertainty of the measurand. (author)
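As a rough sketch of the idea, assuming a simple gross/background counting model (not the paper's full ISO-uncertainty treatment): the decision threshold follows from the standard uncertainty of the net signal at a true value of zero, and the detection limit from a fixed-point iteration on the uncertainty of the measurand.

```python
import math

# Assumed toy model (gross/background counting), for illustration only:
# characteristic limits for a net count rate in the spirit of the abstract.
n_b, t_b, t_g = 400, 1000.0, 1000.0   # background counts, count times in s
k = 1.645                              # quantile for alpha = beta = 0.05

# Standard uncertainty of the net count rate when the true value is zero
u0 = math.sqrt(n_b / t_b ** 2 + n_b / (t_b * t_g))
decision_threshold = k * u0            # y*: decide "detected" above this

# Detection limit y#: smallest true value reliably detected,
# solved by fixed-point iteration y# = y* + k * u(y#)
y = decision_threshold
for _ in range(50):
    u_y = math.sqrt(y / t_g + n_b / t_b ** 2 + n_b / (t_b * t_g))
    y = decision_threshold + k * u_y

print(round(decision_threshold, 4), round(y, 4))
```

The iteration converges quickly because the uncertainty grows slowly with the true value; the detection limit always exceeds the decision threshold.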
Marsman, M.; Wagenmakers, E.-J.
2017-01-01
We illustrate the Bayesian approach to data analysis using the newly developed statistical software program JASP. With JASP, researchers are able to take advantage of the benefits that the Bayesian framework has to offer in terms of parameter estimation and hypothesis testing. The Bayesian
Directory of Open Access Journals (Sweden)
Gary E Strangman
Full Text Available Understanding the spatial and depth sensitivity of non-invasive near-infrared spectroscopy (NIRS) measurements to brain tissue, i.e., near-infrared neuromonitoring (NIN), is essential for designing experiments as well as interpreting research findings. However, a thorough characterization of such sensitivity in realistic head models has remained unavailable. In this study, we conducted 3,555 Monte Carlo (MC) simulations to densely cover the scalp of a well-characterized, adult male template brain (Colin27). We sought to evaluate: (i) the spatial sensitivity profile of NIRS to brain tissue as a function of source-detector separation, (ii) the NIRS sensitivity to brain tissue as a function of depth in this realistic and complex head model, and (iii) the effect of NIRS instrument sensitivity on detecting brain activation. We found that increasing the source-detector (SD) separation from 20 to 65 mm provides monotonic increases in sensitivity to brain tissue. For every 10 mm increase in SD separation (up to ~45 mm), sensitivity to gray matter increased an additional 4%. Our analyses also demonstrate that sensitivity in depth (S) decreases exponentially, with a "rule-of-thumb" formula S = 0.75 × 0.85^depth. Thus, while the depth sensitivity of NIRS is not strictly limited, NIN signals in adult humans are strongly biased towards the outermost 10-15 mm of intracranial space. These general results, along with the detailed quantitation of sensitivity estimates around the head, can provide detailed guidance for interpreting the likely sources of NIRS signals, as well as help NIRS investigators design and plan better NIRS experiments, head probes and instruments.
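The rule-of-thumb formula can be tabulated directly to see the depth bias the authors report (the abstract does not state the depth unit; mm is assumed here, consistent with the mm-scale separations discussed):

```python
# Tabulate the paper's rule-of-thumb depth sensitivity, S = 0.75 * 0.85**depth
# (depth assumed to be in mm), to show the bias toward shallow tissue.
sensitivity = {d: 0.75 * 0.85 ** d for d in (0, 5, 10, 15, 20)}
for depth, s in sensitivity.items():
    print(depth, round(s, 4))
```

Sensitivity falls below 10% of its surface value within roughly 15 mm, matching the abstract's statement about the outermost 10-15 mm of intracranial space.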
Directory of Open Access Journals (Sweden)
Goto Masataka
2010-01-01
Full Text Available We describe a novel query-by-example (QBE) approach in music information retrieval that allows a user to customize query examples by directly modifying the volume of different instrument parts. The underlying hypothesis of this approach is that the musical mood of retrieved results changes in relation to the volume balance of different instruments. On the basis of this hypothesis, we aim to clarify the relationship between the change in the volume balance of a query and the genre of the retrieved pieces, called genre classification shift. Such an understanding would allow us to instruct users in how to generate alternative queries without finding other appropriate pieces. Our QBE system first separates all instrument parts from the audio signal of a piece with the help of its musical score, and then it allows users to remix these parts to change the acoustic features that represent the musical mood of the piece. Experimental results showed that the genre classification shift was actually caused by the volume change in the vocal, guitar, and drum parts.
Investigation of A = 152 radioactivities with mass-separated sources: Identification of 152Lu
International Nuclear Information System (INIS)
Toth, K.S.; Sousa, D.C.; Nitschke, J.M.; Wilmarth, P.A.
1987-01-01
Nuclides with A = 152 were produced in 58Ni bombardments of 96Ru and their decay properties were investigated following on-line mass separation. The isotope 152Lu (T1/2 = 0.7 ± 0.1 s) was identified by γ rays in its β-decay daughter, 152Yb. Based on its decay characteristics, the parent state has a probable spin and parity assignment of (4,5,6)-. Several new transitions were observed to follow the β decays of 152Yb and the 152Tm low-spin isomer; they established previously unknown levels in both 152Tm and 152Er. The additional γ rays in 152Yb decay reduce from 100% to 88% the direct feeding to the one excited state in 152Tm that had been known earlier. Nevertheless, the corresponding log ft value is calculated to be 3.5, indicating that this is an allowed β transition which connects the 0+ parent with a 1+ excited state in 152Tm. By comparing the β-decay rates of 148Dy and 152Ho, and the α- and β-decay rates of 152Er, an α branch of (90 ± 4)% was deduced for 152Er.
Noise-Source Separation Using Internal and Far-Field Sensors for a Full-Scale Turbofan Engine
Hultgren, Lennart S.; Miles, Jeffrey H.
2009-01-01
Noise-source separation techniques for the extraction of the sub-dominant combustion noise from the total noise signatures obtained in static-engine tests are described. Three methods are applied to data from a static, full-scale engine test. Both 1/3-octave and narrow-band results are discussed. The results are used to assess the combustion-noise prediction capability of the Aircraft Noise Prediction Program (ANOPP). A new additional phase-angle-based discriminator for the three-signal method is also introduced.
APPLICATION OF COMPENSATION METHOD FOR SEPARATION USEFUL SIGNAL AND INTERFERENCE FROM NEARBY SOURCES
Directory of Open Access Journals (Sweden)
2016-01-01
Full Text Available Based on a comparative analysis of the known methods of noise suppression, the ratios for the angular measurements are obtained. A mathematical model experiment to estimate the dependence of measurement error on the relative position of interference source and useful signal has been conducted. The modified method of interference compensation is tested experimentally. The analysis of obtained angular measurements for the considered methods shows that the modified method of compensation allows obtaining more precise estimates. The analyzed methods allow considerable elimination of the useful signal from the antenna additional channel, which reduces errors of angular misalignment. It is not always possible to determine the degree of the radar error analytically, and in the future the effectiveness of various methods of interference compensation is expected to be compared by means of mathematical modeling of a closed radar contour.
The separation of control variables in an H- ion source
International Nuclear Information System (INIS)
Bowling, P.S.; Brown, S.K.
1988-01-01
This paper describes a successful methodology which was used to classify a series of waveforms taken from a 100 mA H- ion source at Los Alamos. The series of 260 waveforms was divided into a ''training'' set and a ''test'' set. A sequence of mathematical transformations was performed on the ''training'' waveform data, which were then subjected to discriminant analysis. The analysis generates a set of filters which allow classification of an unknown waveform in the ''test'' set as being either stable or unstable; if stable, whether optimal or not; and if not optimal, which of the six control parameters should be adjusted to bring it to an optimal condition. We have found that the probability of successful classification using this methodology is 91.5%. 3 refs., 4 figs., 2 tabs
Silva, Jorge; Chau, Tom
2005-09-01
Recent advances in sensor technology for muscle activity monitoring have resulted in the development of a coupled microphone-accelerometer sensor pair for physiological acoustic signal recording. This sensor can be used to eliminate interfering sources in practical settings where the contamination of an acoustic signal by ambient noise confounds detection but cannot be easily removed [e.g., mechanomyography (MMG), swallowing sounds, respiration, and heart sounds]. This paper presents a mathematical model for the coupled microphone-accelerometer vibration sensor pair, specifically applied to muscle activity monitoring (i.e., MMG) and noise discrimination in externally powered prostheses for below-elbow amputees. While the model provides a simple and reliable source separation technique for MMG signals, it can also be easily adapted to other applications where the recording of low-frequency (< 1 kHz) physiological vibration signals is required.
Laassiri, M.; Hamzaoui, E.-M.; Cherkaoui El Moursli, R.
2018-02-01
Inside nuclear reactors, gamma-rays emitted from nuclei together with the neutrons introduce unwanted backgrounds in neutron spectra. For this reason, powerful extraction methods are needed to extract the useful neutron signal from the recorded mixture and thus to obtain a clearer neutron flux spectrum. Several techniques have been developed to discriminate between neutrons and gamma-rays in a mixed radiation field. Most of these techniques tackle the problem using analogue discrimination methods; others propose the use of organic scintillators to achieve the discrimination task. Recently, systems based on digital signal processors have become commercially available to replace the analog systems. As an alternative to these systems, we aim in this work to verify the feasibility of using Nonnegative Tensor Factorization (NTF) to blindly extract the neutron component from mixture signals recorded at the output of a fission chamber (WL-7657). The latter has been simulated through Geant4 linked to Garfield++ using a 252Cf neutron source. To achieve the best possible neutron-gamma discrimination, we have applied two different NTF algorithms, which have been found to be the best methods for analysing this kind of nuclear data.
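As a hedged sketch of the blind-extraction idea, the matrix analogue of NTF, nonnegative matrix factorization with multiplicative updates, is shown below on synthetic spectra; the peak shapes and mixing weights are assumptions, not the paper's simulated fission-chamber data.

```python
import numpy as np

# Assumed illustration: NMF (the matrix analogue of NTF) with multiplicative
# updates, separating a synthetic "neutron" peak from a synthetic "gamma"
# background present in two mixed spectra.
rng = np.random.default_rng(0)
bins = np.arange(100)
neutron = np.exp(-0.5 * ((bins - 70) / 5.0) ** 2)   # localized peak
gamma = np.exp(-bins / 30.0)                        # decaying background
S = np.vstack([neutron, gamma])                     # 2 x 100 source spectra
A = np.array([[1.0, 0.8],
              [0.3, 1.0]])                          # assumed mixing weights
V = A @ S + 1e-6                                    # observed nonneg mixtures

# Multiplicative-update NMF: V ~ W H with W, H >= 0
W = rng.random((2, 2)) + 0.1
H = rng.random((2, 100)) + 0.1
for _ in range(1000):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(float(rel_err), 4))   # two nonnegative components explain V
```

The nonnegativity constraints are what give the factors a physical reading as component spectra and mixing weights, which is the property the NTF approach exploits.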
Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal
2016-10-01
In this study, we investigate the usability of sound recognition for source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impacts, free falling, pneumatic hitting and hydraulic crushing, were separately recorded using two different microphones. To classify the waste types and sizes based on sound features of the wastes, support vector machine (SVM)- and hidden Markov model (HMM)-based sound classification systems were developed. In the basic experimental setup, in which only the free-falling impact type was considered, the SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup, which includes all three impact types, material type classification accuracies were 96.5% for the dynamic microphone and 97.7% for the condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for the microphones. The modeling studies indicated that the hydraulic crushing impact recordings were very noisy for an effective sound recognition application. In the detailed analysis of the recognition errors, it was observed that most of the errors occurred in the hitting impact type. According to the experimental results, it can be said that the proposed novel approach for the separation of packaging wastes could provide a high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
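The SVM part of the pipeline can be sketched with synthetic feature clusters standing in for real impact-sound features; everything below (the 2-D features, class centers, noise level) is invented for illustration, and the study additionally used an HMM system on recorded sounds.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative only: an SVM separating waste-material classes from assumed
# 2-D summary features of their impact sounds.
rng = np.random.default_rng(1)

def fake_features(center, n):
    # Assumed synthetic feature cluster around a class center
    return np.asarray(center) + 0.05 * rng.standard_normal((n, 2))

X = np.vstack([fake_features([0.2, 0.8], 30),   # "glass"
               fake_features([0.6, 0.3], 30),   # "plastic"
               fake_features([0.9, 0.7], 30)])  # "metal"
y = np.repeat([0, 1, 2], 30)

clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)
print(acc)   # well-separated synthetic clusters classify near-perfectly
```

In the real system the features would come from the recorded waveforms (and accuracy would be judged on held-out data), but the classification mechanics are the same.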
Yuan, B.; Coggon, M.; Koss, A.; Warneke, C.; Eilerman, S. J.; Neuman, J. A.; Peischl, J.; Aikin, K. C.; Ryerson, T. B.; De Gouw, J. A.
2016-12-01
Concentrated animal feeding operations (CAFOs) are important sources of volatile organic compounds (VOCs) in the atmosphere. We used a hydronium ion time-of-flight chemical ionization mass spectrometer (H3O+ ToF-CIMS) to measure VOC emissions from CAFOs in the Northern Front Range of Colorado during an aircraft campaign (SONGNEX) for regional contributions and from mobile laboratory sampling for chemical characterization of individual animal feedlots. The main VOCs emitted from CAFOs include carboxylic acids, alcohols, carbonyls, phenolic species, and sulfur- and nitrogen-containing species. Alcohols and carboxylic acids dominate VOC concentrations. Sulfur-containing and phenolic species become more important in terms of odor activity values and NO3 reactivity, respectively. The high time-resolution mobile measurements allow the separation of the sources of VOCs from different parts of the operations occurring within the facilities. We show that increases in ethanol concentrations were primarily associated with feed storage and handling. We apply a multivariate regression analysis using NH3 and ethanol as tracers to attribute the relative importance of animal-related emissions (animal exhalation and waste) and feed-related emissions (feed storage and handling) for different VOC species. Feed storage and handling contribute significantly to emissions of alcohols, carbonyls and carboxylic acids. Phenolic species and nitrogen-containing species are predominantly associated with animals and their waste. VOC ratios can potentially be used as indicators for the separation of emissions from dairy and beef cattle in the regional aircraft measurements.
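The tracer-regression attribution step can be sketched as an ordinary least-squares fit of a VOC time series on the two tracers; the coefficients, concentration ranges and noise level below are invented for illustration only.

```python
import numpy as np

# Hypothetical sketch of the tracer regression: a VOC is modeled as a linear
# combination of NH3 (animal-related tracer) and ethanol (feed-related
# tracer); the fitted coefficients apportion the two source categories.
rng = np.random.default_rng(2)
nh3 = rng.uniform(0, 50, 200)        # arbitrary units, assumed time series
ethanol = rng.uniform(0, 20, 200)
voc = 0.4 * nh3 + 1.2 * ethanol + 0.1 * rng.standard_normal(200)

X = np.column_stack([nh3, ethanol])
coef, *_ = np.linalg.lstsq(X, voc, rcond=None)
print(np.round(coef, 2))   # recovers roughly [0.4, 1.2]
```

The relative sizes of the two fitted coefficients (scaled by the tracer concentrations) give the animal- versus feed-related shares of that VOC's emissions.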
Bayesian nonparametric hierarchical modeling.
Dunson, David B
2009-04-01
In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
DEFF Research Database (Denmark)
Naroznova, Irina; Møller, Jacob; Scheutz, Charlotte
2016-01-01
This study compared the environmental profiles of anaerobic digestion (AD) and incineration, in relation to global warming potential (GWP), for treating individual material fractions that may occur in source-separated organic household waste (SSOHW). Different framework conditions representative...
DEFF Research Database (Denmark)
Naroznova, Irina; Møller, Jacob; Larsen, Bjarne
2016-01-01
A new technology for pre-treating source-separated organic household waste prior to anaerobic digestion was assessed, and its performance was compared to existing alternative pre-treatment technologies. This pre-treatment technology is based on waste pulping with water, using a specially developed...... screw mechanism. The pre-treatment technology rejects more than 95% (wet weight) of non-biodegradable impurities in waste collected from households and generates biopulp ready for anaerobic digestion. Overall, 84-99% of biodegradable material (on a dry weight basis) in the waste was recovered...... in the biopulp. The biochemical methane potential for the biopulp was 469±7mL CH4/g ash-free mass. Moreover, all Danish and European Union requirements regarding the content of hazardous substances in biomass intended for land application were fulfilled. Compared to other pre-treatment alternatives, the screw...
Zeeman, Grietje; Kujawa, Katarzyna; de Mes, Titia; Hernandez, Lucia; de Graaff, Marthe; Abu-Ghunmi, Lina; Mels, Adriaan; Meulman, Brendo; Temmink, Hardy; Buisman, Cees; van Lier, Jules; Lettinga, Gatze
2008-01-01
Based on results of pilot-scale research with source-separated black water (BW) and grey water (GW), a new sanitation concept is proposed. BW and GW are both treated in a UASB (-septic tank) for recovery of CH4 gas. Kitchen waste is added to the anaerobic BW treatment, doubling the biogas production. Post-treatment of the effluent provides recovery of phosphorus and removal of remaining COD and nitrogen. The total energy saving of the new sanitation concept amounts to 200 MJ/year in comparison with conventional sanitation; moreover, 0.14 kg P/p/year and 90 litres of potentially reusable water are produced. (c) IWA Publishing 2008.
DEFF Research Database (Denmark)
Naroznova, Irina; Møller, Jacob; Scheutz, Charlotte
2016-01-01
This study is dedicated to characterising the chemical composition and biochemical methane potential (BMP) of individual material fractions in untreated Danish source-separated organic household waste (SSOHW). First, data on SSOHW in different countries, available in the literature, were evaluated...... and then, secondly, laboratory analyses for eight organic material fractions comprising Danish SSOHW were conducted. No data were found in the literature that fully covered the objectives of the present study. Based on laboratory analyses, all fractions were assigned according to their specific properties......) and material degradability (BMP from laboratory incubation tests divided by TBMP) were expressed. Moreover, the degradability of lignocellulose biofibres (the share of volatile lignocellulose biofibre solids degraded in laboratory incubation tests) was calculated. Finally, BMP for average SSOHW composition...
Energy Technology Data Exchange (ETDEWEB)
1978-01-01
This guide is intended to serve as a manual for organizing and managing office waste paper recovery programs in Canadian federal buildings. Waste paper generated in such buildings is of particular interest for recycling as it is produced in sufficiently large amounts, and contains large amounts of high-grade waste paper which obtain good prices from paper mills. The key to successful recovery of such paper is separation, at the source of waste generation, from other less-valuable papers and non-paper materials. In recommending ways to do this, the manual covers assessment of the viability of a collection program in a particular building, estimating the quantities of waste generated, calculating storage space necessary, marketing the paper collected, using proper collection and storage containers, promoting employee awareness, and administering and monitoring the program. A sample cost-benefit analysis is given for a general office building with 1,000 employees. Includes glossary. 14 refs., 10 figs., 5 tabs.
Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping
2017-06-01
It is difficult to extract feature information from the vibration signals of space bearings because of several types of noise: the running-trend component, high-frequency noise and, especially, strong power line interference (50 Hz) and its octave components, present in ground-based space-simulation equipment. This article proposes a combined method to eliminate them. Firstly, EMD is used to remove the running-trend component of the signal, eliminating the trend that degrades signal-processing accuracy. Then a morphological filter is used to eliminate high-frequency noise. Finally, the components and characteristics of the power line interference are analysed and, based on these characteristics, a revised blind source separation model is used to remove the power line interference. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate this noise.
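A minimal stand-in for the final step (removal of the 50 Hz interference and its octave components) is a least-squares fit of sinusoids at the known mains frequencies; the fault frequency and amplitudes below are assumptions, and the paper's actual method is a revised blind source separation model, not this fit.

```python
import numpy as np

# Assumed illustration: a bearing-fault tone at 37 Hz buried under 50 Hz
# interference and its harmonics, removed by least-squares fitting of
# sinusoids at the known mains frequencies.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = 0.5 * np.sin(2 * np.pi * 37 * t)                      # "fault" tone
mains = sum(np.sin(2 * np.pi * f * t + 0.3) for f in (50, 100, 150))
x = signal + mains

# Design matrix of sines/cosines at 50 Hz and its octave components
cols = []
for f in (50, 100, 150):
    cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
B = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(B, x, rcond=None)
cleaned = x - B @ coef

resid = float(np.max(np.abs(cleaned - signal)))
print(resid)   # interference removed, fault tone preserved
```

This works here because the mains components lie exactly in the span of the fitted sinusoids while the fault tone is orthogonal to them over the one-second window; the blind source separation approach is needed when frequencies and phases drift.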
Directory of Open Access Journals (Sweden)
Duarte L.T.
2014-03-01
Full Text Available The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISEs). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that have never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopted a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires batch operation and thus can be used to perform off-line analysis. The validity of our approach is supported by experiments where the mixing model parameters were extracted from actual data.
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
Multisnapshot Sparse Bayesian Learning for DOA
DEFF Research Database (Denmark)
Gerstoft, Peter; Mecklenbrauker, Christoph F.; Xenaki, Angeliki
2016-01-01
The directions of arrival (DOA) of plane waves are estimated from multisnapshot sensor array data using sparse Bayesian learning (SBL). The prior for the source amplitudes is assumed independent zero-mean complex Gaussian distributed with hyperparameters, the unknown variances (i.e., the source...
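A compact sketch of multisnapshot SBL for DOA estimation: EM updates of the per-direction variance hyperparameters on a half-wavelength uniform linear array, with the noise variance assumed known for simplicity. The array size, grid, and source directions are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

M, L = 20, 50                     # sensors (half-wavelength ULA), snapshots
grid = np.arange(-90, 91, 2)      # DOA search grid in degrees
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(grid))))

# Two sources at -20 and 36 degrees (both on the grid), high SNR.
true_idx = [np.argmin(np.abs(grid - d)) for d in (-20, 36)]
X = np.zeros((grid.size, L), dtype=complex)
X[true_idx] = rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))
sigma2 = 0.01                     # noise variance, assumed known here
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, L))
                               + 1j * rng.standard_normal((M, L)))
Y = A @ X + noise

# EM updates for the per-direction variance hyperparameters gamma:
# source amplitudes are zero-mean complex Gaussian with variance gamma.
gamma = np.ones(grid.size)
for _ in range(100):
    B = A * gamma                                   # A @ diag(gamma)
    Sigma_y = sigma2 * np.eye(M) + B @ A.conj().T   # model data covariance
    K = np.linalg.solve(Sigma_y, B).conj().T        # diag(gamma) A^H Sigma_y^-1
    mu = K @ Y                                      # posterior mean amplitudes
    post_var = gamma - np.real(np.sum(K * B.T, axis=1))
    gamma = np.mean(np.abs(mu) ** 2, axis=1) + post_var

est = np.sort(grid[np.argsort(gamma)[-2:]])
print(est)  # peaks of gamma should sit near the true DOAs
```

Sparsity emerges because the evidence maximization drives most `gamma` entries toward zero, leaving peaks only at active directions.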
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
Kleibergen, F.R.; Kleijn, R.; Paap, R.
2000-01-01
We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Zöllig, Hanspeter; Fritzsche, Cristina; Morgenroth, Eberhard; Udert, Kai M
2015-02-01
Electrolysis can be a viable technology for ammonia removal from source-separated urine. Compared to biological nitrogen removal, electrolysis is more robust and is highly amenable to automation, which makes it especially attractive for on-site reactors. In electrolytic wastewater treatment, ammonia is usually removed by indirect oxidation through active chlorine which is produced in-situ at elevated anode potentials. However, the evolution of chlorine can lead to the formation of chlorate, perchlorate, chlorinated organic by-products and chloramines that are toxic. This study focuses on using direct ammonia oxidation on graphite at low anode potentials in order to overcome the formation of toxic by-products. With the aid of cyclic voltammetry, we demonstrated that graphite is active for direct ammonia oxidation without concomitant chlorine formation if the anode potential is between 1.1 and 1.6 V vs. SHE (standard hydrogen electrode). A comparison of potentiostatic bulk electrolysis experiments in synthetic stored urine with and without chloride confirmed that ammonia was removed exclusively by continuous direct oxidation. Direct oxidation required high pH values (pH > 9) because free ammonia was the actual reactant. In real stored urine (pH = 9.0), an ammonia removal rate of 2.9 ± 0.3 gN·m(-2)·d(-1) was achieved and the specific energy demand was 42 Wh·gN(-1) at an anode potential of 1.31 V vs. SHE. The measurements of chlorate and perchlorate as well as selected chlorinated organic by-products confirmed that no chlorinated by-products were formed in real urine. Electrode corrosion through graphite exfoliation was prevented and the surface was not poisoned by intermediate oxidation products. We conclude that direct ammonia oxidation on graphite electrodes is a treatment option for source-separated urine with three major advantages: The formation of chlorinated by-products is prevented, less energy is consumed than in indirect ammonia oxidation and
Dong, Jun; Ni, Mingjiang; Chi, Yong; Zou, Daoan; Fu, Chao
2013-08-01
In China, the continuously increasing amount of municipal solid waste (MSW) has resulted in an urgent need to change the current municipal solid waste management (MSWM) system, which is based on mixed collection. A pilot program focusing on source-separated MSW collection was thus launched in 2010 in Hangzhou, China, to lessen the related environmental loads, with greenhouse gas (GHG) emissions (Kyoto Protocol) singled out in particular. This paper uses life cycle assessment modeling to evaluate the potential environmental improvement with regard to GHG emissions. The pre-existing MSWM system is assessed as the baseline, against which the source separation scenario is compared. Results show that source-separated collection can decrease GHG emissions by 23% compared with the base scenario. In addition, composting and anaerobic digestion (AD) are suggested for further optimizing the management of food waste. Food waste landfill, composting, and AD emit 260.79, 82.21, and -86.21 thousand tonnes of GHG, respectively, demonstrating the emission reduction potential of advanced food waste treatment technologies. Accordingly, a modified MSWM system is proposed with AD as the food waste treatment option, saving an additional 44% of GHG emissions relative to the current source separation scenario. Moreover, a preliminary economic assessment is implemented, demonstrating that both source separation scenarios have good cost reduction potential compared with mixed collection, with the proposed new system being the most cost-effective.
Isomer separation of 70gCu and 70mCu with a resonance ionization laser ion source
Köster, U; Mishin, V I; Weissman, L; Huyse, M; Kruglov, K; Müller, W F; Van Duppen, P; Van Roosbroeck, J; Thirolf, P G; Thomas, H C; Weisshaar, D W; Schulze, W; Borcea, R; La Commara, M; Schatz, H; Schmidt, K; Röttger, S; Huber, G; Sebastian, V; Kratz, K L; Catherall, R; Georg, U; Lettry, Jacques; Oinonen, M; Ravn, H L; Simon, H
2000-01-01
Radioactive copper isotopes were ionized with the resonance ionization laser ion source at the on-line isotope separator ISOLDE (CERN). Using the different hyperfine structure in the 3d¹⁰4s ²S₁/₂ - 3d¹⁰4p ²P°₁/₂ transition, the low- and high-spin isomers of ⁷⁰Cu were selectively enhanced by tuning the laser wavelength. The light was provided by a narrow-bandwidth dye laser pumped by copper vapor lasers and frequency doubled in a BBO crystal. The ground-state to isomeric-state intensity ratio could be varied by a factor of 30, allowing gamma transitions to be assigned unambiguously to the decay of the individual isomers. It is shown that the method can also be used to determine magnetic moments. In a first experiment, a magnetic moment of (+)1.8(3) μ_N was deduced for the 1⁺ ground state of ⁷⁰Cu, and (±)1.2(3) μ_N for the high-spin isomer. (20 refs).
Graf, John; Taylor, Dale; Martinez, James
2014-01-01
Combined with a mechanical compressor, a Solid Electrolyte Oxygen Separator (SEOS) should be capable of producing ABO-grade oxygen at pressures >2400 psia on the space station. Feasibility tests using a SEOS integrated with a mechanical compressor identified an unexpected contaminant in the oxygen: water vapour was found in the oxygen product, sometimes at concentrations higher than 40 ppm (the ABO limit for water vapour is 7 ppm). If solid electrolyte membranes are really "infinitely selective" to oxygen, as they are reported to be, where did the water come from? If water is getting into the oxygen, what other contaminants might get in? Microscopic analyses of wafers, welds, and oxygen delivery tubes were performed in an attempt to find the source of the water vapour contamination, and hot and cold pressure decay tests were performed. Measurements of water vapour as a function of O2 delivery rate, O2 delivery pressure, and process air humidity levels were the most instructive in finding the source of the contamination (Fig 3). Water contamination was directly affected by oxygen delivery rate (doubling the oxygen production rate cut the water level in half), and it was affected by process air humidity levels and delivery pressure in a way that indicates the water was diffusing into the oxygen delivery system.
Bayesian data analysis for newcomers.
Kruschke, John K; Liddell, Torrin M
2018-02-01
This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
Bayesian methods for data analysis
Carlin, Bradley P.
2009-01-01
Approaches for statistical inference: Introduction; Motivating Vignettes; Defining the Approaches; The Bayes-Frequentist Controversy; Some Basic Bayesian Models. The Bayes approach: Introduction; Prior Distributions; Bayesian Inference; Hierarchical Modeling; Model Assessment; Nonparametric Methods. Bayesian computation: Introduction; Asymptotic Methods; Noniterative Monte Carlo Methods; Markov Chain Monte Carlo Methods. Model criticism and selection: Bayesian Modeling; Bayesian Robustness; Model Assessment; Bayes Factors via Marginal Density Estimation; Bayes Factors
Yuan, Bin; Coggon, Matthew M.; Koss, Abigail R.; Warneke, Carsten; Eilerman, Scott; Peischl, Jeff; Aikin, Kenneth C.; Ryerson, Thomas B.; de Gouw, Joost A.
2017-04-01
Concentrated animal feeding operations (CAFOs) emit a large number of volatile organic compounds (VOCs) to the atmosphere. In this study, we conducted mobile laboratory measurements of VOCs, methane (CH4) and ammonia (NH3) downwind of dairy cattle, beef cattle, sheep and chicken CAFO facilities in northeastern Colorado using a hydronium ion time-of-flight chemical-ionization mass spectrometer (H3O+ ToF-CIMS), which can detect numerous VOCs. Regional measurements of CAFO emissions in northeastern Colorado were also performed using the NOAA WP-3D aircraft during the Shale Oil and Natural Gas Nexus (SONGNEX) campaign. Alcohols and carboxylic acids dominate VOC concentrations and the reactivity of the VOCs with hydroxyl (OH) radicals. Sulfur-containing and phenolic species provide the largest contributions to the odor activity values and the nitrate radical (NO3) reactivity of VOC emissions, respectively. VOC compositions determined from mobile laboratory and aircraft measurements generally agree well with each other. The high time-resolution mobile measurements allow for the separation of the sources of VOCs from different parts of the operations occurring within the facilities. We show that the emissions of ethanol are primarily associated with feed storage and handling. Based on mobile laboratory measurements, we apply a multivariate regression analysis using NH3 and ethanol as tracers to determine the relative importance of animal-related emissions (animal exhalation and waste) and feed-related emissions (feed storage and handling) for different VOC species. Feed storage and handling contribute significantly to emissions of alcohols, carbonyls, carboxylic acids and sulfur-containing species. Emissions of phenolic species and nitrogen-containing species are predominantly associated with animals and their waste.
Naroznova, Irina; Møller, Jacob; Scheutz, Charlotte
2016-04-01
This study is dedicated to characterising the chemical composition and biochemical methane potential (BMP) of individual material fractions in untreated Danish source-separated organic household waste (SSOHW). First, data on SSOHW in different countries, available in the literature, were evaluated and then, secondly, laboratory analyses for eight organic material fractions comprising Danish SSOHW were conducted. No data were found in the literature that fully covered the objectives of the present study. Based on laboratory analyses, all fractions were assigned according to their specific properties in relation to BMP, protein content, lipids, lignocellulose biofibres and easily degradable carbohydrates (carbohydrates other than lignocellulose biofibres). The three components in lignocellulose biofibres, i.e. lignin, cellulose and hemicellulose, were differentiated, and theoretical BMP (TBMP) and material degradability (BMP from laboratory incubation tests divided by TBMP) were expressed. Moreover, the degradability of lignocellulose biofibres (the share of volatile lignocellulose biofibre solids degraded in laboratory incubation tests) was calculated. Finally, BMP for average SSOHW composition in Denmark (untreated) was calculated, and the BMP contribution of the individual material fractions was then evaluated. Material fractions of the two general waste types, defined as "food waste" and "fibre-rich waste," were found to be anaerobically degradable with considerable BMP. Material degradability of material fractions such as vegetation waste, moulded fibres, animal straw, dirty paper and dirty cardboard, however, was constrained by lignin content. BMP for overall SSOHW (untreated) was 404 mL CH4 per g VS, which might increase if the relative content of material fractions, such as animal and vegetable food waste, kitchen tissue and dirty paper in the waste, becomes larger. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Vincent Zoes
2011-02-01
A greenhouse experiment was conducted to evaluate the use of growth substrates, made with duck-excreta-enriched wood shaving compost (DMC) and compost from the organic fraction of source-separated municipal solid waste (MSW), on the growth and yield of tomato (Lycopersicum esculentum Mill. cv. Campbell 1327). Substrate A consisted of a 3:2 (W/W) proportion of DMC and MSW composts. Substrates B and C were the same as A but contained 15% (W/W) brick dust and shredded plastic, respectively. Three control substrates consisted of a commercially available peat-based substrate (Pr), an in-house sphagnum peat-based substrate (Gs), and black earth mixed with sandy loam soil (BE/S) in a 1:4 (W/W) ratio. Substrates A, B and C and the controls received nitrogen (N), phosphate (P) and potassium (K) at equivalent rates of 780 mg/pot, 625 mg/pot, and 625 mg/pot, respectively, or were used without mineral fertilizers. Compared to the controls (Pr, Gs and BE/S), tomato plants grown on A, B, and C produced a greater total number and dry mass of fruits, with no significant differences between them. On average, total plant dry-matter biomass in substrates A, B, and C was 19% lower than that produced on Pr, but 28% greater than the biomass obtained for plants grown on Gs and BE/S. Plant height, stem diameter and chlorophyll concentrations indicate that substrates A, B, and C were particularly suitable for plant growth. Although the presence of excess N in the composted substrates favoured vegetative rather than reproductive growth, the continuous supply of nutrients throughout the growing cycle, as well as the high water retention capacity that allowed watering to be reduced by 50%, suggest that substrates A, B, and C were suitable growing mixes, offering environmental and agronomic advantages.
Noncausal Bayesian Vector Autoregression
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...
Statistics: a Bayesian perspective
National Research Council Canada - National Science Library
Berry, Donald A
1996-01-01
...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the scientific process, it develops ideas...
Fox, Gerardus J.A.; van den Berg, Stéphanie Martine; Veldkamp, Bernard P.; Irwing, P.; Booth, T.; Hughes, D.
2015-01-01
In educational and psychological studies, psychometric methods are involved in the measurement of constructs, and in constructing and validating measurement instruments. Assessment results are typically used to measure student proficiency levels and test characteristics. Recently, Bayesian item
Bayesian Networks An Introduction
Koski, Timo
2009-01-01
Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained and feature exercises throughout. Features include: an introduction to the Dirichlet distribution, exponential families and their applications; a detailed description of learni
Maeda, Shin-ichi
2014-01-01
Dropout is one of the key techniques for preventing learning from overfitting. It can be explained as working like a kind of modified L2 regularization. Here, we shed light on dropout from a Bayesian standpoint. The Bayesian interpretation enables us to optimize the dropout rate, which is beneficial for the learning of weight parameters and for prediction after learning. The experimental results also encourage optimization of the dropout rate.
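One practical upshot of the Bayesian reading is that the dropout rate becomes a tunable parameter and that dropout can stay active at prediction time, with multiple stochastic forward passes averaged to estimate predictive uncertainty (often called Monte Carlo dropout). A toy sketch on an untrained network; the weights and sizes are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed one-hidden-layer network; weights are random and untrained,
# purely to illustrate the mechanics.
W1 = rng.standard_normal((16, 1))
W2 = rng.standard_normal((1, 16))
p_drop = 0.2   # dropout rate; in the Bayesian reading, a tunable parameter
               # of the approximate posterior rather than a fixed heuristic

def forward(x, sample_dropout):
    h = np.maximum(0.0, W1 @ x)            # ReLU hidden activations
    if sample_dropout:
        mask = rng.uniform(size=h.shape) > p_drop
        h = h * mask / (1.0 - p_drop)      # inverted dropout scaling
    return float((W2 @ h)[0, 0])

x = np.array([[0.5]])
# Monte Carlo dropout: keep sampling masks at prediction time and average.
draws = np.array([forward(x, sample_dropout=True) for _ in range(1000)])
print(draws.mean(), draws.std())  # predictive mean and a crude spread estimate
```

Because the output layer is linear in the masked activations, the Monte Carlo mean agrees with the deterministic (dropout-off) pass in expectation, while the spread of the draws gives a rough uncertainty signal.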
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
International Nuclear Information System (INIS)
Karim Ghani, Wan Azlina Wan Ab.; Rusli, Iffah Farizan; Biak, Dayang Radiah Awang; Idris, Azni
2013-01-01
Highlights: ► The theory of planned behaviour (TPB) was applied to identify the factors influencing participation in source separation of food waste, using self-administered questionnaires. ► The findings suggest several implications for the development and implementation of a waste separation at home programme. ► The analysis indicates that attitude towards waste separation is the main predictor, which in turn could be a significant predictor of the respondent's actual food waste separation behaviour. ► To date, no similar findings have been reported elsewhere, and this finding will be beneficial to local authorities as an indicator in designing campaigns that promote waste separation programmes and reinforce positive attitudes. - Abstract: Tremendous increases in biodegradable (food) waste generation significantly impact local authorities, who are responsible for managing, treating and disposing of this waste. Separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock for downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and high participation rates among the public. Thus, a social survey (using questionnaires) analysing the public's views and the factors influencing participation in source separation of food waste in households, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff in Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intentions to participate provided that the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and
Energy Technology Data Exchange (ETDEWEB)
Karim Ghani, Wan Azlina Wan Ab., E-mail: wanaz@eng.upm.edu.my [Department of Chemical and Environmental Engineering, Faculty of Engineering, University Putra Malaysia, 43400 Serdang, Selangor Darul Ehsan (Malaysia); Rusli, Iffah Farizan, E-mail: iffahrusli@yahoo.com [Department of Chemical and Environmental Engineering, Faculty of Engineering, University Putra Malaysia, 43400 Serdang, Selangor Darul Ehsan (Malaysia); Biak, Dayang Radiah Awang, E-mail: dayang@eng.upm.edu.my [Department of Chemical and Environmental Engineering, Faculty of Engineering, University Putra Malaysia, 43400 Serdang, Selangor Darul Ehsan (Malaysia); Idris, Azni, E-mail: azni@eng.upm.edu.my [Department of Chemical and Environmental Engineering, Faculty of Engineering, University Putra Malaysia, 43400 Serdang, Selangor Darul Ehsan (Malaysia)
2013-05-15
Karim Ghani, Wan Azlina Wan Ab; Rusli, Iffah Farizan; Biak, Dayang Radiah Awang; Idris, Azni
2013-05-01
Tremendous increases in biodegradable (food) waste generation significantly impact local authorities, who are responsible for managing, treating and disposing of this waste. Separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock for downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and high participation rates among the public. Thus, a social survey (using questionnaires) analysing the public's views and the factors influencing participation in source separation of food waste in households, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff in Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intentions to participate provided that the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and collection times also encourage the public's involvement and, consequently, the participation rate. The findings from this study may provide a useful indicator to the waste management authorities in Malaysia in identifying mechanisms for the future development and implementation of food waste source separation activities in household programmes, and for communication campaigns which advocate the use of these programmes. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Chandra Source Catalog 2.0: Estimating Source Fluxes
Primini, Francis Anthony; Allen, Christopher E.; Miller, Joseph; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The Second Chandra Source Catalog (CSC2.0) will provide information on approximately 316,000 point or compact extended X-ray sources, derived from over 10,000 ACIS and HRC-I imaging observations available in the public archive at the end of 2014. As in the previous catalog release (CSC1.1), fluxes for these sources will be determined separately from source detection, using a Bayesian formalism that accounts for background, spatial resolution effects, and contamination from nearby sources. However, the CSC2.0 procedure differs from that used in CSC1.1 in three important aspects. First, for sources in crowded regions in which photometric apertures overlap, fluxes are determined jointly, using an extension of the CSC1.1 algorithm, as discussed in Primini & Kashyap (2014ApJ...796...24P). Second, an MCMC procedure is used to estimate marginalized posterior probability distributions for source fluxes. Finally, for sources observed in multiple observations, a Bayesian Blocks algorithm (Scargle et al. 2013ApJ...764..167S) is used to group observations into blocks of constant source flux. In this poster we present details of the CSC2.0 photometry algorithms and illustrate their performance on actual CSC2.0 datasets. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
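A minimal sketch of the "measures" variant of the Bayesian Blocks dynamic program (after Scargle et al. 2013), of the kind that could group flux measurements into blocks of constant level. The prior penalty `ncp_prior` and the synthetic data are illustrative; this is not the CSC2.0 production code.

```python
import numpy as np

def bayesian_blocks_measures(x, sigma, ncp_prior=6.0):
    """Partition measurements x (with errors sigma) into blocks of constant
    level, maximizing total block fitness minus a per-block prior penalty.
    Returns the start index of each block."""
    n = len(x)
    w = 1.0 / np.asarray(sigma) ** 2              # inverse-variance weights
    wx = w * np.asarray(x)
    best = np.zeros(n)
    last = np.zeros(n, dtype=int)
    for r in range(n):
        # For each candidate block [k..r]: fitness = (sum w*x)^2 / (2 sum w).
        wk = np.cumsum(w[:r + 1][::-1])[::-1]     # sum of w over [k..r]
        bk = np.cumsum(wx[:r + 1][::-1])[::-1]    # sum of w*x over [k..r]
        fit = bk ** 2 / (2 * wk) - ncp_prior
        total = fit + np.concatenate(([0.0], best[:r]))
        last[r] = int(np.argmax(total))
        best[r] = total[last[r]]
    # Backtrack to recover the block start indices.
    starts, r = [], n
    while r > 0:
        starts.append(int(last[r - 1]))
        r = last[r - 1]
    return starts[::-1]

rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), 3.0 * np.ones(50)]) + 0.1 * rng.standard_normal(100)
starts = bayesian_blocks_measures(x, sigma=np.full(100, 0.1))
print(starts)  # expect a block boundary near index 50
```

The dynamic program is O(n²) and finds the globally optimal partition for the chosen penalty; larger `ncp_prior` values yield fewer blocks.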
Sparse linear models: Variational approximate inference and Bayesian experimental design
International Nuclear Information System (INIS)
Seeger, Matthias W
2009-01-01
A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
Sparse linear models: Variational approximate inference and Bayesian experimental design
Energy Technology Data Exchange (ETDEWEB)
Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)
2009-12-01
Bayesian networks with examples in R
Scutari, Marco
2014-01-01
Introduction. The Discrete Case: Multinomial Bayesian Networks. The Continuous Case: Gaussian Bayesian Networks. More Complex Cases. Theory and Algorithms for Bayesian Networks. Real-World Applications of Bayesian Networks. Appendices. Bibliography.
Bayesian methods in reliability
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
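The pipeline leak-rate example mentioned above is, in its simplest form, a conjugate Gamma-Poisson update. A hedged sketch with hypothetical counts and prior hyperparameters (none of these numbers come from the proceedings):

```python
from scipy import stats

# Conjugate Gamma-Poisson update for a leak-rate estimate. Counts per year
# are assumed Poisson(rate); the Gamma prior is conjugate, so the posterior
# is Gamma(a0 + total leaks, rate = b0 + years observed).
a0, b0 = 2.0, 1.0           # Gamma(a0, rate=b0) prior on leaks per year
leaks = [0, 1, 0, 2, 0]     # hypothetical leak counts over 5 years
a_post = a0 + sum(leaks)
b_post = b0 + len(leaks)
post_mean = a_post / b_post                       # posterior mean rate
lo, hi = stats.gamma.ppf([0.05, 0.95], a_post, scale=1.0 / b_post)
```

The 90% credible interval (lo, hi) is the kind of quantity that carries over into the reliability growth models discussed elsewhere in the proceedings.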
Zhang, Dongliang; Huang, Guangqing; Yin, Xiaoling; Gong, Qinghua
2015-01-01
Understanding the factors that affect residents' waste separation behaviors helps in constructing effective environmental campaigns for a community. Using the theory of planned behavior (TPB), this study examines factors associated with waste separation behaviors by analyzing responses to questionnaires distributed in Guangzhou, China. Data drawn from 208 of the 1,000 field questionnaires were used to assess socio-demographic factors and the TPB constructs (i.e., attitudes, subjective norms, perceived behavioral control, intentions, and situational factors). The questionnaire data revealed that attitudes, subjective norms, perceived behavioral control, intentions, and situational factors significantly predicted household waste behaviors in Guangzhou, China. Through a structural equation modeling analysis, we concluded that campaigns targeting moral obligations may be particularly effective for increasing the participation rate in waste separation behaviors. PMID:26274969
DEFF Research Database (Denmark)
Fernandez Grande, Efren; Jacobsen, Finn
2010-01-01
to the source. Thus the radiated free-field component is estimated simultaneously with solving the inverse problem of reconstructing the sound field near the source. The method is particularly suited to cases in which the overall contribution of reflected sound in the measurement plane is significant....
CSIR Research Space (South Africa)
Rosman, Benjamin
2016-02-01
Full Text Available Keywords: Policy Reuse · Reinforcement Learning · Online Learning · Online Bandits · Transfer Learning · Bayesian Optimisation · Bayesian Decision Theory. 1 Introduction: As robots and software agents are becoming more ubiquitous in many applications.... The agent has access to a library of policies (π1, π2 and π3), and has previously experienced a set of task instances (τ1, τ2, τ3, τ4), as well as samples of the utilities of the library policies on these instances (the black dots indicate the means...
Loveless, Sian E.; Bloomfield, John P.; Ward, Robert S.; Hart, Alwyn J.; Davey, Ian R.; Lewis, Melinda A.
2018-03-01
Shale gas is considered by many to have the potential to provide the UK with greater energy security, economic growth and jobs. However, development of a shale gas industry is highly contentious due to environmental concerns including the risk of groundwater pollution. Evidence suggests that the vertical separation between exploited shale units and aquifers is an important factor in the risk to groundwater from shale gas exploitation. A methodology is presented to assess the vertical separation between different pairs of aquifers and shales that are present across England and Wales. The application of the method is then demonstrated for two of these pairs—the Cretaceous Chalk Group aquifer and the Upper Jurassic Kimmeridge Clay Formation, and the Triassic sandstone aquifer and the Carboniferous Bowland Shale Formation. Challenges in defining what might be considered criteria for 'safe separation' between a shale gas formation and an overlying aquifer are discussed, in particular with respect to uncertainties in geological properties, aquifer extents and determination of socially acceptable risk levels. Modelled vertical separations suggest that the risk of aquifer contamination from shale exploration will vary greatly between shale-aquifer pairs and between regions and this will need to be considered carefully as part of the risk assessment and management for any shale gas development.
Soenens, Bart; Vansteenkiste, Maarten; Duriez, Bart; Goossens, Luc
2006-01-01
This study investigated the role of two dimensions of parental separation anxiety--Anxiety about Adolescent Distancing (AAD) and Comfort with Secure Base Role (CSBR)--and parental maladaptive perfectionism in the prediction of psychologically controlling parenting. In a sample of middle adolescents and their parents (N=677), it was found that…
Bayesian network modelling of upper gastrointestinal bleeding
Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri
2013-09-01
Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes Network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
Fedosseev, V; Marsh, B A; CERN. Geneva. AB Department
2006-01-01
At the ISOLDE on-line isotope separation facility, the resonance ionization laser ion source (RILIS) can be used to ionize reaction products as they effuse from the target. The RILIS process of laser step-wise resonance ionization of atoms in a hot metal cavity provides a highly element selective stage in the preparation of the radioactive ion beam. As a result, the ISOLDE mass separators can provide beams of a chosen isotope with greatly reduced isobaric contamination. The number of elements available at RILIS has been extended to 26, with the addition of a new three-step ionization scheme for gold. The optimal ionization scheme was determined during an extensive study of the atomic energy levels and auto-ionizing states of gold, carried out by means of in-source resonance ionization spectroscopy. Details of the ionization scheme and a summary of the spectroscopy study are presented.
Bayesian methods for hackers probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
International Nuclear Information System (INIS)
Mazumdar, A.K.; Wagner, H.; Walcher, W.; Lund, T.
1976-01-01
A helium jet system was connected to a hollow cathode ion source. Using fission products the efficiencies of the different steps were measured by β-, X-ray and γ-counting while the mass spectrum and the focussing of the extracted ion beam were observed with a small deflecting magnet. Mean transport efficiencies of 50% through the 12 m capillary were obtained and ion source efficiencies in the percent range for several elements. (Auth.)
Bayesian logistic regression analysis
Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.
2012-01-01
In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an
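The posterior over a logistic regression coefficient rarely has closed form, so a sampling approach is the usual workhorse. A generic random-walk Metropolis sketch of that posterior (this is not the Jacobian-transformation derivation the abstract refers to; the data, prior variance, and tuning constants are all illustrative):

```python
import numpy as np

# Random-walk Metropolis for the posterior of a single logistic regression
# coefficient beta, with y_i ~ Bernoulli(sigmoid(beta * x_i)) and an
# assumed N(0, 10^2) prior on beta.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
p_true = 1.0 / (1.0 + np.exp(-1.5 * x))        # true coefficient 1.5
y = (rng.random(200) < p_true).astype(float)

def log_post(beta):
    logits = beta * x
    loglik = np.sum(y * logits - np.log1p(np.exp(logits)))
    logprior = -0.5 * beta ** 2 / 100.0        # N(0, 10^2) prior
    return loglik + logprior

beta, samples = 0.0, []
lp = log_post(beta)
for _ in range(5000):
    prop = beta + 0.3 * rng.standard_normal()  # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept step
        beta, lp = prop, lp_prop
    samples.append(beta)
post_mean = float(np.mean(samples[1000:]))     # discard burn-in
```

The posterior mean should land near the generating coefficient; mapping samples of beta through the sigmoid gives the posterior over the event probability that the abstract discusses.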
Korattikara, A.; Rathod, V.; Murphy, K.; Welling, M.; Cortes, C.; Lawrence, N.D.; Lee, D.D.; Sugiyama, M.; Garnett, R.
2015-01-01
We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple
Bayesian Geostatistical Design
DEFF Research Database (Denmark)
Diggle, Peter; Lophaven, Søren Nymand
2006-01-01
locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...
Bayesian statistical inference
Directory of Open Access Journals (Sweden)
Bruno De Finetti
2017-04-01
Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian statistical inference is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.
DEFF Research Database (Denmark)
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...
Directory of Open Access Journals (Sweden)
Chengjie Li
2016-01-01
Full Text Available In a passive radar system, obtaining the mixed weak object signal against the super-power signal (jamming) is still a challenging task. In this paper, a novel framework based on a passive radar system is designed for weak object signal separation. Firstly, we propose an interference cancellation algorithm (IC-algorithm) to extract the mixed weak object signals from the strong jamming. Then, an improved FastICA algorithm with K-means clustering is designed to separate each weak signal from the mixed weak object signals. At last, we discuss the performance of the proposed method and verify it with several simulations. The experimental results demonstrate the effectiveness of the proposed method.
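The second stage above pairs an ICA unmixing step with K-means. A hedged sketch of that pairing on synthetic stand-in signals (not radar data; the mixing matrix, waveforms, and the clustering of component samples are illustrative, and the interference-cancellation first stage is omitted):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Unmix two linearly mixed sources with FastICA, then group the recovered
# component samples with K-means, mimicking the two-stage idea above.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)               # stand-in "object" signal
s2 = np.sign(np.sin(2 * np.pi * 11 * t))     # second source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # assumed mixing matrix
X = S @ A.T                                  # two observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                 # recovered components

# Illustrative clustering of the recovered component samples.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(S_est)
```

Up to sign and permutation ambiguity, each recovered component should correlate strongly with one of the true sources.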
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior distribution satisfies the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model and represents the combined impact of all uncertain factors on its spatial structure. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the mechanism of uncertainty propagation in geological modeling.
International Nuclear Information System (INIS)
Zou, Yonghong; Wang, Lixia; Christensen, Erik R.
2015-01-01
This work intended to explain the challenges of the fingerprint-based source apportionment method for polycyclic aromatic hydrocarbons (PAH) in the aquatic environment, and to illustrate a practical and robust solution. The PAH data detected in the sediment cores from the Illinois River provide the basis of this study. Principal component analysis (PCA) separates PAH compounds into two groups reflecting their possible airborne transport patterns, but it is not able to suggest specific sources. Not all positive matrix factorization (PMF) determined sources are distinguishable due to the variability of source fingerprints. However, they constitute useful suggestions for inputs for a Bayesian chemical mass balance (CMB) analysis. The Bayesian CMB analysis takes into account the measurement errors as well as the variations of source fingerprints, and provides a credible source apportionment. Major PAH sources for Illinois River sediments are traffic (35%), coke oven (24%), coal combustion (18%), and wood combustion (14%). - Highlights: • Fingerprint variability poses challenges in PAH source apportionment analysis. • PCA can be used to group compounds or cluster measurements. • PMF requires validation of results but is useful for source suggestion. • Bayesian CMB provides a practical and credible solution. - A Bayesian CMB model combined with PMF is a practical and credible fingerprint-based PAH source apportionment method
Bayesian optimization for materials science
Packwood, Daniel
2017-01-01
This book provides a short and concise introduction to Bayesian optimization specifically for experimental and computational materials scientists. After explaining the basic idea behind Bayesian optimization and some applications to materials science in Chapter 1, the mathematical theory of Bayesian optimization is outlined in Chapter 2. Finally, Chapter 3 discusses an application of Bayesian optimization to a complicated structure optimization problem in computational surface science. Bayesian optimization is a promising global optimization technique that originates in the field of machine learning and is starting to gain attention in materials science. For the purpose of materials design, Bayesian optimization can be used to predict new materials with novel properties without extensive screening of candidate materials. For the purpose of computational materials science, Bayesian optimization can be incorporated into first-principles calculations to perform efficient, global structure optimizations. While re...
Directory of Open Access Journals (Sweden)
Ali Mohammad-Djafari
2015-06-01
Full Text Available The main content of this review article is first to review the main inference tools using Bayes' rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper focuses on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent component analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, Bayesian inference for general inverse problems. Some original materials concerning approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods are also presented. VBA is used to propose an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP) estimation, as well as the different expectation-maximization (EM) algorithms, as particular cases.
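The KL divergence central to the review has a closed form for Gaussians that can be checked directly against its defining integral. A small numerical sketch (the two densities are arbitrary illustrative choices):

```python
import numpy as np

# KL(p || q) for 1-D Gaussians p = N(mu1, s1^2), q = N(mu2, s2^2):
# closed form log(s2/s1) + (s1^2 + (mu1-mu2)^2) / (2 s2^2) - 1/2,
# checked against a direct numerical integral of p * log(p/q).
mu1, s1 = 0.0, 1.0
mu2, s2 = 1.0, 2.0

kl_closed = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

x = np.linspace(-10.0, 10.0, 200001)
p = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
q = np.exp(-0.5 * ((x - mu2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
kl_numeric = float(np.sum(p * np.log(p / q)) * (x[1] - x[0]))
```

The same integral with p and q swapped gives a different value, which is the asymmetry that motivates the review's distinction between KL-based and Fisher-geometry-based tools.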
Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka
2017-09-01
Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion for dissociating neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Cao, H; Besio, W; Jones, S; Medvedev, A
2009-01-01
Tripolar electrodes have been shown to have less mutual information and higher spatial resolution than disc electrodes. In this work, a four-layer anisotropic concentric spherical head computer model was programmed, then four configurations of time-varying dipole signals were used to generate the scalp surface signals that would be obtained with tripolar and disc electrodes, and four important EEG artifacts were tested: eye blinking, cheek movements, jaw movements, and talking. Finally, a fast fixed-point algorithm was used for signal independent component analysis (ICA). The results show that signals from tripolar electrodes generated better ICA separation results than from disc electrodes for EEG signals with these four types of artifacts.
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel N
2012-01-01
Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not capture this intuitive notion of communities. We formulate a nonparametric Bayesian model...... for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities...... consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled....
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Vol. 285, No. 1 (2014), pp. 100-111. ISSN 0020-0255. R&D Projects: GA ČR GA13-13502S. Institutional support: RVO:67985556. Keywords: Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 4.038, year: 2014. http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
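The record above lists "forgetting" among its keywords. A hedged sketch of the general idea behind recursive Bayesian estimation with exponential forgetting, under a simple Gaussian model of our own choosing (the drift model and the forgetting factor are illustrative, not Kárný's construction):

```python
import numpy as np

# Recursive Bayesian estimation of a slowly drifting mean. Before each
# conjugate Gaussian update, the posterior precision is discounted by a
# factor lam < 1 ("forgetting"), so stale observations lose influence and
# the estimator can track the drift.
rng = np.random.default_rng(3)
lam, obs_var = 0.95, 0.25
mean, prec = 0.0, 1e-2          # prior N(mean, 1/prec)
truth = 0.0
estimates = []
for t in range(400):
    truth += 0.02               # slow drift the estimator must track
    y = truth + np.sqrt(obs_var) * rng.standard_normal()
    prec *= lam                 # forgetting: inflate posterior variance
    post_prec = prec + 1.0 / obs_var
    mean = (prec * mean + y / obs_var) / post_prec
    prec = post_prec
    estimates.append(mean)
final_error = abs(estimates[-1] - truth)
```

Without the `prec *= lam` line the precision grows without bound, the posterior freezes, and the estimate lags ever further behind the drifting truth.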
DEFF Research Database (Denmark)
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.
2015-01-01
A large literature suggests that many individuals do not apply Bayes' Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental economics, with careful controls for the confounding effects of risk aversion. Our results show that risk aversion significantly alters inferences on deviations from Bayes' Rule....
Energy Technology Data Exchange (ETDEWEB)
Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-15
These are a set of slides about Bayesian hypothesis testing in which many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ^{2} which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H_{0}.
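A minimal single-hypothesis sketch of the kind of Bayes factor the slides discuss, via the Savage-Dickey density ratio for H_0: μ = 0; here `tau2` plays the role of the slides' τ² as the prior variance, but the multiple-testing setting and the minimization over τ² are not reproduced:

```python
import numpy as np
from scipy import stats

# Savage-Dickey Bayes factor for H0: mu = 0 vs H1: mu free, with
# y_i ~ N(mu, 1) and prior mu ~ N(0, tau2). BF01 is the posterior density
# of mu at 0 divided by the prior density at 0. Data are synthetic.
rng = np.random.default_rng(4)
n, tau2 = 30, 1.0
y = rng.normal(0.0, 1.0, n)      # generated under H0 here

post_var = 1.0 / (1.0 / tau2 + n)
post_mean = post_var * y.sum()
bf01 = (stats.norm.pdf(0.0, post_mean, np.sqrt(post_var))
        / stats.norm.pdf(0.0, 0.0, np.sqrt(tau2)))

# Closed form of the same ratio, as a consistency check.
bf_closed = np.sqrt(1.0 + n * tau2) * np.exp(-0.5 * post_mean**2 / post_var)
```

Minimizing `bf01` over `tau2` yields the least-favorable prior scale for H_0, which is the role τ² plays in the slides' conclusions.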
Introduction to Bayesian statistics
Koch, Karl-Rudolf
2007-01-01
This book presents Bayes' theorem, the estimation of unknown parameters, the determination of confidence regions and the derivation of tests of hypotheses for the unknown parameters. It does so in a simple manner that is easy to comprehend. The book compares traditional and Bayesian methods with the rules of probability presented in a logical way allowing an intuitive understanding of random variables and their probability distributions to be formed.
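The book's programme of parameter estimation plus confidence (credible) regions is easiest to see in the conjugate Beta-Binomial case. A short sketch with illustrative counts, not an example from the book:

```python
from scipy import stats

# Conjugate Beta-Binomial update: a uniform Beta(1, 1) prior on a success
# probability, updated with observed counts, plus a 95% credible interval
# from the posterior quantiles.
a0, b0 = 1.0, 1.0                 # uniform prior
successes, trials = 7, 10         # hypothetical data
a_post, b_post = a0 + successes, b0 + trials - successes
post_mean = a_post / (a_post + b_post)
lo, hi = stats.beta.ppf([0.025, 0.975], a_post, b_post)
```

The interval (lo, hi) is the Bayesian analogue of the confidence regions the book derives, and a one-sided posterior tail probability gives the corresponding hypothesis test.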
Bayesian ARTMAP for regression.
Sasu, L M; Andonie, R
2013-10-01
Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA has been used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one approximator in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online-trained BAR with several neural models on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, for both theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bayesian theory and applications
Dellaportas, Petros; Polson, Nicholas G; Stephens, David A
2013-01-01
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...
Bayesian inference for psychology. Part II: Example applications with JASP.
Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D
2018-02-01
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
Directory of Open Access Journals (Sweden)
Liangliang Wei
2018-02-01
Full Text Available To effectively de-noise the Gaussian white noise and periodic narrow-band interference in the background noise of partial discharge ultra-high frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method can effectively remove the noise interference, and the distortion of the de-noised PD signal is smaller. Firstly, the PD UHF signal is time-frequency analyzed by S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. At last, the source PD signal is estimated and recovered by the l1-norm minimization method. The proposed de-noising method was applied to simulation tests and field-test detected signals, and the de-noising performance of the different methods was compared. The simulation and field test results demonstrate the effectiveness and correctness of the proposed method.
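The single-channel-to-multi-channel step above can be sketched with a Hankel (trajectory) embedding followed by a truncated SVD, as in basic singular spectrum analysis; this is only the SVD stage, with synthetic signals, not the paper's full S-transform + joint diagonalization + l1 pipeline:

```python
import numpy as np

# Embed one noisy record into a Hankel-style matrix of lagged windows,
# truncate its SVD, and average anti-diagonals back into a 1-D signal.
rng = np.random.default_rng(5)
n, win = 512, 64
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 25.0)           # stand-in source signal
noisy = clean + 0.8 * rng.standard_normal(n)

H = np.lib.stride_tricks.sliding_window_view(noisy, win)   # (n-win+1, win)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = 2                                           # one sinusoid -> rank 2
H_k = (U[:, :k] * s[:k]) @ Vt[:k]

# Average anti-diagonals of the rank-k matrix to recover a 1-D signal.
denoised = np.zeros(n)
counts = np.zeros(n)
for i in range(H.shape[0]):
    denoised[i:i + win] += H_k[i]
    counts[i:i + win] += 1
denoised /= counts

err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
```

The rows of the rank-k matrix are the "multi-channel" view that a subsequent separation algorithm (JADE in the paper) would operate on.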
Barminova, H Y; Saratovskyh, M S
2016-02-01
The experiment automation system is to be developed for an experimental facility for materials science at ITEP, based on a Bernas ion source. The program CAMFT is to be incorporated into the experiment automation software. CAMFT is developed to simulate intense charged-particle bunch motion in external magnetic fields of arbitrary geometry by means of accurate solution of the particle motion equations. The program allows consideration of bunch intensities up to 10^10 particles per bunch. Preliminary calculations are performed on the ITEP supercomputer. The results of the simulation of beam pre-acceleration and the following turn in the magnetic field are presented for different initial conditions.
International Nuclear Information System (INIS)
Blicharska, Magdalena; Bartoś, Barbara; Krajewski, Seweryn; Bilewicz, Aleksander
2014-01-01
Brachytherapy is a common method for treating various tumors, and currently 106Ru and 125I applicators are the most frequently used. Considering that 106Ru is a β emitter with a maximum energy of 3.54 MeV, it is best indicated in the treatment of small melanomas, with up to 20 mm tissue range. 106Ru is commercially obtained from neutron-irradiated high-enrichment 235U targets in the process of 99Mo production. At present, there are only a handful of ageing reactors worldwide capable of producing 99Mo, therefore alternative strategies for production of this key medical isotope are explored. In our work, we propose to use liquid high-level radioactive waste as a source of high-activity 106Ru. Simple calculations indicate that 1 dm³ of HLLW solution after 4 years of cooling contains about 500 GBq of 106Ru. This amount of activity is enough for production of a few thousand brachytherapy sources. The present communication reports results of our process development studies on the recovery of ruthenium radioisotopes from a simulated solution of high-level radioactive waste using an oxidation-extraction method
Zou, Yonghong; Wang, Lixia; Christensen, Erik R
2015-10-01
This work intended to explain the challenges of fingerprint-based source apportionment for polycyclic aromatic hydrocarbons (PAH) in the aquatic environment, and to illustrate a practical and robust solution. PAH data from sediment cores of the Illinois River provide the basis of this study. Principal component analysis (PCA) separates PAH compounds into two groups reflecting their possible airborne transport patterns, but it is not able to suggest specific sources. Not all positive matrix factorization (PMF) determined sources are distinguishable, due to the variability of source fingerprints. However, they constitute useful suggestions for inputs to a Bayesian chemical mass balance (CMB) analysis. The Bayesian CMB analysis takes into account the measurement errors as well as the variations of source fingerprints, and provides a credible source apportionment. Major PAH sources for Illinois River sediments are traffic (35%), coke oven (24%), coal combustion (18%), and wood combustion (14%). Copyright © 2015. Published by Elsevier Ltd.
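The core of a chemical mass balance is modeling measured concentrations as a mix of source fingerprint profiles. A minimal sketch of that idea follows; the fingerprints and contributions are invented for illustration, and plain least squares stands in for the Bayesian CMB, which additionally models fingerprint and measurement uncertainty.

```python
import numpy as np

# Hypothetical fingerprint matrix: rows are PAH compounds, columns are
# sources (e.g., traffic, coke oven). Values are made up for demonstration.
fingerprints = np.array([
    [0.6, 0.1],
    [0.3, 0.2],
    [0.1, 0.7],
])
true_contrib = np.array([2.0, 1.0])       # unknown source strengths
measured = fingerprints @ true_contrib    # idealized, noise-free sample

# Ordinary least squares recovers the contributions in this simple case.
contrib, *_ = np.linalg.lstsq(fingerprints, measured, rcond=None)
shares = contrib / contrib.sum()          # fractional source apportionment
```

A Bayesian CMB replaces the point solve with a posterior over `contrib`, so fingerprint variability propagates into credible intervals on the apportionment.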
International Nuclear Information System (INIS)
Kawase, Y.; Okano, K.; Aoki, K.
1987-01-01
By using a high-temperature thermal ion source coupled to a He-jet system, neutron-rich isotopes of rare-earth elements such as cerium, praseodymium, neodymium and promethium, produced by the thermal-neutron fission of ²³⁵U, were ionized and successfully separated. The temperature dependence of the ionization efficiency has been measured and found to be explained qualitatively by the vapour pressure of the relevant elements. This characteristic temperature dependence of the ionization efficiency has been utilized for Z-identification of several isobars of rare-earth elements. The heaviest isotopes of neodymium and promethium, ¹⁵⁵Nd and ¹⁵⁶Pm, have recently been identified.
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological simulations well. This study proposes a Bayesian modularization approach to the uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and by traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian method: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
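The Metropolis-Hastings algorithm mentioned above can be sketched in a few lines. This is a generic random-walk MH sampler on a toy problem (the WASMOD likelihoods are not reproduced): it samples the mean of a normal model with known variance under a flat prior, so the posterior mean should approach the sample mean of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=200)  # synthetic "observations"

def log_post(theta):
    # Flat prior: the log-posterior is the Gaussian log-likelihood
    # up to an additive constant.
    return -0.5 * np.sum((data - theta) ** 2)

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.5)           # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

posterior_mean = np.mean(chain[1000:])             # discard burn-in
```

With a flat prior the posterior concentrates on the data mean; real applications replace `log_post` with the model-specific likelihood (e.g., AR(1)-plus-Normal) times the prior.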
Bayesian analysis in plant pathology.
Mila, A L; Carriquiry, A L
2004-09-01
Bayesian methods are currently much discussed and applied in several disciplines, from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments, analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.
International Nuclear Information System (INIS)
Braendli, Rahel C.; Bucheli, Thomas D.; Kupper, Thomas; Mayer, Jochen; Stadelmann, Franz X.; Tarradellas, Joseph
2007-01-01
Composting and digestion are important waste management strategies. However, the resulting products can contain significant amounts of organic pollutants such as polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs). In this study we followed the concentration changes of PCBs and PAHs during composting and digestion at field scale for the first time. Concentrations of low-chlorinated PCBs increased during composting (by about 30%), whereas a slight decrease was observed for the higher chlorinated congeners (about 10%). Enantiomeric fractions of atropisomeric PCBs were essentially racemic and stable over time. Levels of low-molecular-weight PAHs declined during composting (50-90% reduction), whereas high-molecular-weight compounds were stable. PCB and PAH concentrations did not seem to vary during digestion. Source apportionment by applying characteristic PAH ratios and molecular markers to the input material did not give any clear results. Some of these parameters changed considerably during composting. Hence, their diagnostic potential for finished compost must be questioned. - During field-scale composting, low-molecular-weight PCBs and PAHs increased and decreased, respectively, whereas high-molecular-weight compounds remained stable
Energy Technology Data Exchange (ETDEWEB)
Bouriant, M [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires
1967-12-01
The production of high-purity stable or radioactive isotopes (≥ 99.99 per cent) by electromagnetic separation requires equipment with a high resolving power. Moreover, in order to collect rare or short half-life isotopes, the efficiency of the ion source must be high (η > 5 to 10 per cent). With this in view, the source built operates at high temperatures (2500-3000 °C) and makes use of ionisation by electron bombardment or of thermo-ionisation. The first part of this work summarizes the essential characteristics of isotope-separator ion sources; a schematic diagram of the source built is then given together with its characteristics. The second part gives the values of the resolving power and of the efficiency of the Grenoble isotope separator fitted with such a source. The resolving power, measured at 10 per cent of the peak height, is of the order of 200. At the first magnetic stage the efficiency is between 1 and 26 per cent for a range of elements evaporating between 200 and 3000 °C. Thus equipped, the separator has, for example, given at the first stage 10 mg of ¹⁸⁰Hf at (99.69 ± 0.1) per cent, corresponding to an enrichment coefficient of 580; recently 2 mg of ¹⁵⁰Nd at (99.996 ± 0.002) per cent, corresponding to an enrichment coefficient of 4.2 × 10⁵, was obtained at the second stage. (author)
Le Grandois, Julie; Marchioni, Eric; Zhao, Minjie; Giuffrida, Francesca; Ennahar, Saïd; Bindler, Françoise
2009-07-22
This study is a contribution to the exploration of natural phospholipid (PL) sources rich in long-chain polyunsaturated fatty acids (LC-PUFAs) with nutritional interest. Phosphatidylcholines (PCs) were purified from total lipid extracts of different food matrices, and their molecular species were separated and identified by liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS(2)). Fragmentation of lithiated adducts allowed for the identification of fatty acids linked to the glycerol backbone. Soy PC was particularly rich in species containing essential fatty acids, such as (18:2-18:2)PC (34.0%), (16:0-18:2)PC (20.8%), and (18:1-18:2)PC (16.3%). PC from animal sources (ox liver and egg yolk) contained major molecular species, such as (16:0-18:2)PC, (16:0-18:1)PC, (18:0-18:2)PC, or (18:0-18:1)PC. Finally, marine source (krill oil), which was particularly rich in (16:0-20:5)PC and (16:0-22:6)PC, appeared to be an interesting potential source for food supplementation with LC-PUFA-PLs, particularly eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA).
Cortical hierarchies perform Bayesian causal inference in multisensory perception.
Directory of Open Access Journals (Sweden)
Tim Rohe
2015-02-01
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the "causal inference problem." Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, is the uncertainty about the causal structure of the world taken into account and sensory signals combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.
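The causal-inference step described above reduces to a posterior over two hypotheses: one common source versus two independent sources. A toy version of that computation follows, in the spirit of the standard Bayesian Causal Inference model for cue combination; the noise levels, spatial prior width, prior probability of a common cause, and cue values are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

def common_source_posterior(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    # Likelihood of the auditory/visual cue pair under a common cause:
    # both cues reflect one latent location drawn from a zero-mean
    # Gaussian spatial prior with width sigma_p (marginalized out).
    var_c = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
             + sigma_v**2 * sigma_p**2)
    like_c = (np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                             + x_a**2 * sigma_v**2
                             + x_v**2 * sigma_a**2) / var_c)
              / (2 * np.pi * np.sqrt(var_c)))

    # Under independent causes, each cue has its own latent location.
    def like_one(x, s):
        v = s**2 + sigma_p**2
        return np.exp(-0.5 * x**2 / v) / np.sqrt(2 * np.pi * v)

    like_i = like_one(x_a, sigma_a) * like_one(x_v, sigma_v)
    # Bayes' rule over the two causal hypotheses.
    return p_common * like_c / (p_common * like_c + (1 - p_common) * like_i)

# Nearby cues favor a common source (fusion); distant cues favor segregation.
p_near = common_source_posterior(0.5, 0.7, 1.0, 1.0, 10.0, 0.5)
p_far = common_source_posterior(-6.0, 6.0, 1.0, 1.0, 10.0, 0.5)
```

The full model then mixes the fused and segregated location estimates, weighted by this posterior, which is what the top of the cortical hierarchy is reported to compute.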
Discriminative Bayesian Dictionary Learning for Classification.
Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal
2016-12-01
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and object and scene-category classification, using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
Bayesian nonparametric data analysis
Müller, Peter; Jara, Alejandro; Hanson, Tim
2015-01-01
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.
Congdon, Peter
2014-01-01
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU
International Nuclear Information System (INIS)
Duffy, L.P.
1991-01-01
This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State Health Agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability-techniques (like fault trees...... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability....
Searching Algorithm Using Bayesian Updates
Caudle, Kyle
2010-01-01
In late October 1967, the USS Scorpion was lost at sea, somewhere between the Azores and Norfolk Virginia. Dr. Craven of the U.S. Navy's Special Projects Division is credited with using Bayesian Search Theory to locate the submarine. Bayesian Search Theory is a straightforward and interesting application of Bayes' theorem which involves searching…
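The Bayes' theorem update underlying Bayesian search theory is simple enough to sketch directly: a prior over grid cells is revised after each unsuccessful search, given a detection probability for the searched cell. The grid, prior, and detection probability below are invented for illustration, not the Scorpion search values.

```python
import numpy as np

prior = np.array([0.5, 0.3, 0.2])   # belief the wreck is in cell 0, 1, 2
p_d = 0.8                           # chance of detection if the right cell is searched

def search_fails(belief, cell, p_d):
    # Bayes' rule after an unsuccessful search: the probability mass on
    # the searched cell shrinks by the factor (1 - p_d), then the whole
    # distribution is renormalized, shifting belief onto unsearched cells.
    post = belief.copy()
    post[cell] *= (1.0 - p_d)
    return post / post.sum()

posterior = search_fails(prior, 0, p_d)   # searched the most likely cell, found nothing
```

Iterating this update, always searching the currently most probable cell, is the essence of the strategy credited with locating the submarine.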
Bayesian Data Analysis (lecture 2)
CERN. Geneva
2018-01-01
framework but we will also go into more detail and discuss for example the role of the prior. The second part of the lecture will cover further examples and applications that heavily rely on the bayesian approach, as well as some computational tools needed to perform a bayesian analysis.
Bayesian Data Analysis (lecture 1)
CERN. Geneva
2018-01-01
framework but we will also go into more detail and discuss for example the role of the prior. The second part of the lecture will cover further examples and applications that heavily rely on the bayesian approach, as well as some computational tools needed to perform a bayesian analysis.
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverses subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods, since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
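The motivation above (a singular sample covariance when n < d) is easy to demonstrate. This sketch is not the BCLASSO sampler; it only shows, with a simple convex-combination shrinkage toward the identity, why shrinkage restores positive definiteness and an invertible precision matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 20                        # fewer samples than dimensions
X = rng.normal(size=(n, d))
S = np.cov(X, rowvar=False)          # rank-deficient sample covariance (rank < d)

alpha = 0.2
# Shrinkage toward the identity: every eigenvalue of S_shrunk is at least
# alpha, so the matrix is positive definite even though S is singular.
S_shrunk = (1 - alpha) * S + alpha * np.eye(d)

eigvals = np.linalg.eigvalsh(S_shrunk)
precision = np.linalg.inv(S_shrunk)  # exists; inv(S) would fail
```

BCLASSO achieves a similar stabilization through its prior on the precision matrix, while additionally inducing sparsity and quantifying uncertainty.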
Bayesian dynamic mediation analysis.
Huang, Jing; Yuan, Ying
2017-12-01
Most existing methods for mediation analysis assume that mediation is a stationary, time-invariant process, which overlooks the inherently dynamic nature of many human psychological processes and behavioral activities. In this article, we consider mediation as a dynamic process that continuously changes over time. We propose Bayesian multilevel time-varying coefficient models to describe and estimate such dynamic mediation effects. By taking the nonparametric penalized spline approach, the proposed method is flexible and able to accommodate any shape of the relationship between time and mediation effects. Simulation studies show that the proposed method works well and faithfully reflects the true nature of the mediation process. By modeling mediation effect nonparametrically as a continuous function of time, our method provides a valuable tool to help researchers obtain a more complete understanding of the dynamic nature of the mediation process underlying psychological and behavioral phenomena. We also briefly discuss an alternative approach of using dynamic autoregressive mediation model to estimate the dynamic mediation effect. The computer code is provided to implement the proposed Bayesian dynamic mediation analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
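How ABC bypasses the likelihood can be shown with the simplest variant, rejection ABC: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands within a tolerance of the observed one. The model, prior, summary statistic, and tolerance below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(loc=1.5, scale=1.0, size=100)   # "observed" data
obs_stat = observed.mean()                            # summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)                        # draw from a flat prior
    sim = rng.normal(loc=theta, scale=1.0, size=100)  # simulate, no likelihood used
    if abs(sim.mean() - obs_stat) < 0.05:             # tolerance epsilon
        accepted.append(theta)

abc_estimate = np.mean(accepted)   # approximates the posterior mean
```

As the tolerance shrinks toward zero (and the summary statistic is sufficient), the accepted draws converge to samples from the true posterior; the price is a falling acceptance rate.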
Bayesian inference with ecological applications
Link, William A
2009-01-01
This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website, and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject; examples drawn from ecology and wildlife research; an essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference; companion website with analyt...
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistani statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes' probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds, with a strong promise for realistic applications. This article aims to introduce the Pakistani statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration with a couple of simplified examples.
Energy Technology Data Exchange (ETDEWEB)
Monroy G, F.; Escobar A, L.; Zepeda R, C. P.; Balcazar, M., E-mail: fabiola.monroy@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2017-09-15
The radioisotopes Am-243, Cm-244, Pu-242 and U-232 are used as radioactive tracers in the processes of separation and quantification of the radioisotopes of Am, Cm, Pu and U contained in radioactive and nuclear wastes, with the purpose of determining the efficiency of those processes. To this end, in this work, alpha-source standards of Am-243, Cm-244, Pu-242 and U-232 were prepared by two methods, evaporation and electro-deposition, and analyzed by alpha spectrometry to verify the properties of these radioactive standards. The alpha sources prepared by electro-deposition were analyzed by Raman spectrometry to determine the chemical form in which the actinide was deposited; the homogeneity of the deposit distribution was verified with solid-state nuclear track detectors. The resolution of the alpha spectra of the standards, obtained with surface-barrier detectors, is greater when they are prepared by electro-deposition. The FWHM of the alpha sources prepared by electro-deposition is always lower than that of sources prepared by evaporation. The actinides are electrodeposited in the form of hydroxides and oxo compounds. (Author)
International Nuclear Information System (INIS)
Comerford, Julia M.; Pooley, David; Gerke, Brian F.; Madejski, Greg M.
2011-01-01
We report Chandra observations of a double X-ray source in the z = 0.1569 galaxy SDSS J171544.05+600835.7. The galaxy was initially identified as a dual active galactic nucleus (AGN) candidate based on the double-peaked [O III] λ5007 emission lines, with a line-of-sight velocity separation of 350 km s⁻¹, in its Sloan Digital Sky Survey spectrum. We used the Kast Spectrograph at Lick Observatory to obtain two long-slit spectra of the galaxy at two different position angles, which reveal that the two Type 2 AGN emission components have not only a velocity offset, but also a projected spatial offset of 1.9 h₇₀⁻¹ kpc on the sky. Chandra/ACIS observations of two X-ray sources with the same spatial offset and orientation as the optical emission suggest that the galaxy most likely contains Compton-thick dual AGNs, although the observations could also be explained by AGN jets. Deeper X-ray observations that reveal Fe K lines, if present, would distinguish between the two scenarios. The observations of a double X-ray source in SDSS J171544.05+600835.7 are a proof of concept for a new, systematic detection method that selects promising dual AGN candidates from ground-based spectroscopy that exhibits both velocity and spatial offsets in the AGN emission features.
Borsboom, D.; Haig, B.D.
2013-01-01
Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science
How to become a Bayesian in eight easy steps : An annotated reading list
Etz, A.; Gronau, Q.F.; Dablander, F.; Edelsbrunner, P.A.; Baribault, B.
In this guide, we present a reading list to serve as a concise introduction to Bayesian data analysis. The introduction is geared toward reviewers, editors, and interested researchers who are new to Bayesian statistics. We provide commentary for eight recommended sources, which together cover the
Directory of Open Access Journals (Sweden)
Shu-Yin Chiang
2002-01-01
In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under the different output service schemes.
Yadav, Kunwar D; Tare, Vinod; Ahammed, M Mansoor
2011-06-01
The main objective of the present study was to determine the optimum stocking density for feed consumption rate, biomass growth and reproduction of earthworm Eisenia fetida as well as determining and characterising vermicompost quantity and product, respectively, during vermicomposting of source-separated human faeces. For this, a number of experiments spanning up to 3 months were conducted using soil and vermicompost as support materials. Stocking density in the range of 0.25-5.00 kg/m(2) was employed in different tests. The results showed that 0.40-0.45 kg-feed/kg-worm/day was the maximum feed consumption rate by E. fetida in human faeces. The optimum stocking densities were 3.00 kg/m(2) for bioconversion of human faeces to vermicompost, and 0.50 kg/m(2) for earthworm biomass growth and reproduction. Copyright © 2011 Elsevier Ltd. All rights reserved.
Separation of uranium isotopes
International Nuclear Information System (INIS)
Porter, J.T.
1980-01-01
Methods and apparatus are disclosed for the separation of uranium isotopes by selective isotopic excitation of a photochemically reactive uranyl salt source material at cryogenic temperatures, followed by chemical separation of the selectively photochemically reduced U⁴⁺ thereby produced from the remaining uranyl source material
Energy Technology Data Exchange (ETDEWEB)
Amirov, R. Kh.; Vorona, N. A.; Gavrikov, A. V.; Liziakin, G. D.; Polistchook, V. P.; Samoylov, I. S.; Smirnov, V. P.; Usmanov, R. A., E-mail: ravus46@yandex.ru; Yartsev, I. M. [Russian Academy of Sciences, Joint Institute for High Temperatures (Russian Federation)
2015-12-15
One of the key problems in the development of plasma separation technology is designing a plasma source which uses condensed spent nuclear fuel (SNF) or nuclear wastes as a raw material. This paper covers the experimental study of the evaporation and ionization of model materials (gadolinium, niobium oxide, and titanium oxide). For these purposes, a vacuum arc with a heated cathode on the studied material was initiated and its parameters in different regimes were studied. During the experiment, the cathode temperature, arc current, arc voltage, and plasma radiation spectra were measured, and also probe measurements were carried out. It was found that the increase in the cathode heating power leads to the decrease in the arc voltage (to 3 V). This fact makes it possible to reduce the electron energy and achieve singly ionized plasma with a high degree of ionization to fulfill one of the requirements for plasma separation of SNF. This finding is supported by the analysis of the plasma radiation spectrum and the results of the probe diagnostics.
Ma, Jing; Song, Zhi-Qiang; Yan, Fu-Hua
2014-01-01
To explore the feasibility of dual-source dual-energy computed tomography (DSDECT) for hepatic iron and fat separation in vivo. All of the procedures in this study were approved by the Research Animal Resource Center of Shanghai Ruijin Hospital. Sixty rats that underwent DECT scanning were divided into a normal group, fatty liver group, liver iron group, and coexisting liver iron and fat group, according to Prussian blue and HE staining. The data for each group were reconstructed and post-processed by an iron-specific, three-material decomposition algorithm. The iron enhancement value and the virtual non-iron contrast (VNC) value, which indicate overloaded liver iron and residual liver tissue, respectively, were measured. Spearman's correlation and one-way analysis of variance (ANOVA) were performed to analyze, respectively, the correlations with the histopathological results and the differences among groups. The iron enhancement values were positively correlated with the iron pathology grading (r = 0.729), and the VNC values were negatively correlated with the fat pathology grading (r = -0.642). Significant differences in VNC values among groups (F = 25.308) were observed only between the fat-present and fat-absent groups. Separation of hepatic iron and fat by dual-energy material decomposition in vivo was feasible, even when they coexisted.
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also makes it possible to consider more priors; in other words, different priors can be integrated into one model using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by treating the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as a finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
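The BMC algorithm itself is not reproduced here, but its accuracy-driven economy can be illustrated: instead of running a fixed (large) number of realizations, stop once the standard error of the running estimate falls below a desired tolerance. The model response, batch size, and tolerance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_mean(sample, tol=0.01, batch=500, max_n=200_000):
    # Accumulate realizations in batches; stop as soon as the standard
    # error of the mean drops below the desired accuracy tolerance.
    draws = []
    while len(draws) < max_n:
        draws.extend(sample(rng, batch))
        se = np.std(draws, ddof=1) / np.sqrt(len(draws))
        if se < tol:
            break
    return np.mean(draws), len(draws)

# Toy "model response": a standard normal, whose true mean is 0. With
# tol = 0.01 the loop stops near 10,000 draws instead of running 200,000.
est, n_used = estimate_mean(lambda r, k: list(r.normal(size=k)))
```

BMC goes further by exploiting prior information (here, the dependence of neighboring points) to predict outcomes of unevaluated realizations, cutting the realization count below even this accuracy-driven baseline.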
EXONEST: The Bayesian Exoplanetary Explorer
Directory of Open Access Journals (Sweden)
Kevin H. Knuth
2017-10-01
Full Text Available The fields of astronomy and astrophysics are currently engaged in an unprecedented era of discovery as recent missions have revealed thousands of exoplanets orbiting other stars. While the Kepler Space Telescope mission has enabled most of these exoplanets to be detected by identifying transiting events, exoplanets often exhibit additional photometric effects that can be used to improve the characterization of exoplanets. The EXONEST Exoplanetary Explorer is a Bayesian exoplanet inference engine based on nested sampling and originally designed to analyze archived Kepler Space Telescope and CoRoT (Convection Rotation et Transits planétaires) exoplanet mission data. We discuss the EXONEST software package and describe how it accommodates plug-and-play models of exoplanet-associated photometric effects for the purpose of exoplanet detection, characterization and scientific hypothesis testing. The current suite of models allows for both circular and eccentric orbits in conjunction with photometric effects, such as the primary transit and secondary eclipse, reflected light, thermal emissions, ellipsoidal variations, Doppler beaming and superrotation. We discuss our new efforts to expand the capabilities of the software to include more subtle photometric effects involving reflected and refracted light. We discuss the EXONEST inference engine design and introduce our plans to port the current MATLAB-based EXONEST software package over to the next generation Exoplanetary Explorer, which will be a Python-based open source project with the capability to employ third-party plug-and-play models of exoplanet-related photometric effects.
Book review: Bayesian analysis for population ecology
Link, William A.
2011-01-01
Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)
A combined evidence Bayesian method for human ancestry inference applied to Afro-Colombians.
Rishishwar, Lavanya; Conley, Andrew B; Vidakovic, Brani; Jordan, I King
2015-12-15
Uniparental genetic markers, mitochondrial DNA (mtDNA) and Y chromosomal DNA, are widely used for the inference of human ancestry. However, the resolution of ancestral origins based on mtDNA haplotypes is limited by the fact that such haplotypes are often found to be distributed across wide geographical regions. We have addressed this issue here by combining two sources of ancestry information that have typically been considered separately: historical records regarding population origins and genetic information on mtDNA haplotypes. To combine these distinct data sources, we applied a Bayesian approach that considers historical records, in the form of prior probabilities, together with data on the geographical distribution of mtDNA haplotypes, formulated as likelihoods, to yield ancestry assignments from posterior probabilities. This combined evidence Bayesian approach to ancestry assignment was evaluated for its ability to accurately assign sub-continental African ancestral origins to Afro-Colombians based on their mtDNA haplotypes. We demonstrate that the incorporation of historical prior probabilities via this analytical framework can provide for substantially increased resolution in sub-continental African ancestry assignment for members of this population. In addition, a personalized approach to ancestry assignment that involves the tuning of priors to individual mtDNA haplotypes yields even greater resolution for individual ancestry assignment. Despite the fact that Colombia has a large population of Afro-descendants, the ancestry of this community has been understudied relative to populations with primarily European and Native American ancestry. Thus, the application of the kind of combined evidence approach developed here to the study of ancestry in the Afro-Colombian population has the potential to be impactful. The formal Bayesian analytical framework we propose for combining historical and genetic information also has the potential to be widely applied
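The update described in this abstract is a direct application of Bayes' rule: historical records supply the prior over candidate source regions, and haplotype frequency data supply the likelihood. A minimal sketch of that combination, with entirely invented region names, priors, and haplotype frequencies (none of these numbers come from the paper):

```python
# Hypothetical combined-evidence Bayes update: historical priors over source
# regions combined with haplotype-frequency likelihoods to yield a posterior
# ancestry assignment. All numbers below are illustrative placeholders.

regions = ["West-Central Africa", "Southeast Africa", "West Africa"]

# Prior: proportions suggested by (hypothetical) historical records.
prior = {"West-Central Africa": 0.55, "Southeast Africa": 0.10, "West Africa": 0.35}

# Likelihood: frequency of the observed mtDNA haplotype in each region,
# taken from a (hypothetical) reference panel.
likelihood = {"West-Central Africa": 0.04, "Southeast Africa": 0.01, "West Africa": 0.02}

# Posterior via Bayes' rule: P(region | haplotype) ∝ P(region) * P(haplotype | region).
unnorm = {r: prior[r] * likelihood[r] for r in regions}
evidence = sum(unnorm.values())
posterior = {r: unnorm[r] / evidence for r in regions}

best = max(posterior, key=posterior.get)  # highest-posterior assignment
```

Tuning `prior` per individual, as the abstract suggests, changes only the first dictionary; the rest of the update is unchanged.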
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on
Bayesian image restoration, using configurations
Thorarinsdottir, Thordis
2006-01-01
In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...
Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization
Gelman, Andrew; Lee, Daniel; Guo, Jiqiang
2015-01-01
Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models and can be called from the command line, R, Python, Matlab, or Julia and has great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…
Directory of Open Access Journals (Sweden)
Jing Ma
Full Text Available OBJECTIVE: To explore the feasibility of dual-source dual-energy computed tomography (DSDECT) for hepatic iron and fat separation in vivo. MATERIALS AND METHODS: All of the procedures in this study were approved by the Research Animal Resource Center of Shanghai Ruijin Hospital. Sixty rats that underwent DECT scanning were divided into the normal group, fatty liver group, liver iron group, and coexisting liver iron and fat group, according to Prussian blue and HE staining. The data for each group were reconstructed and post-processed by an iron-specific, three-material decomposition algorithm. The iron enhancement value and the virtual non-iron contrast value, which indicated overloaded liver iron and residual liver tissue, respectively, were measured. Spearman's correlation and one-way analysis of variance (ANOVA) were performed, respectively, to analyze statistically the correlations with the histopathological results and differences among groups. RESULTS: The iron enhancement values were positively correlated with the iron pathology grading (r = 0.729, p<0.001). Virtual non-iron contrast (VNC) values were negatively correlated with the fat pathology grading (r = -0.642, p<0.0001). Different groups showed significantly different iron enhancement values and VNC values (F = 25.308, p<0.001; F = 10.911, p<0.001, respectively). Among the groups, significant differences in iron enhancement values were only observed between the iron-present and iron-absent groups, and differences in VNC values were only observed between the fat-present and fat-absent groups. CONCLUSION: Separation of hepatic iron and fat by dual energy material decomposition in vivo was feasible, even when they coexisted.
Embedding the results of focussed Bayesian fusion into a global context
Sander, Jennifer; Heizmann, Michael
2014-05-01
Bayesian statistics offers a well-founded and powerful methodology for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task-relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where criminalists likewise pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion, in which the actual calculation of the posterior distribution is completely restricted to a suitably chosen local context. By this, the global posterior distribution is not completely determined, so strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in various other areas. To address the special need for making further decisions subsequent to the actual fusion task, we further analyze criteria for decision making under partial information.
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
The Bayesian Approach to Association
Arora, N. S.
2017-12-01
The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, amplitude, etc.; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We would now define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given the above joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations. P(Y1 | X) captures notions of travel time and residual error distributions, as well as detection and mis-detection probabilities, while P(Y2) captures the false detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the model for P(X), P(Y1 | X) and P(Y2); the implementation of the inference is merely a by-product of this model. In contrast, some of the other methods, such as GA, hide a number of assumptions in the implementation details of the inference, such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model. So, we don't need to separately account for misdetections or merge seismic and infrasound events as a separate step. Finally, it should be noted that the objective of automatic association is to simplify the job of humans who are publishing seismic bulletins based on this
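The MAP objective stated in the abstract, maximizing P(X) P(Y1 | X) P(Y2), can be illustrated by brute-force enumeration over partitions of a handful of detections. All distributions and numbers below are hypothetical placeholders, not the paper's actual model:

```python
import math

# Toy MAP association: score each partition of the detections into
# event-caused (Y1) and false (Y2) by log P(X) + log P(Y1|X) + log P(Y2),
# and keep the best. Distributions and constants are invented for illustration.

def log_gauss(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Arrival-time residuals (seconds) of three detections relative to the
# predicted travel time for one candidate event.
residuals = [0.3, -0.5, 9.0]

log_prior_event = math.log(0.1)   # log P(X): prior belief in the event
log_false_rate = math.log(0.01)   # log P(Y2) contribution per false detection
sigma_tt = 1.0                    # assumed travel-time residual std

best_score, best_assoc = -math.inf, None
# Enumerate all 2^3 partitions: bit i set -> detection i caused by the event.
for mask in range(2 ** len(residuals)):
    score = log_prior_event
    for i, r in enumerate(residuals):
        if mask & (1 << i):
            score += log_gauss(r, 0.0, sigma_tt)  # log P(detection | event)
        else:
            score += log_false_rate               # log P(detection is noise)
    if score > best_score:
        best_score, best_assoc = score, mask

associated = [i for i in range(len(residuals)) if best_assoc & (1 << i)]
```

The two small residuals get associated with the event while the 9-second outlier is explained as a false detection, exactly the trade-off the P(Y1 | X) versus P(Y2) terms encode.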
Rigaux, Clémence; Denis, Jean-Baptiste; Albert, Isabelle; Carlin, Frédéric
2013-02-01
Predicting microbial survival requires reference parameters for each micro-organism of concern. When data are abundant and publicly available, a meta-analysis is a useful approach for assessment of these parameters, which can be performed with hierarchical Bayesian modeling. Geobacillus stearothermophilus is a major agent of microbial spoilage of canned foods and is therefore a persistent problem in the food industry. The thermal inactivation parameters of G. stearothermophilus (D(ref), i.e. the decimal reduction time D at the reference temperature 121.1°C and pH 7.0, z(T) and z(pH)) were estimated from a large set of 430 D values mainly collected from the scientific literature. Between-study variability hypotheses on the inactivation parameters D(ref), z(T) and z(pH) were explored, using three different hierarchical Bayesian models. Parameter estimations were made using Bayesian inference and the models were compared with a graphical and a Bayesian criterion. Results show the necessity to account for random effects associated with between-study variability. Assuming variability on D(ref), z(T) and z(pH), the resulting distributions for D(ref), z(T) and z(pH) led to a mean of 3.3 min for D(ref) (95% Credible Interval CI=[0.8; 9.6]), to a mean of 9.1°C for z(T) (CI=[5.4; 13.1]) and to a mean of 4.3 pH units for z(pH) (CI=[2.9; 6.3]), in the range pH 3 to pH 7.5. Results are also given separating variability and uncertainty in these distributions, as well as adjusted parametric distributions to facilitate further use of these results in aqueous canned foods such as canned vegetables. Copyright © 2012 Elsevier B.V. All rights reserved.
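The reported posterior means can be plugged into a Bigelow/Mafart-type secondary model to predict D values away from the reference conditions. The exact functional form is an assumption here (the abstract does not spell it out; the squared pH term of Mafart's model is one common convention):

```python
import math

# Hedged sketch of a Bigelow/Mafart-type secondary model using the posterior
# means reported above: Dref = 3.3 min at Tref = 121.1 °C and pH 7.0,
# zT = 9.1 °C, zpH = 4.3 pH units. The squared pH term is an assumed,
# commonly used form, not taken from the paper.

D_REF, T_REF, PH_REF = 3.3, 121.1, 7.0   # min, °C, pH
Z_T, Z_PH = 9.1, 4.3

def decimal_reduction_time(temp_c, ph):
    """Predicted D value (min) at a given temperature and pH."""
    log10_d = (math.log10(D_REF)
               - (temp_c - T_REF) / Z_T
               - ((ph - PH_REF) / Z_PH) ** 2)
    return 10 ** log10_d

# At the reference conditions the model returns Dref itself.
d_ref_check = decimal_reduction_time(121.1, 7.0)

# Example: a milder temperature (115 °C) at pH 5 gives a larger D value,
# i.e. slower thermal inactivation.
d_example = decimal_reduction_time(115.0, 5.0)
```

A canner would pair such a secondary model with the process temperature profile to integrate lethality over time.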
International Nuclear Information System (INIS)
Ng, C.G.; Sumiani Yusoff
2015-01-01
In Malaysia, the greenhouse gas (GHG) emissions reduction via composting of source-separated organic waste (SOW) in municipal solid waste (MSW) has not been assessed. Assessment of GHG emissions reduction via composting of SOW is important, as environmental impacts from waste management are waste-specific and local-specific. The study presents a case study of potential carbon reduction via composting of SOW at the University of Malaya (UM). In this study, a series of calculations was used to evaluate the GHG emissions of different SOW management scenarios. The calculations, based on IPCC calculation methods (AM0025), include GHG emissions from landfilling, fuel consumption in transportation and SOW composting activity. The methods were applied to assess the GHG emissions from five alternative SOW management scenarios in UM. From the baseline scenario (S0), a total of 1,636.18 tCO2e was generated. In conjunction with a target recycling rate of 22 %, as shown in S1, a 14 % reduction in potential GHG emissions can be achieved. The carbon reduction can be further enhanced by increasing the SOW composting capacity. The net GHG emissions for S1, S2, S3 and S4 were 1,399.52, 1,161.29, 857.70 and 1,060.48 tCO2e, respectively. In general, waste diversion for composting yielded a significant net GHG emission reduction, as shown in S3 (47 %), S4 (35 %) and S2 (29 %). Despite the emissions from direct on-site activity, the significant reduction in methane generation at the landfill reduced the net GHG emissions. The emission source of each scenario was studied and analysed. (author)
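The percentage reductions quoted in this abstract follow directly from the scenario totals; a few lines of arithmetic reproduce them (values in tCO2e taken from the abstract, truncated to whole percent as in the text):

```python
# Cross-check of the scenario reductions relative to baseline S0.
baseline = 1636.18  # tCO2e, scenario S0
net_emissions = {"S1": 1399.52, "S2": 1161.29, "S3": 857.70, "S4": 1060.48}

# Percent reduction versus baseline, truncated to whole percent.
reduction_pct = {s: int(100 * (baseline - e) / baseline)
                 for s, e in net_emissions.items()}
# Matches the figures in the text: S1 14 %, S2 29 %, S3 47 %, S4 35 %.
```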
International Nuclear Information System (INIS)
Li, Zhixiong; Peng, Z
2016-01-01
The normal operation of propulsion gearboxes ensures ship safety. Chaos indicators can efficiently indicate the state change of the gearboxes. However, accurate detection of gearbox hybrid faults using Chaos indicators is a challenging task, and detection under speed variation conditions is attracting considerable attention. A literature review suggests that the gearbox vibration is a nonlinear mixture of various vibration sources and that blind source separation (BSS) is a promising technique for fault vibration analysis, but very limited work has addressed the nonlinear BSS approach for hybrid fault decoupling diagnosis. Aiming to enhance the fault detection performance of Chaos indicators, this work presents a new nonlinear BSS algorithm for gearbox hybrid fault detection under a speed variation condition. The new method introduces the kernel spectral regression (KSR) framework into the morphological component analysis (MCA). The original vibration data are projected into the reproducing kernel Hilbert space (RKHS), where the intrinsic nonlinear structure in the original data can be linearized by KSR. Thus the MCA is able to deal with nonlinear BSS in the KSR space. Reliable hybrid fault decoupling is then achieved by this new nonlinear MCA (NMCA). Subsequently, by calculating the Chaos indicators of the decoupled fault components and comparing them with benchmarks, the hybrid faults can be precisely identified. Two specially designed case studies were implemented to evaluate the proposed NMCA-Chaos method on hybrid gear fault decoupling diagnosis. The performance of NMCA-Chaos was compared with state-of-the-art techniques. The analysis results show high performance of the proposed method on hybrid fault detection in a marine propulsion gearbox with large speed variations.
Bayesian Modeling of the Assimilative Capacity Component of Stream Nutrient Export
Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a TMDL load capacity is developed...
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches can treat well the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows, compared with the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
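The standard Bayesian methods mentioned here rely on Metropolis-Hastings sampling. A minimal random-walk MH sketch on a toy conjugate-normal problem (a stand-in for the WASMOD posterior, which is not reproduced here; all numbers are illustrative):

```python
import math
import random

# Random-walk Metropolis-Hastings on a toy problem: infer the mean mu of
# N(mu, 1) data under a wide N(0, 10^2) prior. The true posterior is known,
# so the sampler's output can be sanity-checked.

random.seed(42)
data = [random.gauss(2.0, 1.0) for _ in range(200)]  # synthetic observations

def log_posterior(mu):
    lp = -mu ** 2 / (2 * 10.0 ** 2)                  # log N(0, 10^2) prior
    lp += sum(-(x - mu) ** 2 / 2.0 for x in data)    # log N(mu, 1) likelihood
    return lp

samples, mu = [], 0.0
current_lp = log_posterior(mu)
for _ in range(5000):
    proposal = mu + random.gauss(0.0, 0.3)           # random-walk proposal
    proposal_lp = log_posterior(proposal)
    # Accept with probability min(1, posterior ratio), done in log space.
    if math.log(random.random()) < proposal_lp - current_lp:
        mu, current_lp = proposal, proposal_lp
    samples.append(mu)

burn_in = 1000
posterior_mean = sum(samples[burn_in:]) / len(samples[burn_in:])
```

The posterior mean recovered from the chain lands near the data-generating value of 2.0, as the conjugate-normal theory predicts.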
Kavvada, Olga; Tarpeh, William A; Horvath, Arpad; Nelson, Kara L
2017-11-07
Nitrogen standards for discharge of wastewater effluent into aquatic bodies are becoming more stringent, requiring some treatment plants to reduce effluent nitrogen concentrations. This study aimed to assess, from a life-cycle perspective, an innovative decentralized approach to nitrogen recovery: ion exchange of source-separated urine. We modeled an approach in which nitrogen from urine at individual buildings is sorbed onto resins, then transported by truck to regeneration and fertilizer production facilities. To provide insight into impacts from transportation, we enhanced the traditional economic and environmental assessment approach by combining spatial analysis, system-scale evaluation, and detailed last-mile logistics modeling using the city of San Francisco as an illustrative case study. The major contributor to energy intensity and greenhouse gas (GHG) emissions was the production of sulfuric acid to regenerate resins, rather than transportation. Energy and GHG emissions were not significantly sensitive to the number of regeneration facilities. Cost, however, increased with decentralization as rental costs per unit area are higher for smaller areas. The metrics assessed (unit energy, GHG emissions, and cost) were not significantly influenced by facility location in this high-density urban area. We determined that this decentralized approach has lower cost, unit energy, and GHG emissions than centralized nitrogen management via nitrification-denitrification if fertilizer production offsets are taken into account.
International Nuclear Information System (INIS)
Fedosseev, V.N.
2005-01-01
Full text: The resonance ionisation laser ion source (RILIS) of the ISOLDE on-line isotope separation facility at CERN is based on the method of laser step-wise resonance ionisation of atoms in a hot metal cavity. Using the system of dye lasers pumped by copper vapour lasers the ion beams of many different metallic elements have been produced at ISOLDE with an ionization efficiency of up to 27%. The high selectivity of the resonance ionization is an important asset for the study of short-lived nuclides produced in targets bombarded by the proton beam of the CERN Booster accelerator. Radioactive ion beams of Be, Mg, Al, Mn, Ni, Cu, Zn, Ga, Ag, Cd, In, Sn, Sb, Tb, Yb, Tl, Pb and Bi have been generated with the RILIS. Setting the RILIS laser in the narrow line-width mode provides conditions for a high-resolution study of hyperfine structure and isotopic shifts of atomic lines for short-lived isotopes. The isomer selective ionization of Cu, Ag and Pb isotopes has been achieved by appropriate tuning of laser wavelengths
Mohammad-Djafari, Ali
2015-01-01
The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.
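As a small numerical companion to the quantities reviewed in this tutorial, Shannon entropy and the Kullback-Leibler divergence can be computed directly for discrete distributions (illustrative numbers only, in nats):

```python
import math

# Shannon entropy H(p) and Kullback-Leibler divergence KL(p || q) for
# discrete distributions, using the natural logarithm (units of nats).

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]      # a non-uniform distribution
q = [1/3, 1/3, 1/3]        # the uniform distribution on three outcomes

h_p = entropy(p)           # strictly less than log(3), the maximum entropy
d_pq = kl_divergence(p, q) # strictly positive, zero only when p == q
```

These two quantities underpin the Maximum Entropy Principle mentioned in the abstract: among all distributions satisfying given constraints, MEP selects the one with the largest H.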
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
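The abstract notes that the Gaussian posterior has explicit expressions for its expectation and covariance. A scalar toy version of that conjugate update for a linear forward model d = g·m + e (numbers are illustrative, not the paper's model):

```python
# Scalar conjugate-Gaussian update: linear forward model d_i = g_i * m + e_i,
# prior m ~ N(mu0, s0^2), independent noise e_i ~ N(0, se^2). The posterior
# is Gaussian with closed-form precision and mean. All values are invented.

g = [1.0, 0.8, 1.2]        # forward-model coefficients (assumed)
d = [2.1, 1.7, 2.5]        # observed data (assumed)
mu0, s0 = 0.0, 10.0        # prior mean and std of the model parameter
se = 0.2                   # noise std

# Posterior precision = prior precision + data precision.
prec = 1.0 / s0 ** 2 + sum(gi ** 2 for gi in g) / se ** 2
post_var = 1.0 / prec
# Posterior mean = posterior variance * (precision-weighted information).
post_mean = post_var * (mu0 / s0 ** 2 + sum(gi * di for gi, di in zip(g, d)) / se ** 2)
```

The vector-valued version in the paper has the same structure, with matrices in place of the scalars, which is why exact prediction intervals are available without sampling.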
Bayesian Latent Class Analysis Tutorial.
Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca
2018-01-01
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes' Theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied in a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
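A bare-bones sketch of Gibbs sampling for a two-class latent class model with binary items, in the spirit of the tutorial (the priors, data, and dimensions below are invented, and the article's own R program is far more complete):

```python
import random

# Gibbs sampler for a 2-class LCA with 4 binary items: alternate between
# (1) sampling each latent class label from its full conditional and
# (2) sampling item probabilities (Beta(1,1) priors) and class weights
# (Dirichlet(1,1) prior) given the labels. Toy data only.

random.seed(0)

# Simulate 100 respondents from two well-separated hypothetical classes.
true_theta = [[0.9, 0.9, 0.8, 0.9], [0.1, 0.2, 0.1, 0.1]]
data = []
for i in range(100):
    k = 0 if i < 50 else 1
    data.append([1 if random.random() < true_theta[k][j] else 0 for j in range(4)])

K, J, N = 2, 4, len(data)
theta = [[0.5] * J for _ in range(K)]        # item-response probabilities
pi = [0.5, 0.5]                              # class weights
z = [random.randrange(K) for _ in range(N)]  # latent class labels

for sweep in range(200):
    # (1) Sample each label z_i proportional to pi_k * prod_j P(y_ij | theta_kj).
    for i, y in enumerate(data):
        w = []
        for k in range(K):
            p = pi[k]
            for j in range(J):
                p *= theta[k][j] if y[j] else 1.0 - theta[k][j]
            w.append(p)
        u, acc = random.random() * sum(w), 0.0
        for k in range(K):
            acc += w[k]
            if u <= acc:
                z[i] = k
                break
    # (2) Conjugate updates: Beta for theta, Dirichlet (via Gammas) for pi.
    for k in range(K):
        members = [data[i] for i in range(N) if z[i] == k]
        for j in range(J):
            ones = sum(y[j] for y in members)
            theta[k][j] = random.betavariate(1 + ones, 1 + len(members) - ones)
    counts = [z.count(k) for k in range(K)]
    g = [random.gammavariate(1 + c, 1.0) for c in counts]
    pi = [gk / sum(g) for gk in g]
```

Each sweep is exactly the "series of simpler calculations" the abstract describes: every conditional draw is a standard textbook distribution.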
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as fuzzy or Gaussian based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, the Bayesian approach is known to require a high computational cost for high-dimensional data and large numbers of data points, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the Correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional space. The simulation experiments show that KBA has better self-organizing capability than BA, and KBAM provides better classification ability than BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Dynamic BI–Orthogonal Field Equation Approach to Efficient Bayesian Inversion
Directory of Open Access Journals (Sweden)
Tagade Piyush M.
2017-06-01
Full Text Available This paper proposes a novel computationally efficient stochastic spectral projection based approach to Bayesian inversion of a computer simulator with high dimensional parametric and model structure uncertainty. The proposed method is based on the decomposition of the solution into its mean and a random field using a generic Karhunen-Loève expansion. The random field is represented as a convolution of separable Hilbert spaces in stochastic and spatial dimensions that are spectrally represented using respective orthogonal bases. In particular, the present paper investigates generalized polynomial chaos bases for the stochastic dimension and eigenfunction bases for the spatial dimension. Dynamic orthogonality is used to derive closed-form equations for the time evolution of mean, spatial and the stochastic fields. The resultant system of equations consists of a partial differential equation (PDE) that defines the dynamic evolution of the mean, a set of PDEs to define the time evolution of eigenfunction bases, while a set of ordinary differential equations (ODEs) define dynamics of the stochastic field. This system of dynamic evolution equations efficiently propagates the prior parametric uncertainty to the system response. The resulting bi-orthogonal expansion of the system response is used to reformulate the Bayesian inference for efficient exploration of the posterior distribution. The efficacy of the proposed method is investigated for calibration of a 2D transient diffusion simulator with an uncertain source location and diffusivity. The computational efficiency of the method is demonstrated against a Monte Carlo method and a generalized polynomial chaos approach.
Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon
2018-01-01
Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed at stations of a small‐aperture array (covering approximately 1 km²) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008, and its aftershock sequence. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. According to the posterior density, the variance of the station effect is clearly different from zero, indicating that individual station effects differ from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies the extent to which the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
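The event/station random-effects structure described above can be illustrated with a toy simulation (all variance components and dimensions below are invented for illustration; the paper estimates such quantities by MCMC). The point of the sketch is that station effects are identifiable, up to a constant, by averaging over many events:

```python
import numpy as np

rng = np.random.default_rng(1)
n_events, n_stations = 40, 10

# Assumed variance components (illustrative, not the paper's estimates).
tau_event, tau_station, sigma = 0.30, 0.15, 0.10

event = rng.normal(0.0, tau_event, n_events)        # event effects
station = rng.normal(0.0, tau_station, n_stations)  # station effects
# log-PGA for each event-station pair: mean + event + station + residual
y = 2.0 + event[:, None] + station[None, :] \
    + rng.normal(0.0, sigma, (n_events, n_stations))

# Averaging over events recovers the (centered) station effects even
# though the residual noise masks them in any single record.
est = y.mean(axis=0) - y.mean()
true = station - station.mean()
print(np.round(est - true, 2))   # small discrepancies
```
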
Rahpeyma, Sahar
2018-04-17
Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed at stations of a small‐aperture array (covering approximately 1 km²) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008, and its aftershock sequence. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. According to the posterior density, the variance of the station effect is clearly different from zero, indicating that individual station effects differ from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies the extent to which the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
Bayesian image processing in two and three dimensions
International Nuclear Information System (INIS)
Hart, H.; Liang, Z.
1986-01-01
Tomographic image processing customarily analyzes data acquired over a series of projective orientations. If, however, the point source function (the matrix R) of the system is strongly depth dependent, tomographic information is also obtainable from a series of parallel planar images corresponding to different "focal" depths. Bayesian image processing (BIP) was carried out for two and three dimensional spatially uncorrelated discrete amplitude a priori source distributions
BATSE gamma-ray burst line search. 2: Bayesian consistency methodology
Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.
1994-01-01
We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.
Flood quantile estimation at ungauged sites by Bayesian networks
Mediero, L.; Santillán, D.; Garrote, L.
2012-04-01
Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of flood magnitudes, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals of the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which has been widely and successfully applied to many scientific fields like medicine and informatics, but application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows taking the different sources of estimation uncertainty into account, as they give a probability distribution as a result. A homogeneous region in the Tagus Basin was selected as a case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. As observational data are reduced, a
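The regional regression technique described above can be sketched on synthetic data. In this toy sketch the power-law form, the coefficient values and the variable ranges are all assumptions for illustration; it only shows the log-linear fit used at gauged sites:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
area = rng.uniform(50, 2000, n)    # basin area, km^2 (synthetic)
p24 = rng.uniform(40, 120, n)      # annual max 24-h rainfall, mm (synthetic)

# Synthetic "true" power-law relation: Q = a * area^b * p24^c.
logq = 0.5 + 0.8 * np.log(area) + 1.2 * np.log(p24) \
       + rng.normal(0.0, 0.1, n)

# Fit the log-linear regression from the gauged "sites".
X = np.column_stack([np.ones(n), np.log(area), np.log(p24)])
beta, *_ = np.linalg.lstsq(X, logq, rcond=None)
print(np.round(beta, 2))   # slopes land near the true 0.8 and 1.2
```
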
Interactive Instruction in Bayesian Inference
DEFF Research Database (Denmark)
Khan, Azam; Breslav, Simon; Hornbæk, Kasper
2018-01-01
An instructional approach is presented to improve human performance in solving Bayesian inference problems. Starting from the original text of the classic Mammography Problem, the textual expression is modified and visualizations are added according to Mayer's principles of instruction. These principles concern coherence, personalization, signaling, segmenting, multimedia, spatial contiguity, and pretraining. Principles of self-explanation and interactivity are also applied. Four experiments on the Mammography Problem showed that these principles help participants answer the questions, suggesting that an instructional approach to improving human performance in Bayesian inference is a promising direction.
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.
Bayesian analysis of CCDM models
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
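The information criteria used for the comparison are simple penalized goodness-of-fit scores. A sketch with hypothetical χ² values (the numbers below are illustrative, not the paper's fits):

```python
import numpy as np

def aic(chi2_min, k):
    """Akaike Information Criterion for a fit with k free parameters."""
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    """Bayesian Information Criterion; penalizes parameters more
    strongly than AIC once n > e^2 ~ 7.4 data points."""
    return chi2_min + k * np.log(n)

# Hypothetical fits to n = 580 SNe Ia: model B fits slightly better
# but uses two extra parameters (numbers are illustrative only).
n = 580
chi2_a, k_a = 562.0, 1
chi2_b, k_b = 559.5, 3
print(aic(chi2_a, k_a), aic(chi2_b, k_b))         # 564.0 565.5
print(bic(chi2_a, k_a, n) < bic(chi2_b, k_b, n))  # True: BIC prefers A
```

Even a modest penalty difference can flip the ranking, which is why AIC, BIC and the full Bayesian evidence need not agree.
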
Bayesian analysis of CCDM models
Energy Technology Data Exchange (ETDEWEB)
Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
Energy Technology Data Exchange (ETDEWEB)
Wojtasiewicz, A. [Warsaw Univ., Inst. of Experimental Physics, Nuclear Physics Div., Warsaw (Poland)
1997-12-31
Since 1995 the University of Warsaw Isotope Separator group has participated in the ISOL/IGISOL project at the Heavy Ion Cyclotron. This project consists of the installation of an isotope separator (on-line with the cyclotron heavy ion beam) with a hot plasma ion source (ISOL system) and/or with an ion guide source (IGISOL system). The report gives a short description of the present status of the project. 2 figs, 10 refs.
Separating Underdetermined Convolutive Speech Mixtures
DEFF Research Database (Denmark)
Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan
2006-01-01
a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...
Learning Bayesian networks for discrete data
Liang, Faming; Zhang, Jian
2009-01-01
Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learn Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. Firstly
Bayesian Network Induction via Local Neighborhoods
National Research Council Canada - National Science Library
Margaritis, Dimitris
1999-01-01
We present an efficient algorithm for learning Bayesian networks from data. Our approach constructs Bayesian networks by first identifying each node's Markov blankets, then connecting nodes in a consistent way...
Can a significance test be genuinely Bayesian?
Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio
2008-01-01
The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
Bayesian modeling using WinBUGS
Ntzoufras, Ioannis
2009-01-01
A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distributions and model checking; Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
Oviedo-Ocaña, E R; Torres-Lozada, P; Marmolejo-Rebellon, L F; Torres-López, W A; Dominguez, I; Komilis, D; Sánchez, A
2017-04-01
Biowaste is commonly the largest fraction of municipal solid waste (MSW) in developing countries. Although composting is an effective method to treat source-separated biowaste (SSB), there are certain limitations in terms of operation, partly due to insufficient control of the variability of SSB quality, which affects process kinetics and product quality. This study assesses the variability of the SSB physicochemical quality in a composting facility located in a small town of Colombia, in which SSB collection was performed twice a week. Likewise, the influence of the SSB physicochemical variability on the variability of compost parameters was assessed. Parametric and non-parametric tests (i.e. Student's t-test and the Mann-Whitney test) showed no significant differences in the quality parameters of SSB among collection days, and therefore it was unnecessary to establish specific operation and maintenance regulations for each collection day. Significant variability was found in eight of the twelve quality parameters analyzed in the inlet stream, with corresponding coefficients of variation (CV) higher than 23%. The CVs for the eight parameters analyzed in the final compost (i.e. pH, moisture, total organic carbon, total nitrogen, C/N ratio, total phosphorus, total potassium and ash) ranged from 9.6% to 49.4%, with significant variations in five of those parameters (CV > 20%). These results indicate that variability in the inlet stream can affect the variability of the end product, and suggest the need to consider inlet-stream variability in the operation of composting facilities to achieve a compost of consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
Naroznova, Irina; Møller, Jacob; Scheutz, Charlotte
2016-12-01
This study compared the environmental profiles of anaerobic digestion (AD) and incineration, in relation to global warming potential (GWP), for treating individual material fractions that may occur in source-separated organic household waste (SSOHW). Different framework conditions representative of the European Union member countries were considered. For AD, biogas utilisation with a biogas engine was considered, and two potential situations were investigated: biogas combustion with (1) combined heat and power production (CHP) and (2) electricity production only. For incineration, four technology options currently available in Europe were covered: (1) an average incinerator with CHP production, (2) an average incinerator with mainly electricity production, (3) an average incinerator with mainly heat production and (4) a state-of-the-art incinerator with CHP working at high energy recovery efficiencies. The study was performed using a life cycle assessment in its consequential approach. Furthermore, the role of waste-sorting guidelines (defined by the material fractions allowed for SSOHW) in relation to the GWP of treating overall SSOHW with AD was investigated. A case study of treating 1 tonne of SSOHW under framework conditions in Denmark was conducted. Under the given assumptions, vegetable food waste was the only material fraction for which AD was always better than incineration. For animal food waste, kitchen tissue, vegetation waste and dirty paper, AD utilisation was better unless it was compared to a highly efficient incinerator. Material fractions such as moulded fibres and dirty cardboard were attractive for AD, albeit only when AD with CHP and incineration with mainly heat production were compared. Animal straw, in contrast, was always better to incinerate. Considering the total amounts of individual material fractions in waste generated within households in Denmark, food waste (both animal and vegetable derived) and kitchen tissue are the main material
Inference in hybrid Bayesian networks
International Nuclear Information System (INIS)
Langseth, Helge; Nielsen, Thomas D.; Rumi, Rafael; Salmeron, Antonio
2009-01-01
Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for Boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (the so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
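The simplest hybrid BN, a discrete node with one continuous child under a conditional linear Gaussian model, still admits exact inference by enumeration. A sketch in the spirit of the human-reliability example (the states, prior, and Gaussian parameters below are invented for illustration):

```python
import numpy as np

def gauss(x, mu, sd):
    """Gaussian density, written out to keep the sketch dependency-free."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hybrid BN: discrete node D (component OK / failed) with a continuous
# child X (a sensor reading), conditional linear Gaussian:
#   X | D=ok     ~ N(5.0, 1.0)       (all numbers are assumptions)
#   X | D=failed ~ N(8.0, 2.0)
prior = {"ok": 0.95, "failed": 0.05}
like = {"ok": (5.0, 1.0), "failed": (8.0, 2.0)}

def posterior(x):
    """Exact discrete posterior after observing the continuous child."""
    joint = {d: prior[d] * gauss(x, *like[d]) for d in prior}
    z = sum(joint.values())
    return {d: joint[d] / z for d in joint}

print(posterior(5.0)["failed"])   # tiny: the reading looks nominal
print(posterior(9.0)["failed"])   # failure now dominates
```

The general difficulty the paper surveys is that this enumeration explodes once many discrete and continuous variables interact, which is why approximate schemes are needed.
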
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Bayesian methods for proteomic biomarker development
Directory of Open Access Journals (Sweden)
Belinda Hernández
2015-12-01
In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
Bayesian networks and food security - An introduction
Stein, A.
2004-01-01
This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision
Plug & Play object oriented Bayesian networks
DEFF Research Database (Denmark)
Bangsø, Olav; Flores, J.; Jensen, Finn Verner
2003-01-01
been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed ... dynamic domains. The communication needed between instances is achieved by means of a fill-in propagation scheme....
A Bayesian framework for risk perception
van Erp, H.R.N.
2017-01-01
We present here a Bayesian framework of risk perception. This framework encompasses plausibility judgments, decision making, and question asking. Plausibility judgments are modeled by way of Bayesian probability theory, decision making is modeled by way of a Bayesian decision theory, and relevancy
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or costly to obtain; the lack of data can then be compensated by introducing a priori estimations. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.
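The per-cell fusion step behind such a merge can be sketched as a precision-weighted Gaussian update, combining two sensor heights with a prior. All heights, standard deviations, and the smoothness-based prior below are invented for illustration, not WorldView/Pleiades specifications:

```python
import numpy as np

# Two DSM height estimates for the same cell, with assumed per-sensor
# standard deviations (metres), plus a smoothness-based prior guess.
z1, s1 = 102.4, 0.8      # height from sensor 1
z2, s2 = 103.1, 1.5      # height from sensor 2
z0, s0 = 102.0, 2.0      # prior (e.g. from a flat-roof assumption)

# Posterior under independent Gaussian errors: precisions (1/sd^2) add,
# and the posterior mean is the precision-weighted average.
w = np.array([1 / s0**2, 1 / s1**2, 1 / s2**2])
z = np.array([z0, z1, z2])
z_post = float(np.sum(w * z) / np.sum(w))
s_post = float(np.sqrt(1.0 / np.sum(w)))
print(round(z_post, 2), round(s_post, 2))   # 102.49 0.67
```

The fused cell is both a compromise between the sources and more certain than either of them alone, which is the benefit the abstract reports over using a single DSM.
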
MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES
Directory of Open Access Journals (Sweden)
H. Sadeq
2016-06-01
Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or costly to obtain; the lack of data can then be compensated by introducing a priori estimations. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.
Bayesian NL interpretation and learning
Zeevat, H.
2011-01-01
Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by a very high degree of ambiguity and the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis Linda
2006-01-01
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...
Differentiated Bayesian Conjoint Choice Designs
Z. Sándor (Zsolt); M. Wedel (Michel)
2003-01-01
Previous conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...
Bayesian Sampling using Condition Indicators
DEFF Research Database (Denmark)
Faber, Michael H.; Sørensen, John Dalsgaard
2002-01-01
of condition indicators introduced by Benjamin and Cornell (1970) a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators...
Bayesian Classification of Image Structures
DEFF Research Database (Denmark)
Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert
2009-01-01
In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensi...
Bayesian estimates of linkage disequilibrium
Directory of Open Access Journals (Sweden)
Abad-Grau María M
2007-06-01
Full Text Available Background: The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results: This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distance between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion: Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view of the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome-wide association studies.
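The quantity being estimated, and the flavour of the bias correction, can be sketched as follows. The haplotype counts and the flat Dirichlet prior are illustrative assumptions; the paper's actual prior depends on the physical distance between SNPs:

```python
import numpy as np

def d_prime(p):
    """D' from the four haplotype frequencies [p_AB, p_Ab, p_aB, p_ab]."""
    pAB, pAb, paB, pab = p
    pA, pB = pAB + pAb, pAB + paB
    D = pAB - pA * pB
    dmax = min(pA * (1 - pB), (1 - pA) * pB) if D > 0 else \
           min(pA * pB, (1 - pA) * (1 - pB))
    return D / dmax if dmax > 0 else 0.0

# Hypothetical haplotype counts for two SNPs (illustrative only).
counts = np.array([40, 10, 8, 42])

# ML estimate: plug in the observed frequencies.
ml = d_prime(counts / counts.sum())

# Bayesian posterior mean under a flat Dirichlet(1,1,1,1) prior, via
# Monte Carlo draws from the posterior Dirichlet(counts + 1).
rng = np.random.default_rng(3)
draws = rng.dirichlet(counts + 1, size=20_000)
post_mean = np.mean([d_prime(p) for p in draws])
print(round(ml, 3), round(post_mean, 3))  # the prior pulls toward equilibrium
```
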
3-D contextual Bayesian classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
In this paper we will consider extensions of a series of Bayesian 2-D contextual classification procedures proposed by Owen (1984), Hjort & Mohn (1984), Welch & Salter (1971) and Haslett (1985) to 3 spatial dimensions. It is evident that compared to classical pixelwise classification further...
Bayesian Alternation During Tactile Augmentation
Directory of Open Access Journals (Sweden)
Caspar Mathias Goeke
2016-10-01
Full Text Available A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study we therefore built a tactile augmentation device. We then compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (reduced χ² = 1.67) than the Bayesian integration model (reduced χ² = 4.34). A non-Bayesian winner-takes-all model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (reduced χ² = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (reduced χ² = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in
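The competing predictions (integration vs. alternation) already differ at the level of the predicted bimodal variability. A sketch with assumed unimodal sigmas (the values are invented, and the alternation rule here is one simple reliability-weighted version, not necessarily the paper's exact model):

```python
import numpy as np

# Assumed unimodal discrimination sigmas (deg/s), e.g. derived from JNDs.
sigma_native, sigma_augmented = 4.0, 6.0

# Optimal Bayesian integration: reliabilities (1/sigma^2) add, so the
# bimodal sigma is below BOTH unimodal sigmas.
sigma_integration = np.sqrt(
    1.0 / (1.0 / sigma_native**2 + 1.0 / sigma_augmented**2))

# Bayesian alternation (one simple version): on each trial a single cue
# is used, chosen with probability proportional to its reliability, so
# the trial-averaged variance mixes the two unimodal variances.
w = (1 / sigma_native**2) / (1 / sigma_native**2 + 1 / sigma_augmented**2)
sigma_alternation = np.sqrt(
    w * sigma_native**2 + (1 - w) * sigma_augmented**2)

print(round(sigma_integration, 2), round(sigma_alternation, 2))
print(sigma_integration < sigma_native < sigma_alternation)  # True
```

Integration predicts bimodal performance better than the best single cue, while alternation does not, which is what makes the two hypotheses empirically separable.
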
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Bayesian analysis of rare events
Energy Technology Data Exchange (ETDEWEB)
Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
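The core idea behind BUS, as described above, is a reinterpretation of classical rejection sampling: draw a parameter from the prior and accept it with probability proportional to its likelihood, so that accepted samples follow the posterior. A minimal sketch of that accept/reject step (a toy illustration with a Gaussian prior and likelihood, not the FORM/IS/SuS machinery the paper builds on top):

```python
import math
import random

def bus_rejection_update(prior_sampler, likelihood, c, n=50000, seed=1):
    """Rejection-sampling view of Bayesian updating: accept a prior draw
    theta when u <= L(theta) / c, where c >= max L. Accepted draws are
    distributed according to the posterior."""
    rng = random.Random(seed)
    posterior = []
    for _ in range(n):
        theta = prior_sampler(rng)
        if rng.random() <= likelihood(theta) / c:
            posterior.append(theta)
    return posterior

# Toy example: standard normal prior, Gaussian likelihood centred at 1.
samples = bus_rejection_update(
    prior_sampler=lambda rng: rng.gauss(0.0, 1.0),
    likelihood=lambda t: math.exp(-0.5 * (t - 1.0)**2),
    c=1.0)  # the likelihood's maximum value
```

For this conjugate toy case the posterior mean is 0.5, which the accepted samples recover; the point of BUS is that the same acceptance event can be treated as a rare event and estimated with structural-reliability methods when the likelihood is peaked.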
Energy Technology Data Exchange (ETDEWEB)
Prokopyuk, S.G.; Dyachenko, A.Ye.; Mukhametov, M.N.; Prokopov, O.I.
1982-01-01
A separator is proposed which contains slanted separating plates and baffle plates installed at a distance from them and at an acute angle to them. To increase the effectiveness of separating a gas and liquid stream and the throughput, by reducing the secondary carry-over of liquid drops, and to reduce the hydraulic resistance as well, openings are made in the plates. The horizontal projections of each opening from the lower and upper surfaces of the plate do not overlap each other.
White, Curt M; Strazisar, Brian R; Granite, Evan J; Hoffman, James S; Pennline, Henry W
2003-06-01
The topic of global warming as a result of increased atmospheric CO2 concentration is arguably the most important environmental issue that the world faces today. It is a global problem that will need to be solved on a global level. The link between anthropogenic emissions of CO2 with increased atmospheric CO2 levels and, in turn, with increased global temperatures has been well established and accepted by the world. International organizations such as the United Nations Framework Convention on Climate Change (UNFCCC) and the Intergovernmental Panel on Climate Change (IPCC) have been formed to address this issue. Three options are being explored to stabilize atmospheric levels of greenhouse gases (GHGs) and global temperatures without severely and negatively impacting standard of living: (1) increasing energy efficiency, (2) switching to less carbon-intensive sources of energy, and (3) carbon sequestration. To be successful, all three options must be used in concert. The third option is the subject of this review. Specifically, this review will cover the capture and geologic sequestration of CO2 generated from large point sources, namely fossil-fuel-fired power gasification plants. Sequestration of CO2 in geological formations is necessary to meet the President's Global Climate Change Initiative target of an 18% reduction in GHG intensity by 2012. Further, the best strategy to stabilize the atmospheric concentration of CO2 results from a multifaceted approach where sequestration of CO2 into geological formations is combined with increased efficiency in electric power generation and utilization, increased conservation, increased use of lower carbon-intensity fuels, and increased use of nuclear energy and renewables. This review covers the separation and capture of CO2 from both flue gas and fuel gas using wet scrubbing technologies, dry regenerable sorbents, membranes, cryogenics, pressure and temperature swing adsorption, and other advanced concepts. Existing
Bayesian calibration of simultaneity in audiovisual temporal order judgments.
Directory of Open Access Journals (Sweden)
Shinya Yamamoto
After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
International Nuclear Information System (INIS)
Eerkens, J.W.
1979-01-01
A method of isotope separation is described which involves the use of a laser photon beam to selectively induce energy level transitions of an isotope molecule containing the isotope to be separated. The use of the technique for ²³⁵U enrichment is demonstrated. (UK)
A computational Bayesian approach to dependency assessment in system reliability
International Nuclear Information System (INIS)
Yontay, Petek; Pan, Rong
2016-01-01
Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupling with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.
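The central computation this abstract describes, combining expert opinion with failure data to obtain a posterior on a conditional failure probability, is conjugate in its simplest form: an expert's belief encoded as a Beta prior updated by binomial test data. A minimal sketch under that simplifying assumption (the full paper works with a multilevel Bayesian network, which this does not reproduce):

```python
def beta_posterior(alpha, beta, failures, trials):
    # Expert opinion encoded as a Beta(alpha, beta) prior on a component's
    # failure probability; observed failures out of trials update it
    # conjugately to Beta(alpha + failures, beta + trials - failures).
    a = alpha + failures
    b = beta + trials - failures
    return a, b, a / (a + b)   # posterior parameters and posterior mean

def series_system_failure(component_probs):
    # With conditional independence assumed, a series system fails when
    # any component fails.
    p_all_survive = 1.0
    for p in component_probs:
        p_all_survive *= (1.0 - p)
    return 1.0 - p_all_survive
```

For example, an expert prior Beta(1, 9) (mean 0.1) combined with 2 failures in 10 trials yields Beta(3, 17), posterior mean 0.15: the data pull the estimate upward, as the abstract's "gradually revealed by the data" phrasing suggests.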
A Bayesian Model of the Memory Colour Effect.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2018-01-01
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
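The parameter-free model sketched in this abstract is a standard Gaussian cue-combination: the achromatic setting is the posterior mean of the sensory (colourimetrically grey) signal and the prior over the object's typical colour. A minimal sketch along those lines (variable names and the one-dimensional simplification are illustrative assumptions, not the paper's actual colour-space formulation):

```python
def bayes_adjustment(sensory, sigma_sensory, prior, sigma_prior):
    # Posterior mean of two Gaussian cues: the adjustment is pulled from
    # the measured grey value toward the object's typical colour, with
    # weights given by the relative precisions (inverse variances).
    w_sensory = sigma_prior**2 / (sigma_sensory**2 + sigma_prior**2)
    return w_sensory * sensory + (1.0 - w_sensory) * prior
```

With a grey input at 0, a typical-colour prior at 10 along the relevant chromatic axis, and noise levels of 1 (sensory) versus 3 (prior), the setting is biased to 1.0, i.e., a small shift toward the memory colour, which is the qualitative signature of the effect.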
Bayesian non parametric modelling of Higgs pair production
Directory of Open Access Journals (Sweden)
Scarpa Bruno
2017-01-01
Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as the prior for the random effects in a logit model, which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion of P-splines into the simple model to relate explanatory variables with the response, and the use of Bayesian additive regression trees (BART) to describe the atoms in the Dirichlet process.
Bayesian estimation methods in metrology
International Nuclear Information System (INIS)
Cox, M.G.; Forbes, A.B.; Harris, P.M.
2004-01-01
In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Standardization Organization in 1995 published the Guide to the Expression of Uncertainty in Measurement and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods
Deep Learning and Bayesian Methods
Directory of Open Access Journals (Sweden)
Prosper Harrison B.
2017-01-01
A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.
Bayesian inference on proportional elections.
Directory of Open Access Journals (Sweden)
Gabriel Hideki Vatanabe Brunello
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
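The reason vote share alone does not determine representation is the seat-allocation rule. As a simplified stand-in for the Brazilian proportional rules discussed in the abstract (which combine an electoral quotient with a highest-averages step), a plain D'Hondt-style allocation can be sketched; this is an illustrative approximation, not the paper's method:

```python
def dhondt(votes, seats):
    # Highest-averages seat allocation: each seat goes to the party with
    # the largest quotient votes / (seats_already_won + 1).
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda party: votes[party] / (alloc[party] + 1))
        alloc[winner] += 1
    return alloc
```

With votes {'A': 100, 'B': 80, 'C': 30} and 5 seats, party C wins nothing despite holding about 14% of the vote, which is exactly why a Monte Carlo pass through the allocation rule, rather than a vote-share poll, is needed to estimate the probability of a party gaining representation.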
BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS
Directory of Open Access Journals (Sweden)
Thordis Linda Thorarinsdottir
2011-05-01
In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations and examples of the performance of the procedure are given.
Space Shuttle RTOS Bayesian Network
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
Multiview Bayesian Correlated Component Analysis
DEFF Research Database (Denmark)
Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai
2015-01-01
are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....
Skarstrom, C.
1959-03-10
A centrifugal separator is described for separating gaseous mixtures where the temperature gradients both longitudinally and radially of the centrifuge may be controlled effectively to produce a maximum separation of the process gases flowing through. Tbe invention provides for the balancing of increases and decreases in temperature in various zones of the centrifuge chamber as the result of compression and expansions respectively, of process gases and may be employed effectively both to neutralize harmful temperature gradients and to utilize beneficial temperaturc gradients within the centrifuge.
Microparticle Separation by Cyclonic Separation
Karback, Keegan; Leith, Alexander
2017-11-01
The ability to separate particles based on their size has wide ranging applications from the industrial to the medical. Currently, cyclonic separators are primarily used in agriculture and manufacturing to syphon out contaminates or products from an air supply. This has led us to believe that cyclonic separation has more applications than the agricultural and industrial. Using the OpenFoam computational package, we were able to determine the flow parameters of a vortex in a cyclonic separator in order to segregate dust particles to a cutoff size of tens of nanometers. To test the model, we constructed an experiment to separate a test dust of various sized particles. We filled a chamber with Arizona test dust and utilized an acoustic suspension technique to segregate particles finer than a coarse cutoff size and introduce them into the cyclonic separation apparatus where they were further separated via a vortex following our computational model. The size of the particles separated from this experiment will be used to further refine our model. Metropolitan State University of Denver, Colorado University of Denver, Dr. Randall Tagg, Dr. Richard Krantz.
International Nuclear Information System (INIS)
Castle, P.M.
1979-01-01
This invention relates to molecular and atomic isotope separation and is particularly applicable to the separation of 235 U from other uranium isotopes including 238 U. In the method described a desired isotope is separated mechanically from an atomic or molecular beam formed from an isotope mixture utilising the isotropic recoil momenta resulting from selective excitation of the desired isotope species by radiation, followed by ionization or dissociation by radiation or electron attachment. By forming a matrix of UF 6 molecules in HBr molecules so as to collapse the V 3 vibrational mode of the UF 6 molecule the 235 UF 6 molecules are selectively excited to promote reduction of UF 6 molecules containing 235 U and facilitate separation. (UK)
International Nuclear Information System (INIS)
Chen, C.L.
1979-01-01
Isotopic species in an isotopic mixture including a first species having a first isotope and a second species having a second isotope are separated by selectively exciting the first species in preference to the second species and then reacting the selectively excited first species with an additional preselected radiation, an electron or another chemical species so as to form a product having a mass different from the original species and separating the product from the balance of the mixture in a centrifugal separating device such as centrifuge or aerodynamic nozzle. In the centrifuge the isotopic mixture is passed into a rotor where it is irradiated through a window. Heavier and lighter components can be withdrawn. The irradiated mixture experiences a large centrifugal force and is separated in a deflection area into lighter and heavier components. (UK)
International Nuclear Information System (INIS)
Anon.
1976-01-01
Results of studies on the photochemistry of aqueous Pu solutions and the stability of iodine in liquid and gaseous CO 2 are reported. Progress is reported in studies on: the preparation of macroporous bodies filled with oxides and sulfides to be used as adsorbents; the beneficiation of photographic wastes; the anion exchange adsorption of transition elements from thiosulfate solutions; advanced filtration applications of energy significance; high-resolution separations; and, the examination of the separation agents, octylphenylphosphoric acid (OPPA) and trihexyl phosphate (THP)
International Nuclear Information System (INIS)
Chen, C.L.
1982-01-01
A method is described for separating isotopes in which photo-excitation of selected isotope species is used together with the reaction of the excited species with postive ions of predetermined ionization energy, other excited species, or free electrons to produce ions or ion fragments of the selected species. Ions and electrons are produced by an electrical discharge, and separation is achieved through radial ambipolar diffusion, electrostatic techniques, or magnetohydrodynamic methods
12th Brazilian Meeting on Bayesian Statistics
Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo
2015-01-01
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...
Complexity analysis of accelerated MCMC methods for Bayesian inversion
International Nuclear Information System (INIS)
Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M
2013-01-01
The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
Bayesian models for comparative analysis integrating phylogenetic uncertainty
Directory of Open Access Journals (Sweden)
Villemereuil Pierre de
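The 'plain' MCMC baseline this abstract analyzes is a standard Markov chain targeting the Bayesian posterior, with each step requiring a forward solve. A minimal random-walk Metropolis sketch of that baseline (on a toy log-posterior rather than a PDE forward map, so the complexity issue the paper addresses does not arise here):

```python
import math
import random

def metropolis(log_post, x0, n, step=0.5, seed=0):
    # Random-walk Metropolis: propose a Gaussian perturbation and accept
    # with probability min(1, posterior ratio). In the paper's setting each
    # call to log_post would involve an elliptic PDE solve, which is what
    # makes plain MCMC expensive and motivates surrogates and multi-level
    # sampling.
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:   # Metropolis accept/reject
            x, lp = y, lq
        chain.append(x)
    return chain

# Toy target: standard normal posterior.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

The work-versus-accuracy analysis in the paper asks how many such steps (and at what forward-solver cost per step) are needed to bring the sampling error below a prescribed ε.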
2012-06-01
Background Uncertainty in comparative analyses can come from at least two sources: (a) phylogenetic uncertainty in the tree topology or branch lengths, and (b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible
Bayesian models for comparative analysis integrating phylogenetic uncertainty
2012-01-01
Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for
A Bayesian model for binary Markov chains
Directory of Open Access Journals (Sweden)
Belkheir Essebbar
2004-02-01
This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
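In the simplest version of this setting, with independent Beta(1/2, 1/2) Jeffreys priors on each row of the transition matrix (a simplification: the paper's prior additionally allows the probabilities to be correlated, which is why it needs MCMC), the posterior for each transition probability is conjugate and has a closed form. A minimal sketch:

```python
def transition_posterior(n01, n0, n10, n1):
    # Jeffreys' Beta(1/2, 1/2) prior on each row of a 2-state transition
    # matrix. Counts: n01 transitions 0->1 out of n0 departures from state 0,
    # n10 transitions 1->0 out of n1 departures from state 1.
    # Posterior for p01 is Beta(n01 + 0.5, n0 - n01 + 0.5); we return the
    # posterior means.
    p01 = (n01 + 0.5) / (n0 + 1.0)
    p10 = (n10 + 0.5) / (n1 + 1.0)
    return p01, p10
```

For example, observing 3 transitions 0→1 in 10 departures from state 0 and 2 transitions 1→0 in 8 departures from state 1 gives posterior means of 3.5/11 and 2.5/9, slightly shrunk from the raw frequencies toward 1/2.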
3rd Bayesian Young Statisticians Meeting
Lanzarone, Ettore; Villalobos, Isadora; Mattei, Alessandra
2017-01-01
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).
Bayesian network as a modelling tool for risk management in agriculture
DEFF Research Database (Denmark)
Rasmussen, Svend; Madsen, Anders L.; Lund, Mogens
. In this paper we use Bayesian networks as an integrated modelling approach for representing uncertainty and analysing risk management in agriculture. It is shown how historical farm account data may be efficiently used to estimate conditional probabilities, which are the core elements in Bayesian network models....... We further show how the Bayesian network model RiBay is used for stochastic simulation of farm income, and we demonstrate how RiBay can be used to simulate risk management at the farm level. It is concluded that the key strength of a Bayesian network is the transparency of assumptions......, and that it has the ability to link uncertainty from different external sources to budget figures and to quantify risk at the farm level....
Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.
2017-12-01
Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge, which has previously been used as the default breakpoint in segmented regression models to discern differences between pre- and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics as well as Bayesian regression intercepts and slopes. The SOM defined two clusters of high-flux basins: one where PP flux was predominantly episodic and hydrologically driven, and another in which sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at pre-threshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
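The core of segmented concentration-discharge regression can be sketched with a grid search over the breakpoint: with a flat prior on the breakpoint, the posterior is proportional to the likelihood, so the residual-sum-of-squares minimum is the posterior mode. The data below are synthetic (breakpoint, slopes, and noise level are assumed illustration values), and continuity between segments is not enforced; this is a simplified stand-in for the paper's full Bayesian segmented regression.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic log concentration-discharge data with a known breakpoint
logQ = np.sort(rng.uniform(-2, 2, 300))
true_bp = 0.5
logC = np.where(logQ < true_bp,
                0.2 * logQ,
                0.2 * true_bp + 1.2 * (logQ - true_bp)) + rng.normal(0, 0.1, 300)

def sse_two_segments(bp):
    """Fit separate OLS lines below/above bp; return residual sum of squares."""
    total = 0.0
    for mask in (logQ < bp, logQ >= bp):
        X = np.column_stack([np.ones(mask.sum()), logQ[mask]])
        beta, *_ = np.linalg.lstsq(X, logC[mask], rcond=None)
        total += np.sum((logC[mask] - X @ beta) ** 2)
    return total

# Profile the breakpoint over a grid; the SSE minimum is the posterior mode
# under a flat prior, rather than defaulting to the median discharge.
grid = np.linspace(-1.5, 1.5, 121)
sse = np.array([sse_two_segments(bp) for bp in grid])
bp_hat = grid[np.argmin(sse)]
print(bp_hat)
```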
International Nuclear Information System (INIS)
Rubin, L.S.
1986-01-01
A disposal container is described for use in disposal of radioactive waste materials consisting of: top wall structure, bottom wall structure, and circumferential side wall structure interconnecting the top and bottom wall structures to define an enclosed container, separation structure in the container adjacent the inner surface of the side wall structure for allowing passage of liquid and retention of solids, inlet port structure in the top wall structure, discharge port structure at the periphery of the container in communication with the outer surface of the separation structure for receiving liquid that passes through the separation structure, first centrifugally actuated valve structure having a normal position closing the inlet port structure and a centrifugally actuated position opening the inlet port structure, second centrifugally actuated valve structure having a normal position closing the discharge port structure and a centrifugally actuated position opening the discharge port structure, and coupling structure integral with wall structure of the container for releasable engagement with centrifugal drive structure
Ford, Timothy J
2017-01-01
This book presents a comprehensive introduction to the theory of separable algebras over commutative rings. After a thorough introduction to the general theory, the fundamental roles played by separable algebras are explored. For example, Azumaya algebras, the henselization of local rings, and Galois theory are rigorously introduced and treated. Interwoven throughout these applications is the important notion of étale algebras. Essential connections are drawn between the theory of separable algebras and Morita theory, the theory of faithfully flat descent, cohomology, derivations, differentials, reflexive lattices, maximal orders, and class groups. The text is accessible to graduate students who have finished a first course in algebra, and it includes necessary foundational material, useful exercises, and many nontrivial examples.
International Nuclear Information System (INIS)
Bartlett, R.J.; Morrey, J.R.
1978-01-01
A method and apparatus is described for separating gas molecules containing one isotope of an element from gas molecules containing other isotopes of the same element in which all of the molecules of the gas are at the same electronic state in their ground state. Gas molecules in a gas stream containing one of the isotopes are selectively excited to a different electronic state while leaving the other gas molecules in their original ground state. Gas molecules containing one of the isotopes are then deflected from the other gas molecules in the stream and thus physically separated
Fast Bayesian optimal experimental design and its applications
Long, Quan
2015-01-07
We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in Bayesian Optimal Experimental Design (OED). The Laplace method is a widely used technique for approximating integrals in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where a single dominant mode of the parameters can be completely determined by the experiment, to scenarios where a non-informative parametric manifold exists. We show that carrying out this approximation significantly accelerates the estimation of the expected Kullback-Leibler divergence. While the Laplace method requires a concentration of measure, the multilevel Monte Carlo method can be used to tackle the problem when measure concentration is lacking; we show some initial results on this approach. The developed methodologies have been applied to various sensor deployment problems, e.g., impedance tomography and seismic source inversion.
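The quantity being accelerated is the expected information gain, whose brute-force double-loop Monte Carlo estimator is the expensive baseline. The toy below (a conjugate Gaussian model with assumed scales s0 and sigma) compares that estimator against the closed-form answer, which a Laplace approximation recovers exactly here because the posterior is truly Gaussian; for non-Gaussian models the Laplace and multilevel methods replace the costly inner loop.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy design problem: observe y = theta + noise(sigma); prior theta ~ N(0, s0^2)
s0, sigma = 1.0, 0.5

def log_norm(x, mu, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

# Double-loop Monte Carlo estimate of expected information gain:
# EIG = E_{theta,y}[ log p(y|theta) - log p(y) ], with p(y) by inner averaging.
N_out, N_in = 2000, 2000
theta = rng.normal(0, s0, N_out)
y = theta + rng.normal(0, sigma, N_out)
theta_in = rng.normal(0, s0, (N_out, N_in))
log_like = log_norm(y, theta, sigma)
log_evidence = np.log(np.exp(log_norm(y[:, None], theta_in, sigma)).mean(axis=1))
eig_mc = (log_like - log_evidence).mean()

# Conjugate Gaussian case has a closed form: 0.5 * log(1 + s0^2 / sigma^2)
eig_exact = 0.5 * np.log(1 + s0**2 / sigma**2)
print(eig_mc, eig_exact)
```

The inner loop makes the cost N_out * N_in model evaluations, which is exactly what motivates the Laplace and multilevel accelerations summarized in the abstract.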
Applying Bayesian belief networks in rapid response situations
Energy Technology Data Exchange (ETDEWEB)
Gibson, William L [Los Alamos National Laboratory]; Leishman, Deborah A. [Los Alamos National Laboratory]; Van Eeckhout, Edward [Los Alamos National Laboratory]
2008-01-01
The authors have developed an enhanced Bayesian analysis tool called the Integrated Knowledge Engine (IKE) for monitoring and surveillance. The enhancements are suited for Rapid Response Situations where decisions must be made based on uncertain and incomplete evidence from many diverse and heterogeneous sources. The enhancements extend the probabilistic results of the traditional Bayesian analysis by (1) better quantifying uncertainty arising from model parameter uncertainty and uncertain evidence, (2) optimizing the collection of evidence to reach conclusions more quickly, and (3) allowing the analyst to determine the influence of the remaining evidence that cannot be obtained in the time allowed. These extended features give the analyst and decision maker a better comprehension of the adequacy of the acquired evidence and hence the quality of the hurried decisions. They also describe two example systems where the above features are highlighted.
On a full Bayesian inference for force reconstruction problems
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to account mathematically for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution, given the measured vibration field, the mechanical model and the regularization parameter, was not assessed. To address this question, this paper fully exploits the Bayesian framework to provide, from a Markov chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
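The step from a MAP point estimate to full posterior summaries can be illustrated with a generic random-walk Metropolis sampler on a scalar toy problem (a single force amplitude F driving measurements y = h*F + noise; h, the noise level, the prior scale, and F_true are all assumed values, and this is not the authors' algorithm). The chain yields the credible intervals, mean, median, and mode the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "force reconstruction": scalar force F drives measurements y = h*F + noise
h, sigma, prior_sd, F_true = 2.0, 0.3, 10.0, 1.5
y = h * F_true + rng.normal(0, sigma, 50)

def log_post(F):
    """Gaussian likelihood plus a weak zero-mean Gaussian prior on F."""
    return -0.5 * np.sum((y - h * F) ** 2) / sigma**2 - 0.5 * F**2 / prior_sd**2

# Random-walk Metropolis: beyond the MAP point, the chain gives credible
# intervals and other posterior summaries (mean, median, mode).
samples, F = [], 0.0
lp = log_post(F)
for _ in range(20000):
    F_prop = F + rng.normal(0, 0.1)
    lp_prop = log_post(F_prop)
    if np.log(rng.random()) < lp_prop - lp:
        F, lp = F_prop, lp_prop
    samples.append(F)
samples = np.array(samples[5000:])   # drop burn-in
lo, hi = np.percentile(samples, [2.5, 97.5])
print(samples.mean(), np.median(samples), (lo, hi))
```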
Fast model updating coupling Bayesian inference and PGD model reduction
Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic
2018-04-01
The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory to address inverse problems and to deal with different sources of uncertainty (measurement and model errors, stochastic parameters). To do so at a reasonable CPU cost, the idea is to replace the direct model called during Monte Carlo sampling with a PGD reduced model, and in some cases to compute the probability density functions directly from the resulting analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
Bayesian Methods and Universal Darwinism
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned with the scientific method. Indeed, a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on maximum entropy. It is argued that these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.
Bayesian phylogeography finds its roots.
Directory of Open Access Journals (Sweden)
Philippe Lemey
2009-09-01
Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
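The conditional-intensity approach mentioned above reduces, for each MCMC proposal, to evaluating a log-likelihood of the form below. The sketch assumes the common exponential excitation kernel (the paper does not specify its kernel here), and the event times are an arbitrary illustrative sequence.

```python
import numpy as np

# Log-likelihood of a Hawkes process via its conditional intensity
# lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta * (t - t_i)),
# the quantity an MCMC sampler evaluates at each proposed parameter set.
def hawkes_loglik(times, T, mu, alpha, beta):
    times = np.asarray(times, dtype=float)
    ll = 0.0
    for k, t in enumerate(times):
        lam = mu + np.sum(alpha * beta * np.exp(-beta * (t - times[:k])))
        ll += np.log(lam)
    # compensator: integral of the intensity over the observation window [0, T]
    ll -= mu * T + np.sum(alpha * (1 - np.exp(-beta * (T - times))))
    return ll

# Evaluate on a tiny assumed event sequence
events = [0.5, 0.9, 1.0, 2.7, 3.1]
ll = hawkes_loglik(events, 4.0, mu=1.0, alpha=0.5, beta=2.0)
print(ll)
```

The alternative clustering/branching representation instead augments the data with latent parent assignments for each event, which the paper compares against this direct intensity-based formulation.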
Numeracy, frequency, and Bayesian reasoning
Directory of Open Access Journals (Sweden)
Gretchen B. Chapman
2009-02-01
Full Text Available Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
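The two presentation formats compared in the study can be made concrete with the classic mammography-style base-rate problem (the numbers here are the standard textbook illustration, not the questionnaire's items): the single-event-probability version requires applying Bayes' rule, while the natural-frequency version reduces to counting cases.

```python
from fractions import Fraction

# Assumed illustration: prevalence 1%, sensitivity 80%, false-positive rate 9.6%

# Single-event probability version: apply Bayes' rule directly
prior, sens, fpr = Fraction(1, 100), Fraction(80, 100), Fraction(96, 1000)
p_pos = prior * sens + (1 - prior) * fpr
p_disease_given_pos = prior * sens / p_pos

# Natural frequency version: "of 1000 people, 10 have the disease; 8 of them
# test positive; of the 990 healthy, about 95 test positive" -> count cases
sick_pos, healthy_pos = 8, 95
nf_answer = Fraction(sick_pos, sick_pos + healthy_pos)

print(float(p_disease_given_pos), float(nf_answer))  # both near 0.078
```

The point of the study is that the second calculation, a ratio of two counts, is the one most respondents can do, and that even it presupposes a baseline level of numeracy.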
Gardner, Michael S.; McWilliams, Lisa G.; Jones, Jeffrey I.; Kuklenyik, Zsuzsanna; Pirkle, James L.; Barr, John R.
2017-08-01
We demonstrate the application of in-source nitrogen collision-induced dissociation (CID) that eliminates the need for ester hydrolysis before simultaneous analysis of esterified cholesterol (EC) and triglycerides (TG) along with free cholesterol (FC) from human serum, using normal phase liquid chromatography (LC) coupled to atmospheric pressure chemical ionization (APCI) tandem mass spectrometry (MS/MS). The analysis requires only 50 μL of 1:100 dilute serum with a high-throughput, precipitation/evaporation/extraction protocol in one pot. Known representative mixtures of EC and TG species were used as calibrators with stable isotope labeled analogs as internal standards. The APCI MS source was operated with nitrogen source gas. Reproducible in-source CID was achieved with the use of optimal cone voltage (declustering potential), generating FC, EC, and TG lipid class-specific precursor fragment ions for multiple reaction monitoring (MRM). Using a representative mixture of purified FC, EC, and TG species as calibrators, the method accuracy was assessed with analysis of five inter-laboratory standardization materials, showing -10% bias for Total-C and -3% for Total-TG. Repeated duplicate analysis of a quality control pool showed intra-day and inter-day variation of 5% and 5.8% for FC, 5.2% and 8.5% for Total-C, and 4.1% and 7.7% for Total-TG. The applicability of the method was demonstrated on 32 serum samples and corresponding lipoprotein sub-fractions collected from normolipidemic, hypercholesterolemic, hypertriglyceridemic, and hyperlipidemic donors. The results show that in-source CID coupled with isotope dilution UHPLC-MS/MS is a viable high-precision approach for translational research studies where samples are substantially diluted or the amounts of archived samples are limited.
BEAST: Bayesian evolutionary analysis by sampling trees
Directory of Open Access Journals (Sweden)
Drummond Alexei J
2007-11-01
Full Text Available Abstract Background The evolutionary analysis of molecular sequence variation is a statistical enterprise. This is reflected in the increased use of probabilistic models for phylogenetic inference, multiple sequence alignment, and molecular population genetics. Here we present BEAST: a fast, flexible software architecture for Bayesian analysis of molecular sequences related by an evolutionary tree. A large number of popular stochastic models of sequence evolution are provided and tree-based models suitable for both within- and between-species sequence data are implemented. Results BEAST version 1.4.6 consists of 81000 lines of Java source code, 779 classes and 81 packages. It provides models for DNA and protein sequence evolution, highly parametric coalescent analysis, relaxed clock phylogenetics, non-contemporaneous sequence data, statistical alignment and a wide range of options for prior distributions. BEAST source code is object-oriented, modular in design and freely available at http://beast-mcmc.googlecode.com/ under the GNU LGPL license. Conclusion BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. It also provides a resource for the further development of new models and statistical methods of evolutionary analysis.
Seeded Bayesian Networks: Constructing genetic networks from microarray data
Directory of Open Access Journals (Sweden)
Quackenbush John
2008-07-01
Full Text Available Abstract Background DNA microarrays and other genomics-inspired technologies provide large datasets that often include hidden patterns of correlation between genes reflecting the complex processes that underlie cellular metabolism and physiology. The challenge in analyzing large-scale expression data has been to extract biologically meaningful inferences regarding these processes – often represented as networks – in an environment where the datasets are often imperfect and biological noise can obscure the actual signal. Although many techniques have been developed in an attempt to address these issues, to date their ability to extract meaningful and predictive network relationships has been limited. Here we describe a method that draws on prior information about gene-gene interactions to infer biologically relevant pathways from microarray data. Our approach consists of using preliminary networks derived from the literature and/or protein-protein interaction data as seeds for a Bayesian network analysis of microarray results. Results Through a bootstrap analysis of gene expression data derived from a number of leukemia studies, we demonstrate that seeded Bayesian Networks have the ability to identify high-confidence gene-gene interactions which can then be validated by comparison to other sources of pathway data. Conclusion The use of network seeds greatly improves the ability of Bayesian Network analysis to learn gene interaction networks from gene expression data. We demonstrate that the use of seeds derived from the biomedical literature or high-throughput protein-protein interaction data, or the combination, provides improvement over a standard Bayesian Network analysis, allowing networks involving dynamic processes to be deduced from the static snapshots of biological systems that represent the most common source of microarray data. Software implementing these methods has been included in the widely used TM4 microarray analysis package.
Laser separation of uranium isotopes
International Nuclear Information System (INIS)
Porter, J.T.
1981-01-01
Method and apparatus for separating uranium isotopes are claimed. The method comprises the steps of irradiating a uranyl source material at a wavelength selective to a desired isotope and at an effective temperature for isotope spectral line splitting below about 77 deg.K., further irradiating the source material within the fluorescent lifetime of the source material to selectively photochemically reduce the excited isotopic species, and chemically separating the reduced isotope species from the remaining uranyl salt compound
Bayesian LASSO, scale space and decision making in association genetics.
Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J
2015-01-01
LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
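Two of the abstract's ideas can be sketched together: the MAP estimate under a Laplace prior is the ordinary LASSO solution, and the scale-space view means computing that solution over a whole grid of tuning parameters rather than committing to one. The coordinate-descent implementation and the sparse toy data below are illustrative assumptions, not the paper's full posterior machinery.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse regression toy data: 10 predictors, only two truly nonzero effects
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[1, 4]] = [2.0, -1.5]
y = X @ beta_true + rng.normal(0, 0.5, n)

def soft(z, g):
    """Soft-thresholding operator used by LASSO coordinate descent."""
    return np.sign(z) * np.maximum(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for the LASSO: the MAP estimate under a Laplace prior."""
    b = np.zeros(X.shape[1])
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            b[j] = soft(X[:, j] @ r, lam) / col_ss[j]
    return b

# Scale-space view: examine the estimates over a whole grid of tuning
# parameters instead of committing to a single shrinkage level.
lams = [0.1, 1.0, 10.0, 100.0]
path = {lam: lasso_cd(X, y, lam) for lam in lams}
print({lam: np.round(b, 2) for lam, b in path.items()})
```

In the paper the point estimate is replaced by the full posterior of the effects, whose joint distribution supports the simultaneous inference and the functionals (sums and differences of collinear effects) described above.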
Bayesian analysis of magnetic island dynamics
International Nuclear Information System (INIS)
Preuss, R.; Maraschek, M.; Zohm, H.; Dose, V.
2003-01-01
We examine a first-order differential equation in time used to describe magnetic islands in magnetically confined plasmas. The free parameters of this equation are obtained by employing Bayesian probability theory. Additionally, a typical Bayesian change-point problem is solved in the process of obtaining the data.
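A minimal Bayesian change-point calculation looks like the sketch below: a discrete posterior over the change location in a Gaussian sequence, with a flat prior over positions and segment means profiled at their sample averages (an empirical-Bayes style shortcut). The data, the shift location k=40, and the unit variance are all assumed illustration values, not the plasma data of the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gaussian sequence whose mean shifts at an assumed change point k = 40
y = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(1.5, 1.0, 60)])
n = len(y)

# Discrete posterior over the change point: with known unit variance and a
# flat prior over k, log p(k | y) is (up to a constant) minus half the
# within-segment sums of squares around each segment's sample mean.
logpost = np.full(n, -np.inf)
for k in range(2, n - 2):
    a, b = y[:k], y[k:]
    logpost[k] = -0.5 * (np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2))
logpost -= logpost.max()                 # stabilize before exponentiating
post = np.exp(logpost)
post /= post.sum()
k_hat = int(np.argmax(post))
print(k_hat)
```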
Learning dynamic Bayesian networks with mixed variables
DEFF Research Database (Denmark)
Bøttcher, Susanne Gammelgaard
This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case, where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned...
Using Bayesian Networks to Improve Knowledge Assessment
Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra
2013-01-01
In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…
Using Bayesian belief networks in adaptive management.
J.B. Nyberg; B.G. Marcot; R. Sulyma
2006-01-01
Bayesian belief and decision networks are relatively new modeling methods that are especially well suited to adaptive-management applications, but they appear not to have been widely used in adaptive management to date. Bayesian belief networks (BBNs) can serve many purposes for practitioners of adaptive management, from illustrating system relations conceptually to...
Bayesian Decision Theoretical Framework for Clustering
Chen, Mo
2011-01-01
In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…
Robust Bayesian detection of unmodelled bursts
International Nuclear Information System (INIS)
Searle, Antony C; Sutton, Patrick J; Tinto, Massimo; Woan, Graham
2008-01-01
We develop a Bayesian treatment of the problem of detecting unmodelled gravitational wave bursts using the new global network of interferometric detectors. We also compare this Bayesian treatment with existing coherent methods, and demonstrate that the existing methods make implicit assumptions on the distribution of signals that make them sub-optimal for realistic signal populations
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians. Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more. Deemphasizes computer coding in favor of basic principles. Explains how to write out properly factored statistical expressions representing Bayesian models.
Particle identification in ALICE: a Bayesian approach
Adam, J.; Adamova, D.; Aggarwal, M. M.; Rinella, G. Aglieri; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anticic, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshaeuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badala, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnafoeldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Bathen, B.; Batigne, G.; Camejo, A. Batista; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Benacek, P.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielcik, J.; Bielcikova, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boggild, H.; Boldizsar, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossu, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Diaz, L. 
Calero; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castellanos, J. Castillo; Castro, A. J.; Casula, E. A. R.; Sanchez, C. Ceballos; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Barroso, V. Chibante; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Balbastre, G. Conesa; del Valle, Z. Conesa; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Morales, Y. Corrales; Cortes Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Denes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divia, R.; Djuvsland, O.; Dobrin, A.; Gimenez, D. Domenicis; Doenigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernandez Tellez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. 
G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhoje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glaessel, P.; Gomez Coral, D. M.; Ramirez, A. Gomez; Gonzalez, A. S.; Gonzalez, V.; Gonzalez-Zamora, P.; Gorbunov, S.; Goerlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Haake, R.; Haaland, O.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbaer, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacazio, N.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Bustamante, R. T. Jimenez; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, M. Mohisin; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, J. 
S.; Kim, M.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein-Boesing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kostarakis, P.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Kralik, I.; Kravcakova, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kucera, V.; Kuijer, P. G.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; Monzon, I. Leon; Leon Vargas, H.; Leoncino, M.; Levai, P.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; Torres, E. Lopez; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mares, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marin, A.; Markert, C.; Marquard, M.; Martin, N. A.; Blanco, J. Martin; Martinengo, P.; Martinez, M. I.; Garcia, G. Martinez; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Perez, J. Mercado; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. 
N.; Miskowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montano Zetina, L.; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Muehlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paic, G.; Pal, S. K.; Pan, J.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Da Costa, H. Pereira; Peresunko, D.; Lara, C. E. Perez; Lezama, E. Perez; Peskov, V.; Pestov, Y.; Petracek, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Ploskon, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Raesaenen, S. S.; Rascanu, B. 
T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodriguez Cahuantzi, M.; Manso, A. Rodriguez; Roed, K.; Rogochaya, E.; Rohr, D.; Roehrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Montero, A. J. Rubio; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Safarik, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Sefcik, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; de Souza, R. D.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. 
P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Sumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Munoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thaeder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Palomo, L. Valencia; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vyvre, P. Vande; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limon, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Baillie, O. Villalobos; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Voelkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrlakova, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Watanabe, D.; Watanabe, Y.; Weiser, D. F.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yano, S.; Yasin, Z.; Yokoyama, H.; Yoo, I. -K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Zavada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, C.; Zhao, C.; Zhigareva, N.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.
2016-01-01
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian...
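The combination rule the abstract alludes to can be sketched as a direct application of Bayes' theorem: multiply per-detector likelihoods for each species hypothesis by the species priors and normalize. This is an illustrative sketch only; the species names, detector labels, and all numbers are invented, not ALICE calibration values.

```python
# Hypothetical sketch of Bayesian particle identification (PID):
# combine per-detector likelihoods with species priors via Bayes' theorem.
# Detector responses are assumed independent; all numbers are illustrative.

def bayes_pid(likelihoods_per_detector, priors):
    """Posterior P(species | detector signals)."""
    species = list(priors)
    combined = {s: 1.0 for s in species}
    for det in likelihoods_per_detector:   # product of likelihoods over detectors
        for s in species:
            combined[s] *= det[s]
    unnorm = {s: combined[s] * priors[s] for s in species}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in species}

priors = {"pion": 0.80, "kaon": 0.12, "proton": 0.08}
detectors = [
    {"pion": 0.30, "kaon": 0.60, "proton": 0.10},  # e.g. a dE/dx-style response
    {"pion": 0.25, "kaon": 0.65, "proton": 0.10},  # e.g. a time-of-flight response
]
post = bayes_pid(detectors, priors)
```

Note how the prior matters: even though both (hypothetical) detectors individually favor the kaon hypothesis, the large pion prior can still dominate the posterior.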
Advances in Bayesian Modeling in Educational Research
Levy, Roy
2016-01-01
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark
2006-01-01
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...
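The compilation target both records describe, an arithmetic circuit, can be illustrated on a tiny two-node network A → B. The network polynomial multiplies evidence indicators (λ) with network parameters (θ); a query is answered by evaluating the circuit twice. This is a minimal hand-written sketch of the idea, not the Primula compiler; the numbers are invented.

```python
# Minimal sketch of arithmetic-circuit-style inference for a two-node
# Bayesian network A -> B. The "circuit" is the network polynomial
# evaluated bottom-up for a given evidence setting.

theta_a = {0: 0.6, 1: 0.4}                     # P(A)
theta_b_given_a = {(0, 0): 0.9, (0, 1): 0.1,   # P(B | A)
                   (1, 0): 0.2, (1, 1): 0.8}

def evaluate(evidence):
    """Evidence indicators gate the terms; summing yields P(evidence)."""
    lam_a = {a: 1.0 if evidence.get("A", a) == a else 0.0 for a in (0, 1)}
    lam_b = {b: 1.0 if evidence.get("B", b) == b else 0.0 for b in (0, 1)}
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b_given_a[(a, b)]
               for a in (0, 1) for b in (0, 1))

p_b1 = evaluate({"B": 1})                 # marginal P(B=1)
p_a1_b1 = evaluate({"A": 1, "B": 1})      # joint P(A=1, B=1)
posterior_a1 = p_a1_b1 / p_b1             # P(A=1 | B=1) from two evaluations
```

The appeal of the compiled representation is that each query reduces to cheap circuit evaluations, which is what makes the "online inference" of the abstract fast.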
Wang, L.; Davis, J. L.; Tamisiea, M. E.
2017-12-01
The Antarctic ice sheet (AIS) holds about 60% of all fresh water on the Earth, an amount equivalent to about 58 m of sea-level rise. Observation of AIS mass change is thus essential in determining and predicting its contribution to sea level. While the ice mass loss estimates for West Antarctica (WA) and the Antarctic Peninsula (AP) are in good agreement, the mass balance over East Antarctica (EA), and whether or not it compensates for that mass loss, remains under debate. Besides the different error sources and sensitivities of different measurement types, complex spatial and temporal variability is another factor complicating accurate estimation of the AIS mass balance. Therefore, a model that allows for variability in both melting rate and seasonal signals would seem appropriate in the estimation of present-day AIS melting. We present a stochastic filter technique, which enables the Bayesian separation of the systematic stripe noise and mass signal in decade-length GRACE monthly gravity series, and allows the estimation of time-variable seasonal and inter-annual components in the signals. One of the primary advantages of this Bayesian method is that it yields statistically rigorous uncertainty estimates reflecting the inherent spatial resolution of the data. By applying the stochastic filter to the decade-long GRACE observations, we present the temporal variabilities of the AIS mass balance at basin scale, particularly over East Antarctica, and decipher the EA mass variations in the past decade, and their role in affecting overall AIS mass balance and sea level.
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
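The projection step at the heart of the method above can be sketched compactly: store pairs of approximate and detailed model runs in a dictionary, build a local error basis from the K-nearest-neighbour entries, and remove the projection of the residual onto that basis before evaluating the likelihood. The code below is an illustrative toy (synthetic "models", invented dimensions), not the authors' implementation.

```python
# Illustrative sketch of removing a model-error component from a residual by
# projecting onto a basis built from the K-nearest-neighbour entries of a
# dictionary of (approximate, detailed) forward-model run pairs.
import numpy as np

rng = np.random.default_rng(0)

# Dictionary: parameter vectors with paired detailed/approximate model outputs.
params = rng.normal(size=(50, 3))
detailed = rng.normal(size=(50, 20))
# Approximate model = detailed model + a systematic, parameter-dependent error.
approx = detailed + 0.3 * np.sin(params[:, :1]) + 0.01 * rng.normal(size=(50, 20))

def corrected_residual(m, d_obs, g_approx, k=5):
    """Subtract the local model-error component estimated from the KNN errors."""
    dists = np.linalg.norm(params - m, axis=1)
    idx = np.argsort(dists)[:k]
    E = (approx[idx] - detailed[idx]).T        # model-error samples as columns
    Q, _ = np.linalg.qr(E)                     # orthonormal local error basis
    r = d_obs - g_approx                       # raw residual
    return r - Q @ (Q.T @ r)                   # residual minus projected error

m_test = params[0]
r_clean = corrected_residual(m_test, detailed[0], approx[0])
```

In this toy, the observed data coincide with the detailed model at `m_test`, so the raw residual is exactly a model error and the projection removes essentially all of it.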
BELM: Bayesian extreme learning machine.
Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J
2011-03-01
The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need to apply computationally intensive methods, e.g., the bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, reduced probability of model overfitting, and use of a priori knowledge.
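The Bayesian ELM idea can be sketched in a few lines: fix a random hidden layer (the ELM part), then place a Gaussian prior on the output weights and do conjugate Bayesian linear regression on the hidden activations, which yields predictive variances for free. In this sketch the precision hyperparameters are fixed by hand rather than learned, and the data are synthetic.

```python
# Hedged sketch of a Bayesian extreme learning machine: random hidden layer,
# conjugate Bayesian linear regression on the activations. Hyperparameters
# alpha (prior precision) and beta (noise precision) are fixed, not learned.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 60)[:, None]
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=60)

n_hidden, alpha, beta = 30, 1e-2, 400.0
W = 3.0 * rng.normal(size=(1, n_hidden))       # random, untrained input weights
b = 2.0 * rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                         # fixed random features (ELM)

# Gaussian posterior over output weights: N(mean, S)
S = np.linalg.inv(alpha * np.eye(n_hidden) + beta * H.T @ H)
mean = beta * S @ H.T @ y

def predict(x_new):
    h = np.tanh(x_new @ W + b)
    mu = h @ mean
    var = 1.0 / beta + np.einsum("ij,jk,ik->i", h, S, h)  # predictive variance
    return mu, np.sqrt(var)

mu, sd = predict(X)
```

The predictive standard deviation `sd` is what replaces bootstrap-style confidence intervals in the Bayesian formulation.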
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods for patient stratification, the central task of SM, have been proposed in the literature; however, significant open issues remain. First, it is still unclear whether integrating different datatypes helps in detecting disease subtypes more accurately and, if not, which datatype(s) are most useful for this task. Second, it is not clear how different methods of patient stratification can be compared. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.
Bayesian Nonparametric Longitudinal Data Analysis.
Quintana, Fernando A; Johnson, Wesley O; Waetjen, Elaine; Gold, Ellen
2016-01-01
Practical Bayesian nonparametric methods have been developed across a wide variety of contexts. Here, we develop a novel statistical model that generalizes standard mixed models for longitudinal data that include flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures. AR structure is often specified through the use of a Gaussian process (GP) with covariance functions that allow longitudinal data to be more correlated if they are observed closer in time than if they are observed farther apart. We allow for AR structure by considering a broader class of models that incorporates a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. We are able to take advantage of modern Bayesian statistical methods in making full predictive inferences and about characteristics of longitudinal profiles and their differences across covariate combinations. We also take advantage of the generality of our model, which provides for estimation of a variety of covariance structures. We observe that models that fail to incorporate CS or AR structure can result in very poor estimation of a covariance or correlation matrix. In our illustration using hormone data observed on women through the menopausal transition, biology dictates the use of a generalized family of sigmoid functions as a model for time trends across subpopulation categories.
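The combined covariance structure described above, compound symmetry plus an AR-style Gaussian-process term, can be written down directly: a constant within-subject covariance, plus a component whose correlation decays with the time gap between observations. The following is an illustrative construction of that covariance matrix only (not the full Dirichlet Process Mixture model); parameter values are invented.

```python
# Illustrative combined compound-symmetry (CS) + autoregressive (AR)
# covariance for longitudinal observations at irregular times:
# Cov(y_s, y_t) = sigma_cs^2 + sigma_ar^2 * rho^{|t - s|} (+ nugget on diagonal).
import numpy as np

def cs_ar_cov(times, sigma_cs=0.5, sigma_ar=1.0, rho=0.8, nugget=0.1):
    t = np.asarray(times, dtype=float)
    gaps = np.abs(t[:, None] - t[None, :])     # pairwise time gaps
    cov = sigma_cs**2 + sigma_ar**2 * rho**gaps
    return cov + nugget**2 * np.eye(len(t))

C = cs_ar_cov([0.0, 1.0, 2.0, 5.0])
```

Observations one time unit apart covary more strongly than observations five units apart, while the CS term keeps a floor of within-subject covariance at any gap.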
2nd Bayesian Young Statisticians Meeting
Bitto, Angela; Kastner, Gregor; Posekany, Alexandra
2015-01-01
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...
Bayesian natural language semantics and pragmatics
Zeevat, Henk
2015-01-01
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered by especially Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation as in Grice's contributions to pragmatics or in interpretation by abduction.
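The signal-processing rule the abstract describes, choose the interpretation maximizing the product of prior probability and likelihood, fits in one line of code. The lexical ambiguity and all probabilities below are invented purely for illustration.

```python
# Toy sketch of the Bayesian interpretation rule: pick the interpretation i
# maximizing P(i) * P(utterance context | i). Numbers are invented.

priors = {"bank=financial": 0.7, "bank=river": 0.3}
# Likelihood of the observed context (say, "fishing by the bank") under each reading.
likelihood = {"bank=financial": 0.1, "bank=river": 0.6}

def best_interpretation(priors, likelihood):
    return max(priors, key=lambda i: priors[i] * likelihood[i])

choice = best_interpretation(priors, likelihood)
```

Here the context likelihood overturns the higher prior of the financial reading, which is exactly the interaction between production model and prior that the volume emphasizes.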
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis using only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data is used. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information.
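The fault-feature step of the pipeline, computing IMF energies to feed the fault feature layer, is easy to sketch. EEMD itself is omitted here; the "IMFs" are simulated stand-ins, and the frequencies and amplitudes are invented.

```python
# Sketch of the fault-feature extraction step only: given intrinsic mode
# functions (IMFs) from a vibration signal, compute normalized IMF energies
# as the feature vector for the Bayesian network's fault-feature layer.
# In practice the IMFs come from EEMD of the sensor signal.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024, endpoint=False)
imfs = np.vstack([a * np.sin(2 * np.pi * f * t)      # stand-in "IMFs"
                  for f, a in [(120, 1.0), (40, 0.5), (8, 0.2)]])
imfs += 0.01 * rng.normal(size=imfs.shape)           # measurement noise

energies = np.sum(imfs**2, axis=1)                   # E_i = sum of squared samples
features = energies / energies.sum()                 # normalized energy vector
```

A shift of vibration energy between modes (e.g. toward a gear-mesh frequency band) is what the downstream Bayesian network would read as a fault symptom.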
Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models
International Nuclear Information System (INIS)
Cai, Caifang
2013-01-01
Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also...
Energy Technology Data Exchange (ETDEWEB)
Wittner, Manuel [Physikalisches Institut, Universitaet Heidelberg, Heidelberg (Germany); Collaboration: ALICE-Collaboration
2015-07-01
One particularly interesting measurement with the ALICE set-up at the LHC is that of electrons from charm and beauty hadron decays. Heavy quarks originate from initial hard scattering processes and thus experience the whole history of a heavy-ion collision. Therefore, they are valuable probes for studying the mechanisms of energy loss and hadronization in the hot and dense state of matter that is expected to be formed in a heavy-ion collision at the LHC. One important task is the distinction of the different electron sources, for which a method was developed: the impact parameter distribution of the measured data is compared with impact parameter distributions for the individual sources, created through Monte Carlo simulations, and a maximum likelihood fit is applied. However, constructing a posterior distribution from the likelihood according to Bayes' theorem and sampling it with Markov chain Monte Carlo algorithms provides several advantages, e.g. a mathematically rigorous estimation of the uncertainties and the use of prior knowledge. Hence, for the first time in this particular problem, a Markov chain Monte Carlo algorithm, namely the Metropolis algorithm, was implemented and investigated for its applicability in heavy-flavor physics. First studies indicate its great usefulness in this field.
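The Metropolis algorithm named above can be sketched on a simplified version of this fitting problem: sampling the posterior of a signal fraction in a two-template mixture with a flat prior. The "impact-parameter templates" and counts below are invented, not the analysis data; this is a generic random-walk Metropolis sketch, not the talk's code.

```python
# Random-walk Metropolis sketch: sample the posterior of a signal fraction f
# in a two-template binned mixture, flat prior on (0, 1). Data are simulated.
import math
import random

random.seed(2)

sig = [0.05, 0.15, 0.30, 0.30, 0.20]   # hypothetical signal template (bin probs)
bkg = [0.40, 0.30, 0.15, 0.10, 0.05]   # hypothetical background template
true_f = 0.7
counts = [round(1000 * (true_f * s + (1 - true_f) * b)) for s, b in zip(sig, bkg)]

def log_post(f):
    if not 0.0 < f < 1.0:
        return -math.inf                # flat prior on (0, 1)
    return sum(n * math.log(f * s + (1 - f) * b)
               for n, s, b in zip(counts, sig, bkg))

f, samples = 0.5, []
for i in range(20000):
    prop = f + random.gauss(0.0, 0.05)  # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(f):
        f = prop                        # Metropolis accept
    if i >= 5000:                       # discard burn-in
        samples.append(f)

f_hat = sum(samples) / len(samples)
```

The spread of `samples` directly gives the uncertainty estimate that the abstract highlights as an advantage over a plain maximum-likelihood point fit.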
Bayesian experts in exploring reaction kinetics of transcription circuits.
Yoshida, Ryo; Saito, Masaya M; Nagao, Hiromichi; Higuchi, Tomoyuki
2010-09-15
Biochemical reactions in cells are made of several types of biological circuits. In current systems biology, making differential equation (DE) models simulatable in silico has been an appealing, general approach to uncover the complex world of biochemical reaction dynamics. Despite the need for simulation-aided studies, our research field has yet to provide a clear answer to the question of how to specify kinetic values in models that are difficult to measure from experimental/theoretical analyses of biochemical kinetics. We present a novel non-parametric Bayesian approach to this problem. The key idea lies in the development of a Dirichlet process (DP) prior distribution, called Bayesian experts, which brings substantive knowledge on reaction mechanisms inherent in given models, together with experimentally observable kinetic evidence, to the subsequent parameter search. The DP prior identifies significant local regions of the unknown parameter space before proceeding to the posterior analyses. This article reports that a Bayesian expert-inducing stochastic search can effectively explore unknown parameters of in silico transcription circuits such that solutions of the DEs reproduce transcriptomic time course profiles. A sample source code is available at the URL http://daweb.ism.ac.jp/~yoshidar/lisdas/.
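The Dirichlet process underlying the "Bayesian experts" prior is most easily seen through its generic stick-breaking construction: weights are broken off a unit stick, each attached to a candidate parameter value. This is the textbook construction, not the paper's specific expert-informed prior; `alpha` and the truncation level are illustrative.

```python
# Minimal stick-breaking sketch of a (truncated) Dirichlet process prior:
# weight k is a Beta(1, alpha) fraction of the stick remaining after k-1 breaks.
import random

random.seed(3)

def stick_breaking(alpha, n_atoms):
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = random.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

w = stick_breaking(alpha=2.0, n_atoms=50)
```

Small `alpha` concentrates mass on a few atoms (a few favored parameter regions); large `alpha` spreads it thinly, which is how such a prior can encode how opinionated the "experts" are.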
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling.
Classical-Equivalent Bayesian Portfolio Optimization for Electricity Generation Planning
Directory of Open Access Journals (Sweden)
Hellinton H. Takada
2018-01-01
There are several electricity generation technologies based on different sources such as wind, biomass, gas, coal, and so on. The consideration of the uncertainties associated with the future costs of such technologies is crucial for planning purposes. In the literature, the allocation of resources in the available technologies has been solved as a mean-variance optimization problem assuming knowledge of the expected values and the covariance matrix of the costs. However, in practice, they are not exactly known parameters. Consequently, the obtained optimal allocations from the mean-variance optimization are not robust to possible estimation errors of such parameters. Additionally, it is usual to have electricity generation technology specialists participating in the planning processes and, obviously, the consideration of useful prior information based on their previous experience is of utmost importance. The Bayesian models consider not only the uncertainty in the parameters, but also the prior information from the specialists. In this paper, we introduce the classical-equivalent Bayesian mean-variance optimization to solve the electricity generation planning problem using both improper and proper prior distributions for the parameters. In order to illustrate our approach, we present an application comparing the classical-equivalent Bayesian with the naive mean-variance optimal portfolios.
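The flavor of a classical-equivalent Bayesian mean-variance allocation can be sketched with a conjugate-normal blend of specialist priors and sample cost estimates, followed by a standard unconstrained mean-variance weight computation. All costs, covariances, and the `benchmark` level below are invented for illustration; this is not the paper's model.

```python
# Hedged sketch: posterior expected costs as a precision-weighted blend of a
# specialists' prior and sample estimates, then unconstrained mean-variance
# weights on the cost advantage relative to an illustrative benchmark cost.
import numpy as np

sample_mean = np.array([50.0, 60.0, 45.0])   # sample expected cost per MWh
prior_mean = np.array([55.0, 55.0, 55.0])    # specialists' prior view
n_obs, prior_strength = 20, 5                # data vs. prior "sample sizes"

# Conjugate posterior mean: precision-weighted blend of data and prior.
post_mean = (n_obs * sample_mean + prior_strength * prior_mean) / (n_obs + prior_strength)

cov = np.array([[9.0, 2.0, 1.0],             # cost covariance across technologies
                [2.0, 16.0, 3.0],
                [1.0, 3.0, 4.0]])

benchmark = 70.0
excess = benchmark - post_mean               # cost advantage of each technology
w = np.linalg.solve(cov, excess)             # unconstrained mean-variance weights
w = w / w.sum()                              # normalize to a full allocation
```

Shrinking toward the prior dampens the sensitivity to estimation error that the abstract criticizes in the naive approach; note the unconstrained solution may assign a negative weight, which a real planning model would typically constrain away.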
Gollan, A.
1988-03-29
Feed gas is directed tangentially along the non-skin surface of gas separation membrane modules comprising a cylindrical bundle of parallel contiguous hollow fibers supported to allow feed gas to flow from an inlet at one end of a cylindrical housing through the bores of the bundled fibers to an outlet at the other end while a component of the feed gas permeates through the fibers, each having the skin side on the outside, through a permeate outlet in the cylindrical casing. 3 figs.