DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou
2010-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
Gibbs sampling on large lattice with GMRF
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which enable the conditional distributions at any point to be computed without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
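The coding-set update described above can be sketched in a few lines. The following toy implementation is our illustration, not the authors' code: it uses a first-order CAR/GMRF on a torus, with the parameter names alpha and tau as assumptions, and updates the black/white checkerboard sets simultaneously via convolution.

```python
# A minimal sketch of simultaneous Gibbs updates on "coding sets" for a
# first-order CAR/GMRF on a periodic lattice (torus). Model and parameters
# (alpha, tau) are illustrative assumptions, not the paper's exact setup.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
n = 64                      # lattice size (n x n)
alpha, tau = 0.95, 1.0      # CAR dependence parameter (|alpha| < 1), precision scale
kernel = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])  # 4-nearest-neighbour sum

x = rng.standard_normal((n, n))
# Checkerboard coding sets: sites in one set share no neighbours, so they
# can all be updated at once from their full conditionals.
ij = np.add.outer(np.arange(n), np.arange(n))
black, white = (ij % 2 == 0), (ij % 2 == 1)

for sweep in range(500):
    for mask in (black, white):
        nbr_sum = convolve(x, kernel, mode='wrap')   # torus boundary
        cond_mean = (alpha / 4.0) * nbr_sum          # E[x_i | neighbours]
        cond_sd = 1.0 / np.sqrt(4.0 * tau)           # Var = 1/(tau * d_i), d_i = 4
        noise = rng.standard_normal((n, n))
        x[mask] = cond_mean[mask] + cond_sd * noise[mask]
```

In the truncated case described in the abstract, each block update would additionally be accepted or rejected against the category constraints.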
Geometric and Texture Inpainting by Gibbs Sampling
DEFF Research Database (Denmark)
Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads
2007-01-01
. In this paper we use the well-known FRAME (Filters, Random Fields and Maximum Entropy) model for inpainting. We introduce a temperature term in the learned FRAME Gibbs distribution. By sampling using different temperatures in the FRAME Gibbs distribution, different contents of the image are reconstructed. We propose...... a two-step method for inpainting using FRAME. First the geometric structure of the image is reconstructed by sampling from a cooled Gibbs distribution, then the stochastic component is reconstructed by sampling from a heated Gibbs distribution. Both steps in the reconstruction process are necessary...
Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus
2012-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given....... We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
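The key idea, that the proposal re-simulates part of the model from a prior sampler so the prior never has to be evaluated in closed form, can be sketched as follows. This is our hedged illustration of the mechanism, not the authors' implementation; prior_resimulate and log_likelihood are placeholders the user must supply.

```python
# Sketch of an "extended Metropolis" step: the proposal perturbs the model by
# re-simulating a random subset of parameters conditional on the rest, using a
# prior sampler; because the proposal is drawn from the prior, the acceptance
# ratio involves only the likelihood (the prior terms cancel).
import numpy as np

rng = np.random.default_rng(1)

def prior_resimulate(m, frac, rng):
    """Placeholder conditional prior sampler: re-draw a random fraction of m.
    In sequential Gibbs this would be, e.g., a geostatistical simulation
    conditioned on the unchanged components."""
    m_new = m.copy()
    idx = rng.random(m.size) < frac
    m_new[idx] = rng.standard_normal(idx.sum())   # stand-in for the real prior
    return m_new

def log_likelihood(m):
    """Placeholder data misfit; replace with the forward model of the problem."""
    return -0.5 * np.sum((m - 1.0) ** 2)

m = rng.standard_normal(100)
ll = log_likelihood(m)
for it in range(10_000):
    m_prop = prior_resimulate(m, frac=0.1, rng=rng)   # small frac -> high acceptance
    ll_prop = log_likelihood(m_prop)
    if np.log(rng.random()) < ll_prop - ll:
        m, ll = m_prop, ll_prop
```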
Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach
DEFF Research Database (Denmark)
Andreatta, Massimo; Lund, Ole; Nielsen, Morten
2013-01-01
Motivation: Proteins recognizing short peptide fragments play a central role in cellular signaling. As a result of high-throughput technologies, peptide-binding protein specificities can be studied using large peptide libraries at dramatically lower cost and time. Interpretation of such large...... peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs, and/or the motifs are found at different locations within distinct peptides. Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities...... of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule. Availability: The Gibbs clustering method...
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
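To make the setting concrete, here is a toy Gibbs sampler over a chain-structured factor graph with binary variables. The factors and weights are illustrative assumptions, and nothing here implements the paper's hierarchy-width analysis.

```python
# Toy Gibbs sampler on a factor graph: each variable is resampled from its
# conditional, which is proportional to the product of its adjacent factors.
import numpy as np

rng = np.random.default_rng(2)
n = 10
w = 0.8
pair = np.exp(w * np.array([[1., -1.], [-1., 1.]]))   # attractive pairwise factor
# Factors: (tuple of variables, table indexed by the variables' 0/1 values).
factors = [((i, i + 1), pair) for i in range(n - 1)]
adj = {v: [f for f in factors if v in f[0]] for v in range(n)}
x = rng.integers(0, 2, size=n)

def unnorm(v, val):
    """Product of factors adjacent to variable v, with v set to val."""
    p = 1.0
    for vars_, table in adj[v]:
        idx = tuple(val if u == v else x[u] for u in vars_)
        p *= table[idx]
    return p

for sweep in range(1000):
    for v in range(n):                     # systematic scan over variables
        p0, p1 = unnorm(v, 0), unnorm(v, 1)
        x[v] = int(rng.random() < p1 / (p0 + p1))
```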
Nan, Ning; Chen, Qi; Wang, Yu; Zhai, Xu; Yang, Chuan-Ce; Cao, Bin; Chong, Tie
2017-10-01
To explore the disturbed molecular functions and pathways in clear cell renal cell carcinoma (ccRCC) using Gibbs sampling. Gene expression data of ccRCC samples and adjacent non-tumor renal tissues were retrieved from publicly available databases. Molecular functions of genes with altered expression in ccRCC were then mapped to the Gene Ontology (GO) project, and these molecular functions were converted into Markov chains. A Markov chain Monte Carlo (MCMC) algorithm was implemented to perform posterior inference and identify probability distributions of molecular functions in Gibbs sampling. Differentially expressed molecular functions were selected at a posterior value greater than 0.95, and genes appearing in at least five differentially expressed molecular functions were defined as pivotal genes. Functional analysis was employed to explore the pathways of pivotal genes and their strongly co-regulated genes. In this work, we obtained 396 molecular functions, 13 of which were differentially expressed. Oxidoreductase activity showed the highest posterior value. Gene composition analysis identified 79 pivotal genes, and survival analysis indicated that these pivotal genes could be used as a strong independent predictor of poor prognosis in patients with ccRCC. Pathway analysis identified one pivotal pathway, oxidative phosphorylation. We identified the differentially expressed molecular functions and the pivotal pathway in ccRCC using Gibbs sampling. The results could be considered as potential signatures for early detection and therapy of ccRCC.
Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough
DEFF Research Database (Denmark)
Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten
2013-01-01
. We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks, we find that the mixing ability decreases drastically with the network size, clearly indicating a need...
Gibbs-non-Gibbs transitions and vector-valued integration
Zuijlen, van W.B.
2016-01-01
This thesis consists of two distinct topics. The first part of the thesis considers Gibbs-non-Gibbs transitions. Gibbs measures describe the macroscopic state of a system of a large number of components that is in equilibrium. It may happen that when the system is transformed, for example, by
Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach
DEFF Research Database (Denmark)
Nielsen, Morten; Lundegaard, Claus; Worning, Peder
2004-01-01
Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution complicating such predictions. Thus......, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates...
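The underlying Gibbs motif-sampling loop (leave one sequence out, re-estimate the motif model from the rest, re-sample the held-out sequence's alignment) can be sketched as follows. This is a bare-bones illustration in the spirit of the classic Gibbs motif sampler of Lawrence et al. (1993), on which the method builds, not the authors' implementation; the motif length, pseudocounts and absence of a background model are simplifying assumptions.

```python
# Bare-bones Gibbs motif sampler for fixed-length motif cores in protein
# sequences; alphabet, pseudocounts and scoring are simplified assumptions.
import numpy as np

rng = np.random.default_rng(3)
alphabet = "ACDEFGHIKLMNPQRSTVWY"
a2i = {a: i for i, a in enumerate(alphabet)}
L = 9  # motif (core) length, e.g. MHC class II binding core

def pwm(seqs, starts, skip):
    """Position-specific frequency matrix from all sequences except `skip`."""
    counts = np.ones((L, len(alphabet)))              # +1 pseudocounts
    for s, (seq, st) in enumerate(zip(seqs, starts)):
        if s == skip:
            continue
        for j in range(L):
            counts[j, a2i[seq[st + j]]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def gibbs_motif(seqs, iters=2000):
    starts = [rng.integers(0, len(s) - L + 1) for s in seqs]
    for it in range(iters):
        i = rng.integers(len(seqs))                   # leave one sequence out
        W = np.log(pwm(seqs, starts, skip=i))
        npos = len(seqs[i]) - L + 1
        score = np.array([sum(W[j, a2i[seqs[i][p + j]]] for j in range(L))
                          for p in range(npos)])
        p = np.exp(score - score.max())
        starts[i] = rng.choice(npos, p=p / p.sum())   # sample a new alignment
    return starts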
DEFF Research Database (Denmark)
Rodek, L.; Knudsen, E.; Poulsen, H.F.
2005-01-01
discrete tomographic algorithm, applying image-modelling Gibbs priors and a homogeneity condition. The optimization of the objective function is accomplished via the Gibbs Sampler in conjunction with simulated annealing. In order to express the structure of the orientation map, the similarity...
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...
Indian Academy of Sciences (India)
The younger Gibbs grew up in the liberal and academic atmosphere at Yale, where .... research in the premier European universities at the time when a similar culture ... tion in obscure journals, Gibbs' work did not receive wide recognition in ...
Quantum Gibbs Samplers: The Commuting Case
Kastoryano, Michael J.; Brandão, Fernando G. S. L.
2016-06-01
We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing properties of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative $\mathbb{L}_p$ spaces. We consider two distinct classes of Gibbs samplers, one of them being the well-studied Davies generator modelling the dynamics of a system due to weak-coupling with a large Markovian environment. We show that their spectral gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. Therefore every Gibbs state of a commuting Hamiltonian that satisfies clustering of correlations in this strong sense can be prepared efficiently on a quantum computer. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at temperatures above a certain threshold, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing the associated Gibbs states on a quantum computer.
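For reference, the object being prepared here is the thermal (Gibbs) state; in the standard notation,

```latex
\rho_\beta \;=\; \frac{e^{-\beta H}}{\operatorname{Tr}\, e^{-\beta H}},
\qquad \beta = \frac{1}{k_B T},
```

and a Gibbs sampler is a semigroup $e^{t\mathcal{L}}$ with $e^{t\mathcal{L}}(\rho) \to \rho_\beta$ as $t \to \infty$, whose mixing time is controlled by the spectral gap of $\mathcal{L}$.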
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p......
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.
He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher
2016-01-01
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
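The two scan orders are easy to state in code. The sketch below is our illustration, not the paper's: both chains target a strongly correlated bivariate Gaussian whose full conditionals are known in closed form.

```python
# Systematic vs. random scan Gibbs sampling for N(0, [[1, r], [r, 1]]);
# the conditional of x_i given x_j is N(r * x_j, 1 - r^2).
import numpy as np

rng = np.random.default_rng(4)
r = 0.99
sd = np.sqrt(1 - r ** 2)

def gibbs(scan, n_iter=10_000):
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Systematic scan visits coordinates in fixed order; random scan
        # picks a uniformly random coordinate at each update (and may
        # therefore update the same coordinate twice in a "sweep").
        order = (0, 1) if scan == "systematic" else tuple(rng.integers(0, 2, 2))
        for i in order:
            x[i] = r * x[1 - i] + sd * rng.standard_normal()
        out[t] = x
    return out

sys_chain = gibbs("systematic")
rand_chain = gibbs("random")   # compare autocorrelations of the two chains
```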
Near-Optimal Detection in MIMO Systems using Gibbs Sampling
DEFF Research Database (Denmark)
Hansen, Morten; Hassibi, Babak; Dimakis, Georgios Alexandros
2009-01-01
In this paper we study a Markov Chain Monte Carlo (MCMC) Gibbs sampler for solving the integer least-squares problem. In digital communication the problem is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. While the use of MCMC methods...... sampler provides a computationally efficient way of achieving approximative ML detection in MIMO systems having a huge number of transmit and receive dimensions. In fact, they further suggest that the Markov chain is rapidly mixing. Thus, it has been observed that even in cases where ML detection using, e...
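A hedged sketch of the kind of sampler being studied: coordinate-wise Gibbs updates on x in {-1, +1}^n targeting exp(-||y - Hx||^2 / (2*sigma2)), keeping the best visited state as the approximate ML decision. The temperature sigma2 and the sweep count are our assumptions, not the paper's tuning.

```python
# Gibbs sampling for integer least squares min_x ||y - Hx||^2, x in {-1, +1}^n
# (BPSK MIMO detection); the best state visited is returned as approximate ML.
import numpy as np

rng = np.random.default_rng(5)
n = 16
H = rng.standard_normal((n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.standard_normal(n)

def gibbs_detect(y, H, sigma2=0.01, sweeps=200):
    n = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=n)
    best, best_cost = x.copy(), np.sum((y - H @ x) ** 2)
    for s in range(sweeps):
        for i in range(n):
            costs = {}
            for v in (-1.0, 1.0):              # residual cost for each value of x_i
                x[i] = v
                costs[v] = np.sum((y - H @ x) ** 2)
            # P(x_i = +1 | rest) from the Gibbs conditional at temperature sigma2
            p1 = 1.0 / (1.0 + np.exp((costs[1.0] - costs[-1.0]) / (2 * sigma2)))
            x[i] = 1.0 if rng.random() < p1 else -1.0
            if costs[x[i]] < best_cost:
                best, best_cost = x.copy(), costs[x[i]]
    return best

x_hat = gibbs_detect(y, H)
```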
International Nuclear Information System (INIS)
Chan, M.T.; Herman, G.T.; Levitan, E.
1996-01-01
We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate from it the original image (based on a model of distortion and on a Gibbs distribution as the prior) and (ii) by selecting an "image-modeling" prior distribution (i.e., one which is such that it is likely that a random sample from it shares important characteristics of the images of the application area) one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach
DEFF Research Database (Denmark)
Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg
2010-01-01
Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where...... records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to the sire-dam model). Conclusions The new algorithm to estimate genetic parameters via Gibbs sampling solves the bias problems typically occurring in animal...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative...
Notes on the development of the Gibbs potential
Energy Technology Data Exchange (ETDEWEB)
Bloch, C; Dominicis, C de [Commissariat a l'Energie Atomique, Saclay (France), Centre d'Etudes Nucleaires]
1959-07-01
A short account is given of some recent work on the perturbation expansion of the Gibbs potential of quantum statistical mechanics. (author)
GPU-accelerated Gibbs ensemble Monte Carlo simulations of Lennard-Jonesium
Mick, Jason; Hailat, Eyad; Russo, Vincent; Rushaidat, Kamel; Schwiebert, Loren; Potoff, Jeffrey
2013-12-01
This work describes an implementation of canonical and Gibbs ensemble Monte Carlo simulations on graphics processing units (GPUs). The pair-wise energy calculations, which consume the majority of the computational effort, are parallelized using the energetic decomposition algorithm. While energetic decomposition is relatively inefficient for traditional CPU-bound codes, the algorithm is ideally suited to the architecture of the GPU. The performance of the CPU and GPU codes are assessed for a variety of CPU and GPU combinations for systems containing between 512 and 131,072 particles. For a system of 131,072 particles, the GPU-enabled canonical and Gibbs ensemble codes were 10.3 and 29.1 times faster (GTX 480 GPU vs. i5-2500K CPU), respectively, than an optimized serial CPU-bound code. Due to overhead from memory transfers from system RAM to the GPU, the CPU code was slightly faster than the GPU code for simulations containing less than 600 particles. The critical temperature Tc∗=1.312(2) and density ρc∗=0.316(3) were determined for the tail corrected Lennard-Jones potential from simulations of 10,000 particle systems, and found to be in exact agreement with prior mixed field finite-size scaling calculations [J.J. Potoff, A.Z. Panagiotopoulos, J. Chem. Phys. 109 (1998) 10914].
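The energetic decomposition amounts to computing one particle's interaction energy with all others as a single parallel reduction. A CPU/NumPy analogue (our illustration; the paper's implementation is GPU code) looks like this:

```python
# Vectorized Lennard-Jones energy of one particle with all others, the
# per-particle reduction that the energetic decomposition parallelizes;
# no cutoff or tail correction is applied in this sketch.
import numpy as np

def particle_energy(i, pos, box, eps=1.0, sig=1.0):
    """LJ energy of particle i under the minimum-image convention."""
    d = np.delete(pos - pos[i], i, axis=0)
    d -= box * np.round(d / box)              # periodic minimum image
    r2 = np.sum(d * d, axis=1)
    s6 = (sig ** 2 / r2) ** 3
    return np.sum(4.0 * eps * (s6 ** 2 - s6))

rng = np.random.default_rng(6)
box = 10.0
pos = rng.random((512, 3)) * box
dE = particle_energy(0, pos, box)   # energy term used in trial-move acceptance
```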
Advanced Markov chain Monte Carlo methods learning from past samples
Liang, Faming; Carrol, Raymond J
2010-01-01
This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. This book includes the multicanonical algorithm, dynamic weighting, dynamically weight...
International Nuclear Information System (INIS)
Potters, Max; Vaillant, Timothee; Bouchet, Freddy
2013-01-01
The 2D Euler equations are basic examples of fluid models for which a microcanonical measure can be constructed from first principles. This measure is defined through finite-dimensional approximations and a limiting procedure. Creutz’s algorithm is a microcanonical generalization of the Metropolis–Hastings algorithm (to sample Gibbs measures, in the canonical ensemble). We prove that Creutz’s algorithm can sample finite-dimensional approximations of the 2D Euler microcanonical measures (incorporating fixed energy and other invariants). This is essential as microcanonical and canonical measures are known to be inequivalent at some values of energy and vorticity distribution. Creutz’s algorithm is used to check predictions from the mean-field statistical mechanics theory of the 2D Euler equations (the Robert–Sommeria–Miller theory). We find full agreement with theory. Three different ways to compute the temperature give consistent results. Using Creutz’s algorithm, a first-order phase transition never observed previously and a situation of statistical ensemble inequivalence are found and studied. Strikingly, and in contrast to the usual statistical mechanics interpretations, this phase transition appears from a disordered phase to an ordered phase (with fewer symmetries) when the energy is increased. We explain this paradox. (paper)
An Introduction to the DA-T Gibbs Sampler for the Two-Parameter Logistic (2PL) Model and Beyond
Directory of Open Access Journals (Sweden)
Gunter Maris
2005-01-01
The DA-T Gibbs sampler is proposed by Maris and Maris (2002) as a Bayesian estimation method for a wide variety of Item Response Theory (IRT) models. The present paper provides an expository account of the DA-T Gibbs sampler for the 2PL model. However, the scope is not limited to the 2PL model. It is demonstrated how the DA-T Gibbs sampler for the 2PL may be used to build, quite easily, Gibbs samplers for other IRT models. Furthermore, the paper contains a novel, intuitive derivation of the Gibbs sampler and could serve as reading for a graduate course on sampling.
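For readers unfamiliar with the model, the two-parameter logistic (2PL) model assigns a correct-response probability in the standard notation (discrimination $a_i$, difficulty $b_i$, ability $\theta_p$); this is general IRT background, not taken from the article:

```latex
P(X_{pi} = 1 \mid \theta_p) \;=\;
\frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}} .
```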
Enzyme Catalysis and the Gibbs Energy
Ault, Addison
2009-01-01
Gibbs-energy profiles are often introduced during the first semester of organic chemistry, but are less often presented in connection with enzyme-catalyzed reactions. In this article I show how the Gibbs-energy profile corresponds to the characteristic kinetics of a simple enzyme-catalyzed reaction. (Contains 1 figure and 1 note.)
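As a reminder of what such a profile encodes, the link between the Gibbs energy of activation and the rate constant is the standard transition-state (Eyring) expression; this equation is textbook background, not taken from the article:

```latex
k \;=\; \frac{k_B T}{h}\, e^{-\Delta G^{\ddagger}/RT},
```

so the highest Gibbs-energy barrier along the profile determines the rate-limiting step of the catalyzed reaction.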
A sampling algorithm for segregation analysis
Directory of Open Access Journals (Sweden)
Henshall John
2001-11-01
Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Monte Carlo Markov chain (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or found, its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated.
Inferring the Gibbs state of a small quantum system
International Nuclear Information System (INIS)
Rau, Jochen
2011-01-01
Gibbs states are familiar from statistical mechanics, yet their use is not limited to that domain. For instance, they also feature in the maximum entropy reconstruction of quantum states from incomplete measurement data. Outside the macroscopic realm, however, estimating a Gibbs state is a nontrivial inference task, due to two complicating factors: the proper set of relevant observables might not be evident a priori; and whenever data are gathered from a small sample only, the best estimate for the Lagrange parameters is invariably affected by the experimenter's prior bias. I show how the two issues can be tackled with the help of Bayesian model selection and Bayesian interpolation, respectively, and illustrate the use of these Bayesian techniques with a number of simple examples.
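The Gibbs states in question here are maximum-entropy states; in the standard notation, with relevant observables $A_a$ and Lagrange parameters $\lambda_a$,

```latex
\rho \;=\; \frac{1}{Z}\exp\Big(-\sum_a \lambda_a A_a\Big),
\qquad
Z \;=\; \operatorname{Tr}\exp\Big(-\sum_a \lambda_a A_a\Big),
```

with the $\lambda_a$ fixed by matching $\operatorname{Tr}(\rho A_a)$ to the measured expectation values. The two inference problems the abstract raises are which $A_a$ to include, and how to estimate the $\lambda_a$ from finite data.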
Evolution algebras generated by Gibbs measures
International Nuclear Information System (INIS)
Rozikov, Utkir A.; Tian, Jianjun Paul
2009-03-01
In this article we study algebraic structures of function spaces defined by graphs and state spaces equipped with Gibbs measures by associating evolution algebras. We give a constructive description of associating evolution algebras to the function spaces (cell spaces) defined by graphs and state spaces and a Gibbs measure μ. For finite graphs we find some evolution subalgebras and other useful properties of the algebras. We obtain a structure theorem for evolution algebras when graphs are finite and connected. We prove that for a fixed finite graph, the function spaces have a unique algebraic structure, since all evolution algebras are isomorphic to each other for whichever Gibbs measures are assigned. When the graphs are infinite, our construction allows a natural introduction of thermodynamics into the study of several systems in biology, physics and mathematics via the theory of evolution algebras. (author)
Finite Cycle Gibbs Measures on Permutations of
Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia
2015-03-01
We consider Gibbs distributions on a set of permutations, associated to a Hamiltonian defined in terms of a strictly convex potential. Call finite-cycle those permutations composed of finite cycles only. We give conditions on the potential ensuring that for large enough temperature there exists a unique infinite-volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct this measure as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss network, a continuous-time birth-and-death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. In the Gaussian case, we show that each shift permutation induces an ergodic Gibbs measure equal to the thermodynamic limit of the specifications with the corresponding shift boundary conditions. For a general potential, we prove the existence of Gibbs measures when the temperature is larger than some potential-dependent value.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
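A compact illustration of the setting (ours, not the paper's rectified algorithm): pairwise Ising maximum-entropy learning in which the model moments appearing in the log-likelihood gradient are estimated by Gibbs sampling. Sizes, the learning rate and the synthetic data are arbitrary assumptions.

```python
# Maximum-entropy (pairwise Ising) learning by gradient ascent on the
# log-likelihood: gradient = data moments - model moments, with the model
# moments estimated by Gibbs sampling.
import numpy as np

rng = np.random.default_rng(7)
n = 10
data = rng.choice([-1, 1], size=(500, n))        # stand-in for recorded spins

def gibbs_moments(h, J, sweeps=2000):
    s = rng.choice([-1, 1], size=n).astype(float)
    m1 = np.zeros(n); m2 = np.zeros((n, n)); cnt = 0
    for t in range(sweeps):
        for i in range(n):
            field = h[i] + J[i] @ s - J[i, i] * s[i]
            # P(s_i = +1 | rest) = 1 / (1 + exp(-2 * field))
            s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * field)) else -1.0
        if t > sweeps // 2:                       # discard burn-in
            m1 += s; m2 += np.outer(s, s); cnt += 1
    return m1 / cnt, m2 / cnt

h = np.zeros(n); J = np.zeros((n, n))
d1 = data.mean(axis=0); d2 = (data.T @ data) / len(data)
for step in range(50):
    m1, m2 = gibbs_moments(h, J)
    h += 0.1 * (d1 - m1)
    J += 0.1 * (d2 - m2); np.fill_diagonal(J, 0.0)
```

The stochasticity this Gibbs estimate injects into the parameter updates is precisely what the abstract's long-time analysis characterizes.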
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
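One of the simpler resampling steps mentioned, homothetic rescaling of the historical curve followed by inverse transform sampling, can be sketched as follows; the linear interpolation used for the rescaling is our assumption, not necessarily the paper's transformation.

```python
# Simulate an outbreak signal of n_days and n_cases from a historical curve:
# rescale the curve to the target duration, then redistribute the target
# number of cases by inverse transform sampling (ITSM) on the implied CDF.
import numpy as np

rng = np.random.default_rng(8)
historical = np.array([1, 3, 8, 15, 20, 14, 7, 3, 1], dtype=float)  # daily cases

def simulate_outbreak(historical, n_days, n_cases, rng):
    x_old = np.linspace(0, 1, len(historical))
    x_new = np.linspace(0, 1, n_days)
    shape = np.interp(x_new, x_old, historical)   # homothetic stretch in time
    pmf = shape / shape.sum()
    days = np.searchsorted(np.cumsum(pmf), rng.random(n_cases))
    days = np.minimum(days, n_days - 1)           # guard against float round-off
    return np.bincount(days, minlength=n_days)

signal = simulate_outbreak(historical, n_days=14, n_cases=120, rng=rng)
```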
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
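Of the three methods reviewed, the replica-exchange step is the easiest to show compactly. The sketch below uses a toy scalar system rather than a biomolecular force field: Metropolis updates within each replica alternate with neighbour swaps accepted with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]).

```python
# Schematic replica-exchange (parallel tempering) Monte Carlo on a toy
# one-dimensional potential; temperatures and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(9)
betas = np.array([1.0, 0.7, 0.5, 0.35])        # inverse temperatures
states = rng.standard_normal(len(betas))       # one "configuration" per replica

def energy(x):
    return 0.5 * x ** 2                        # toy potential energy

for it in range(10_000):
    for k, beta in enumerate(betas):           # Metropolis within each replica
        prop = states[k] + 0.5 * rng.standard_normal()
        if np.log(rng.random()) < -beta * (energy(prop) - energy(states[k])):
            states[k] = prop
    i = rng.integers(len(betas) - 1)           # attempt a neighbour swap
    d = (betas[i] - betas[i + 1]) * (energy(states[i]) - energy(states[i + 1]))
    if np.log(rng.random()) < d:               # detailed-balance swap criterion
        states[i], states[i + 1] = states[i + 1], states[i]
```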
Gibbs perturbations of a two-dimensional gauge field
International Nuclear Information System (INIS)
Petrova, E.N.
1981-01-01
Small Gibbs perturbations of random fields have been investigated up to now for a few initial fields only. Among them there are independent fields, Gaussian fields and some others. The possibility of investigating Gibbs modifications of a random field depends essentially on the existence of good estimates for the semi-invariants of this field. This is the reason why the class of random fields for which the investigation of Gibbs perturbations with an arbitrary potential of bounded support is possible is rather small. The author takes as the initial field a well-known model: a two-dimensional gauge field. (Auth.)
International Nuclear Information System (INIS)
Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.
2011-01-01
Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.
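For orientation, Butler's formulation referred to above is commonly written as one equation per component $i$, with $A_i$ the partial molar surface area and $a_i^S$, $a_i^B$ the activities in the surface monolayer and in the bulk; the notation is the commonly used one and is our assumption, not necessarily the authors' exact form:

```latex
\sigma \;=\; \sigma_i + \frac{RT}{A_i}\,\ln\frac{a_i^S}{a_i^B},
\qquad i = 1, \dots, n.
```

The Gibbs-Thomson coefficient is then typically obtained as $\Gamma = \sigma_{SL}/\Delta S_f$, the solid-liquid interfacial energy divided by the volumetric entropy of fusion.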
Reflections on Gibbs: From Critical Phenomena to the Amistad
Kadanoff, Leo P.
2003-03-01
J. Willard Gibbs, the younger, was the first American theorist. He was one of the inventors of statistical physics. His introduction and development of the concepts of phase space, phase transitions, and thermodynamic surfaces was remarkably correct and elegant. These three concepts form the basis of different but related areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. I shall talk about these connections by using concepts suggested by the work of Michael Berry and explicitly put forward by the philosopher Robert Batterman. This viewpoint relates theory-connection to the applied mathematics concepts of asymptotic analysis and singular perturbations. J. Willard Gibbs, the younger, had all his achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great achievement that remains unmatched in our day. I shall describe it.
Two General Extension Algorithms of Latin Hypercube Sampling
Directory of Open Access Journals (Sweden)
Zhi-zhao Liu
2015-01-01
To reserve original sampling points and thereby reduce the number of simulation runs, two general extension algorithms of Latin Hypercube Sampling (LHS) are proposed. The extension algorithms start with an original LHS of size m and construct a new LHS of size m+n that contains as many of the original points as possible. In order to get a strict LHS of larger size, some original points might be deleted. The relationship of the original sampling points in the new LHS structure is shown by a simple undirected acyclic graph. The basic general extension algorithm is proposed to reserve the most original points, but it costs too much time. Therefore, a general extension algorithm based on a greedy algorithm is proposed to reduce the extension time, although it cannot guarantee that the most original points are retained. These algorithms are illustrated by an example and applied to evaluating sample means to demonstrate their effectiveness.
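For concreteness, here is a minimal generator of the object the extension algorithms start from, an LHS of size m; the extension step itself (reusing old points inside a finer m+n stratification) is not reproduced here.

```python
# Minimal Latin Hypercube Sample: one point per equal-probability stratum in
# each dimension, with independently shuffled stratum assignments.
import numpy as np

def lhs(m, d, rng):
    strata = np.column_stack([rng.permutation(m) for _ in range(d)])
    return (strata + rng.random((m, d))) / m   # points in [0, 1)^d

rng = np.random.default_rng(10)
X = lhs(8, 2, rng)   # an original design of size m = 8 in two dimensions
```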
Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.
2018-04-01
The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied to quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. Such a coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which will depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting liquidus temperature, latent heat, alloy density, surface tension and Gibbs-Thomson coefficient for Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of calculation capabilities for multicomponent alloys of the proposed method. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and presented as a function of the Cu solute composition.
Energy Technology Data Exchange (ETDEWEB)
Leblanc, B.
2002-03-01
Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov Chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. the ability to rapidly provide uncorrelated states covering the whole configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. We first searched for efficiency measures that can be turned into efficiency criteria, before identifying parameters that could be optimized. Relative frequencies for each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm has been applied to improve the Parallel Tempering technique, in order to optimize at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures. Finally, hints are given for further research on optimizing the choice of additional temperatures. (author)
A brief critique of the Adam-Gibbs entropy model
DEFF Research Database (Denmark)
Dyre, J. C.; Hecksher, Tina; Niss, Kristine
2009-01-01
This paper critically discusses the entropy model proposed by Adam and Gibbs in 1965 for the dramatic temperature dependence of glass-forming liquids' average relaxation time, which has been one of the most influential models of the last four decades. We discuss the Adam-Gibbs model's theoretical...
Thermodynamic fluctuations within the Gibbs and Einstein approaches
International Nuclear Information System (INIS)
Rudoi, Yurii G; Sukhanov, Alexander D
2000-01-01
A comparative analysis of the descriptions of fluctuations in statistical mechanics (the Gibbs approach) and in statistical thermodynamics (the Einstein approach) is given. On this basis solutions are obtained for the Gibbs and Einstein problems that arise in pressure fluctuation calculations for a spatially limited equilibrium (or slightly nonequilibrium) macroscopic system. A modern formulation of the Gibbs approach which allows one to calculate equilibrium pressure fluctuations without making any additional assumptions is presented; to this end the generalized Bogolyubov - Zubarev and Hellmann - Feynman theorems are proved for the classical and quantum descriptions of a macrosystem. A statistical version of the Einstein approach is developed which shows a fundamental difference in pressure fluctuation results obtained within the context of two approaches. Both the 'genetic' relation between the Gibbs and Einstein approaches and the conceptual distinction between their physical grounds are demonstrated. To illustrate the results, which are valid for any thermodynamic system, an ideal nondegenerate gas of microparticles is considered, both classically and quantum mechanically. Based on the results obtained, the correspondence between the micro- and macroscopic descriptions is considered and the prospects of statistical thermodynamics are discussed. (reviews of topical problems)
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
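The cutpoint method mentioned above can be sketched compactly: a table of m precomputed starting indices turns CDF inversion into a short local search instead of a sequential scan. This is a generic illustration of the technique (Chen and Asau's method), not the study's code; the table size m is a tuning choice.

```python
# Cutpoint method for generalized sampling of a discrete distribution:
# I[j] is the first index k with cdf[k] > j/m, so inversion for a uniform
# draw u starts at I[floor(m*u)] and only needs a few sequential steps.
import numpy as np

rng = np.random.default_rng(11)
p = rng.random(1000); p /= p.sum()        # an arbitrary discrete distribution
cdf = np.cumsum(p)
m = len(p)                                 # number of cutpoints (tunable)
I = np.searchsorted(cdf, np.arange(m) / m, side='right')

def sample(u):
    k = I[int(m * u)]                      # jump to the cutpoint...
    while cdf[k] < u:                      # ...then a short local search
        k += 1
    return k

draws = [sample(rng.random()) for _ in range(10_000)]
```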
Reflections on Gibbs: From Statistical Physics to the Amistad V3.0
Kadanoff, Leo P.
2014-07-01
This note is based upon a talk given at an APS meeting in celebration of the achievements of J. Willard Gibbs. J. Willard Gibbs, the younger, was the first American physical sciences theorist. He was one of the inventors of statistical physics. He introduced and developed the concepts of phase space, phase transitions, and thermodynamic surfaces in a remarkably correct and elegant manner. These three concepts form the basis of different areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. This talk therefore celebrated Gibbs by describing modern ideas about how different parts of physics fit together. I finished with a more personal note. Our own J. Willard Gibbs had all his many achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great non-academic achievement that remains unmatched in our day. I describe it.
Gibbs phenomenon for dispersive PDEs on the line
Biondini, Gino; Trogdon, Thomas
2014-01-01
We investigate the Cauchy problem for linear, constant-coefficient evolution PDEs on the real line with discontinuous initial conditions (ICs) in the small-time limit. The small-time behavior of the solution near discontinuities is expressed in terms of universal, computable special functions. We show that the leading-order behavior of the solution of dispersive PDEs near a discontinuity of the ICs is characterized by Gibbs-type oscillations and gives exactly the Wilbraham-Gibbs constant.
International Nuclear Information System (INIS)
Lucka, Felix
2012-01-01
Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems by encoding them in the prior distribution has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis-Hastings (MH) sampling schemes. We demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample based Bayesian inference. (paper)
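To make "single-component Gibbs" concrete, here is a naive griddy variant that samples each full conditional on a grid, for a Gaussian likelihood with an L1 prior. This is only a stand-in for illustration; the paper's sampler draws from the exact piecewise-Gaussian conditional, and lambda, sigma2 and the grid are our assumptions.

```python
# Griddy single-component Gibbs: each coordinate is resampled from its full
# conditional, evaluated on a fixed grid and normalized numerically.
import numpy as np

rng = np.random.default_rng(12)

def single_component_step(x, i, log_post, grid):
    """Resample component i from its (gridded) full conditional."""
    lp = np.array([log_post(np.concatenate([x[:i], [g], x[i+1:]]))
                   for g in grid])
    w = np.exp(lp - lp.max())
    x[i] = rng.choice(grid, p=w / w.sum())
    return x

# Example target: Gaussian likelihood plus L1 (Laplace-type) prior.
A = rng.standard_normal((20, 5))
y = A @ np.array([1.0, 0.0, 0.0, -2.0, 0.0])
lam, sigma2 = 5.0, 0.1
log_post = lambda x: (-0.5 * np.sum((y - A @ x) ** 2) / sigma2
                      - lam * np.sum(np.abs(x)))

x = np.zeros(5)
grid = np.linspace(-3, 3, 201)
for sweep in range(200):
    for i in range(5):
        x = single_component_step(x, i, log_post, grid)
```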
A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields
Energy Technology Data Exchange (ETDEWEB)
Liu, J.-S. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Vojinovic, V. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Patino, R. [Cinvestav-Merida, Departamento de Fisica Aplicada, Km. 6 carretera antigua a Progreso, AP 73 Cordemex, 97310 Merida, Yucatan (Mexico); Maskow, Th. [UFZ Centre for Environmental Research, Department of Environmental Microbiology, Permoserstrasse 15, D-04318 Leipzig (Germany); Stockar, U. von [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland)]. E-mail: urs.vonStockar@epfl.ch
2007-06-25
Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol⁻¹ of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.
Extension of Gibbs-Duhem equation including influences of external fields
Guangze, Han; Jianjia, Meng
2018-03-01
The Gibbs-Duhem equation is one of the fundamental equations in thermodynamics, describing the relation among changes in temperature, pressure and chemical potential. A thermodynamic system can be affected by external fields, and this effect should be revealed by thermodynamic equations. Based on the energy postulate and the first law of thermodynamics, the differential equation of internal energy is extended to include the properties of external fields. Then, with the homogeneous function theorem and a redefinition of the Gibbs energy, a generalized Gibbs-Duhem equation including the influences of external fields is derived. As a demonstration of the application of this generalized equation, the influences of temperature and external electric field on surface tension, surface adsorption controlled by an external electric field, and the derivation of a generalized chemical potential expression are discussed, which shows that the extended Gibbs-Duhem equation developed in this paper is capable of capturing the influences of external fields on a thermodynamic system.
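For reference, the classical Gibbs-Duhem equation being generalized reads

```latex
S\,\mathrm{d}T \;-\; V\,\mathrm{d}P \;+\; \sum_i N_i\,\mathrm{d}\mu_i \;=\; 0,
```

and the extension described amounts to carrying additional conjugate work terms for the external fields (for example, an electric-field term) through the same Euler/homogeneous-function argument; the precise form of those terms is as derived in the paper.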
Külske, C
2003-01-01
We derive useful general concentration inequalities for functions of Gibbs fields in the uniqueness regime. We also consider expectations of random Gibbs measures that depend on an additional disorder field, and prove concentration w.r.t. the disorder field. Both fields are assumed to be in the uniqueness regime, allowing in particular for a non-independent disorder field. The modification of the bounds compared to the case of an independent field can be expressed in terms of constants that resemble the Dobrushin contraction coefficient, and are explicitly computable. On the basis of these inequalities, we obtain bounds on the deviation of a diffraction pattern created by random scatterers located on a general discrete point set in the Euclidean space, restricted to a finite volume. Here we also allow for thermal dislocations of the scatterers around their equilibrium positions. Extending recent results for independent scatterers, we give a universal upper bound on the probability of a deviation of the random sc...
Adaptive sampling algorithm for detection of superpoints
Institute of Scientific and Technical Information of China (English)
CHENG Guang; GONG Jian; DING Wei; WU Hua; QIANG ShiQiang
2008-01-01
The superpoints are the sources (or the destinations) that connect with a great deal of destinations (or sources) during a measurement time interval, so detecting the superpoints in real time is very important to network security and management. Previous algorithms are not able to control the usage of the memory and to deliver the desired accuracy, so it is hard to detect the superpoints on a high speed link in real time. In this paper, we propose an adaptive sampling algorithm to detect the superpoints in real time, which uses a flow sample and hold module to reduce the detection of the non-superpoints and to improve the measurement accuracy of the superpoints. We also design a data stream structure to maintain the flow records, which compensates for the flow Hash collisions statistically. An adaptive process based on different sampling probabilities is used to maintain the recorded IP addresses in the limited memory. This algorithm is compared with the other algorithms by analyzing the real network trace data. Experiment results and mathematic analysis show that this algorithm has the advantages of both the limited memory requirement and high measurement accuracy.
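The "sample and hold" idea the module builds on can be sketched in a few lines. This is our illustration of the classic flow-measurement technique (Estan and Varghese), not the paper's adaptive algorithm; in superpoint detection one would track distinct peers per held source rather than raw packet counts.

```python
# Flow sample-and-hold: packets of unseen flows are sampled with probability p,
# but once a flow is held, every subsequent packet updates its counter.
import random

def sample_and_hold(packets, p=0.01, seed=0):
    rnd = random.Random(seed)
    held = {}                          # flow key -> packet count
    for key in packets:                # e.g. key = (src_ip, dst_ip)
        if key in held:
            held[key] += 1             # always count flows already held
        elif rnd.random() < p:
            held[key] = 1              # start holding a newly sampled flow
    return held

packets = [("10.0.0.1", f"10.0.1.{i % 200}") for i in range(100_000)]
counts = sample_and_hold(packets)      # heavy sources dominate the held set
```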
Dynamical predictive power of the generalized Gibbs ensemble revealed in a second quench.
Zhang, J M; Cui, F C; Hu, Jiangping
2012-04-01
We show that a quenched and relaxed completely integrable system is hardly distinguishable from the corresponding generalized Gibbs ensemble in a dynamical sense. To be specific, the response of the quenched and relaxed system to a second quench can be accurately reproduced by using the generalized Gibbs ensemble as a substitute. Remarkably, as demonstrated with the transverse Ising model and the hard-core bosons in one dimension, not only the steady values but even the transient, relaxation dynamics of the physical variables can be accurately reproduced by using the generalized Gibbs ensemble as a pseudoinitial state. This result is an important complement to the previously established result that a quenched and relaxed system is hardly distinguishable from the generalized Gibbs ensemble in a static sense. The relevance of the generalized Gibbs ensemble in the nonequilibrium dynamics of completely integrable systems is then greatly strengthened.
Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv
Directory of Open Access Journals (Sweden)
Simon N. Wood
2016-12-01
The BUGS language offers a very flexible way of specifying complex statistical models for the purposes of Gibbs sampling, while its JAGS variant offers very convenient R integration via the rjags package. However, including smoothers in JAGS models can involve some quite tedious coding, especially for multivariate or adaptive smoothers. Further, if an additive smooth structure is required then some care is needed in order to centre smooths appropriately and to find appropriate starting values. The R package mgcv implements a wide range of smoothers, all in a manner appropriate for inclusion in JAGS code, and automates centring and other smooth setup tasks. The purpose of this note is to describe an interface between mgcv and JAGS, based around an R function, jagam, which takes a generalized additive model (GAM) as specified in mgcv and automatically generates the JAGS model code and data required for inference about the model via Gibbs sampling. Although the auto-generated JAGS code can be run as is, the expectation is that the user would wish to modify it in order to add complex stochastic model components readily specified in JAGS. A simple interface is also provided for visualisation and further inference about the estimated smooth components using standard mgcv functionality. The methods described here will be unnecessarily inefficient if all that is required is fully Bayesian inference about a standard GAM, rather than the full flexibility of JAGS; in that case the BayesX package would be more efficient.
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
Time-dependent generalized Gibbs ensembles in open quantum systems
Lange, Florian; Lenarčič, Zala; Rosch, Achim
2018-04-01
Generalized Gibbs ensembles have been used as powerful tools to describe the steady state of integrable many-particle quantum systems after a sudden change of the Hamiltonian. Here, we demonstrate numerically that they can be used for a much broader class of problems. We consider integrable systems in the presence of weak perturbations which both break integrability and drive the system to a state far from equilibrium. Under these conditions, we show that the steady state and the time evolution on long timescales can be accurately described by a (truncated) generalized Gibbs ensemble with time-dependent Lagrange parameters, determined from simple rate equations. We compare the numerically exact time evolution of density matrices for small systems with a theory based on block-diagonal density matrices (diagonal ensemble) and a time-dependent generalized Gibbs ensemble containing only a small number of approximately conserved quantities, using the one-dimensional Heisenberg model with perturbations described by Lindblad operators as an example.
Determination of Gibbs energies of formation in aqueous solution using chemical engineering tools.
Toure, Oumar; Dussap, Claude-Gilles
2016-08-01
Standard Gibbs energies of formation are of primary importance in the field of biothermodynamics. In the absence of directly measured values, thermodynamic calculations are required to determine the missing data. For several biochemical species, this study shows that knowledge of the standard Gibbs energy of formation of the pure compounds (in the gaseous, solid or liquid states) makes it possible to determine the corresponding standard Gibbs energies of formation in aqueous solution. To do so, using chemical engineering tools (thermodynamic tables and a model for predicting activity coefficients, solvation Gibbs energies and pKa data), it becomes possible to determine the partial chemical potential of neutral and charged components in real metabolic conditions, even in concentrated mixtures.
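The underlying thermodynamic cycle can be sketched as follows; the activity-coefficient term is written generically, since the exact standard-state conversion terms depend on the conventions adopted:

```latex
% Schematic cycle assumed here: pure-compound formation energy plus a
% solvation contribution, corrected for non-ideality of the solution
% (the conventions in use determine the exact correction terms):
\Delta_f G^\circ(\mathrm{aq}) \;\approx\; \Delta_f G^\circ(\mathrm{pure})
  \;+\; \Delta_{\mathrm{solv}} G^\circ \;+\; RT\,\ln\gamma
```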
Consistent estimation of Gibbs energy using component contributions.
Directory of Open Access Journals (Sweden)
Elad Noor
Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.
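A toy sketch of the priority rule described above; all numbers and group decompositions are hypothetical, purely to show the control flow:

```python
# Component-contribution idea: use the more accurate reactant contribution
# (RC) when a compound has one, else fall back to summing group
# contributions (GC). All values below are hypothetical placeholders.
RC = {"glucose": -426.7, "atp": -2292.6}            # kJ/mol, hypothetical
GC = {"phosphate": -1056.0, "hydroxyl": -140.0}     # kJ/mol per group, hypothetical
GROUPS = {"glucose6p": ["phosphate", "hydroxyl"]}   # hypothetical decomposition

def formation_energy(compound):
    if compound in RC:                               # priority to RC values
        return RC[compound]
    return sum(GC[g] for g in GROUPS[compound])      # GC fallback

def reaction_energy(stoich):                         # stoich: compound -> coefficient
    return sum(nu * formation_energy(c) for c, nu in stoich.items())

print(reaction_energy({"glucose": -1, "glucose6p": +1}))
```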
Generalization of Gibbs Entropy and Thermodynamic Relation
Park, Jun Chul
2010-01-01
In this paper, we extend Gibbs's approach to quasi-equilibrium thermodynamic processes and calculate the microscopic expression of entropy for general non-equilibrium thermodynamic processes. We also analyze the formal structure of the thermodynamic relation in non-equilibrium thermodynamic processes.
Gibbs Energy Modeling of Digenite and Adjacent Solid-State Phases
Waldner, Peter
2017-08-01
All sulfur potential and phase diagram data available in the literature for solid-state equilibria related to digenite have been assessed. A thorough thermodynamic analysis at 1 bar total pressure has been performed. A three-sublattice approach has been developed to model the Gibbs energy of digenite as a function of composition and temperature using the compound energy formalism. The Gibbs energies of the adjacent solid-state phases covellite and high-temperature chalcocite are also modeled, treating both sulfides as stoichiometric compounds. The novel model for digenite offers a new interpretation of experimental data, may contribute from a thermodynamic point of view to the elucidation of the role of copper species within the crystal structure, and allows extrapolation to composition regimes richer in copper than stoichiometric digenite Cu2S. Preliminary predictions for the ternary Cu-Fe-S system at 1273 K (1000 °C), using the Gibbs energy model of digenite to calculate its iron solubility, are promising.
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of functional integration methods in the equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.
Gibbs free energy of formation of liquid lanthanide-bismuth alloys
International Nuclear Information System (INIS)
Sheng Jiawei; Yamana, Hajimu; Moriyama, Hirotake
2001-01-01
The linear free energy relationship developed by Sverjensky and Molling provides a way to predict the Gibbs free energies of formation of liquid Ln-Bi alloys from the known thermodynamic properties of aqueous trivalent lanthanides (Ln3+). The Ln-Bi alloys are divided into two isostructural families, named LnBi2 (Ln = La, Ce, Pr, Nd and Pm) and LnBi (Ln = Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb). The calculated Gibbs free energy values agree well with the experimental data.
A software sampling frequency adaptive algorithm for reducing spectral leakage
Institute of Scientific and Technical Information of China (English)
PAN Li-dong; WANG Fei
2006-01-01
Spectral leakage caused by synchronization error in a nonsynchronous sampling system is an important cause of reduced accuracy in spectral analysis and harmonic measurement. This paper presents a software sampling-frequency adaptive algorithm that obtains the actual signal frequency more accurately, then adjusts the sampling interval based on the frequency calculated by the algorithm and modifies the sampling frequency adaptively. It can reduce the synchronization error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power system signals whose frequency changes slowly. Simulations show that the algorithm has high precision, and it can be a practical method in power system harmonic analysis since it is easily implemented.
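One plausible realization of the idea (the paper's exact frequency estimator is not specified here; this sketch estimates the frequency from an FFT peak and then makes the window span a whole number of periods):

```python
import numpy as np

# Synchronous-sampling sketch: estimate the actual signal frequency, then
# choose the sampling interval so that N samples span an integer number of
# periods, which suppresses spectral leakage.
def refine_sampling_interval(samples, dt):
    n = len(samples)
    spec = np.abs(np.fft.rfft(samples * np.hanning(n), 4 * n))
    freqs = np.fft.rfftfreq(4 * n, dt)
    f_est = freqs[np.argmax(spec[1:]) + 1]      # skip the DC bin
    periods = max(1, round(n * dt * f_est))     # whole periods in the window
    return periods / (f_est * n)                # new interval: N*dt' = periods/f

f_true, dt, n = 50.2, 1e-3, 512                 # slightly off-nominal 50 Hz signal
t = np.arange(n) * dt
x = np.sin(2 * np.pi * f_true * t)
print(refine_sampling_interval(x, dt) * n * f_true)  # ≈ integer period count
```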
Boltzmann, Gibbs and Darwin-Fowler approaches in parastatistics
International Nuclear Information System (INIS)
Ponczek, R.L.; Yan, C.C.
1976-01-01
Derivations of the equilibrium values of occupation numbers are made using three approaches, namely the Boltzmann 'elementary' one, the ensemble method of Gibbs, and that of Darwin and Fowler.
Continuous spin mean-field models : Limiting kernels and Gibbs properties of local transforms
Kulske, Christof; Opoku, Alex A.
2008-01-01
We extend the notion of Gibbsianness for mean-field systems to the setup of general (possibly continuous) local state spaces. We investigate the Gibbs properties of systems arising from an initial mean-field Gibbs measure by application of given local transition kernels. This generalizes previous
Numerical implementation and oceanographic application of the Gibbs potential of ice
Directory of Open Access Journals (Sweden)
R. Feistel
2005-01-01
The 2004 Gibbs thermodynamic potential function of naturally abundant water ice is based on much more experimental data than its predecessors, is therefore significantly more accurate and reliable, and for the first time describes the entire temperature and pressure range of existence of this ice phase. It is expressed in the ITS-90 temperature scale and is consistent with the current scientific pure water standard, IAPWS-95, and the 2003 Gibbs potential of seawater. The combination of these formulations provides sublimation pressures, freezing points, and sea ice properties covering the parameter ranges of oceanographic interest. This paper provides source code examples in Visual Basic, Fortran and C++ for the computation of the Gibbs function of ice and its partial derivatives. It reports the most important related thermodynamic equations for ice and sea ice properties.
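The pattern of deriving properties from such a potential looks generically like the sketch below, with a toy Gibbs function standing in for the IAPWS ice formulation (real implementations use analytic, not numerical, derivatives):

```python
# Properties from a Gibbs potential g(T, p) by partial differentiation:
#   specific entropy  s = -dg/dT,  specific volume  v = dg/dp.
# g_toy is an arbitrary smooth test function, NOT the ice formulation.
def g_toy(T, p):
    return -0.1 * T**2 + 1e-6 * p * T - 1e-9 * p**2

def derivative(f, x, h=1e-3):
    return (f(x + h) - f(x - h)) / (2 * h)      # central difference

T, p = 270.0, 1.0e5                             # K, Pa
s = -derivative(lambda T_: g_toy(T_, p), T)
v = derivative(lambda p_: g_toy(T, p_), p, h=10.0)
print(f"s = {s:.4f}, v = {v:.3e}, rho = {1/v:.1f}")
```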
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
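A compact sketch of ISTA with one common form of p-thresholding operator (the paper's exact operator and parameter choices may differ; the toy problem below is a generic sparse recovery, not MRI data):

```python
import numpy as np

# One common p-thresholding operator for l_p regularization:
#   T_p(z) = sign(z) * max(|z| - lam * |z|**(p-1), 0)
def p_threshold(z, lam, p, eps=1e-12):
    mag = np.abs(z)
    return np.sign(z) * np.maximum(mag - lam * (mag + eps) ** (p - 1), 0.0)

def ista(A, y, lam=0.05, p=0.7, iters=300):
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = p_threshold(x + A.T @ (y - A @ x) / L, lam / L, p)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / 8.0
x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -0.8, 0.6]
x_hat = ista(A, A @ x_true)                     # should approximately recover
print(np.round(x_hat[[5, 40, 90]], 2))          # the three spikes
```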
Chang, Chein-I
2017-01-01
This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.
La Iglesia, A.
1989-01-01
The effect of grinding on the crystallinity, particle size and solubility of two samples of kaolinite was studied. The standard Gibbs free energies of formation of the different ground samples were calculated from solubility measurements, and show a direct relationship between Gibbs free energy and the particle size-crystallinity variation. Values of -3752.2 and -3776.4 kJ/mol were determined for ΔG°f(am) and ΔG°f(crys) of kaolinite, respectively. A new th...
An Improved Nested Sampling Algorithm for Model Selection and Assessment
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
Multimodel strategies are a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from low-likelihood to high-likelihood regions, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
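The core NSE loop can be sketched on a one-dimensional toy problem (uniform prior on [-5, 5], standard-normal likelihood, so the evidence is close to 1/10); a short Metropolis walk above the current threshold plays the role of the local sampling procedure that DREAMzs would replace:

```python
import math, random

def loglike(x):
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log1p(math.exp(min(a, b) - m))

def constrained_sample(x0, l_min, steps=20, scale=0.5):
    x = x0
    for _ in range(steps):
        y = x + random.gauss(0, scale)
        if -5 <= y <= 5 and loglike(y) > l_min:   # hard likelihood constraint
            x = y
    return x

def nested_sampling(n_live=100, n_iter=600):
    live = [random.uniform(-5, 5) for _ in range(n_live)]
    log_z = -math.inf
    log_w = math.log(1 - math.exp(-1 / n_live))   # width of the first shell
    for _ in range(n_iter):
        worst = min(range(n_live), key=lambda j: loglike(live[j]))
        log_z = logaddexp(log_z, loglike(live[worst]) + log_w)
        live[worst] = constrained_sample(random.choice(live), loglike(live[worst]))
        log_w -= 1.0 / n_live                     # prior volume shrinks by e^{-1/n}
    return log_z

random.seed(0)
print(nested_sampling(), math.log(0.1))           # both ≈ -2.3
```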
Illustrating Enzyme Inhibition Using Gibbs Energy Profiles
Bearne, Stephen L.
2012-01-01
Gibbs energy profiles have great utility as teaching and learning tools because they present students with a visual representation of the energy changes that occur during enzyme catalysis. Unfortunately, most textbooks divorce discussions of traditional kinetic topics, such as enzyme inhibition, from discussions of these same topics in terms of…
On P-Adic Quasi Gibbs Measures for Q + 1-State Potts Model on the Cayley Tree
International Nuclear Information System (INIS)
Mukhamedov, Farrukh
2010-06-01
In the present paper we introduce a new class of p-adic measures associated with the q+1-state Potts model, called p-adic quasi Gibbs measures, which are totally different from the p-adic Gibbs measures. We establish the existence of p-adic quasi Gibbs measures for the model on a Cayley tree. If q is divisible by p, then we prove the occurrence of a strong phase transition. If q and p are relatively prime, then there is a quasi phase transition. These results are totally different from the results of [F.M. Mukhamedov, U.A. Rozikov, Indag. Math. N.S. 15 (2005) 85-100], since q is divisible by p, which means that q+1 is not divisible by p, so according to the main result of the mentioned paper, there is a unique and bounded p-adic Gibbs measure (different from the p-adic quasi Gibbs measure).
Oxidation potentials, Gibbs energies, enthalpies and entropies of actinide ions in aqueous solutions
International Nuclear Information System (INIS)
1977-01-01
The values of the Gibbs energy, enthalpy, and entropy of different actinide ions, the thermodynamic characteristics of the hydration processes of these ions, and the presently known ionization potentials of the actinides are given. The enthalpy and entropy components of the oxidation potentials of the actinide elements are considered. The curves of the dependence of the Gibbs energy of ion formation on the atomic number of the element and the Frost diagrams are analyzed. The diagram proposed by Frost represents the graphical dependence of the Gibbs energy of hydrated ions on the degree of oxidation of the element. Using the Frost diagram it is easy to establish whether a given ion is stable against disproportionation.
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
Energy Technology Data Exchange (ETDEWEB)
Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail that estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
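For context, the classic fixed-weight version of the second task is the Efraimidis-Spirakis "A-Res" scheme sketched below; the report's contribution is handling weights that change dynamically, which this sketch does not do:

```python
import heapq, random

# A-Res weighted reservoir sampling for a stream with fixed per-item weights:
# each item gets key u**(1/w); the k largest keys form a weighted sample.
def weighted_reservoir(stream, k):
    heap = []                                   # min-heap of (key, item)
    for item, weight in stream:
        key = random.random() ** (1.0 / weight) # larger weight -> larger key
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

random.seed(42)
stream = [(f"item{i}", 1.0 + (i % 5)) for i in range(1000)]
print(weighted_reservoir(stream, 5))
```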
Directory of Open Access Journals (Sweden)
R. Feistel
2005-01-01
The 2003 Gibbs thermodynamic potential function represents a very accurate, compact, consistent and comprehensive formulation of equilibrium properties of seawater. It is expressed in the International Temperature Scale ITS-90 and is fully consistent with the current scientific pure water standard, IAPWS-95. Source code examples in FORTRAN, C++ and Visual Basic are presented for the numerical implementation of the potential function and its partial derivatives, as well as for potential temperature. A collection of thermodynamic formulas and relations is given for possible applications in oceanography, from density and chemical potential over entropy and potential density to mixing heat and entropy production. For colligative properties like vapour pressure, freezing points, and for a Gibbs potential of sea ice, the equations relating the Gibbs function of seawater to those of vapour and ice are presented.
The variance quadtree algorithm: use for spatial sampling design
Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.
2007-01-01
Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we propose the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary
An efficient estimator for Gibbs random fields
Czech Academy of Sciences Publication Activity Database
Janžura, Martin
2014-01-01
Roč. 50, č. 6 (2014), s. 883-895 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : Gibbs random field * efficient estimator * empirical estimator Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf
Ahmad, Mohd Ali Khameini; Liao, Lingmin; Saburov, Mansoor
2018-06-01
We study the set of p-adic Gibbs measures of the q-state Potts model on the Cayley tree of order three. We prove the vastness of the set of the periodic p-adic Gibbs measures for such a model by showing the chaotic behavior of the corresponding Potts-Bethe mapping over Q_p for the prime numbers p ≡ 1 (mod 3). In fact, for 0 < |θ−1|_p < |q|_p^2 < 1, where θ = exp_p(J) and J is a coupling constant, there exists a subsystem that is isometrically conjugate to the full shift on three symbols. Meanwhile, for 0 < |q|_p^2 ≤ |θ−1|_p < |q|_p < 1, there exists a subsystem that is isometrically conjugate to a subshift of finite type on r symbols, where r ≥ 4. However, these subshifts on r symbols are all topologically conjugate to the full shift on three symbols. The p-adic Gibbs measures of the same model for the prime numbers p = 2, 3 and the corresponding Potts-Bethe mapping are also discussed. On the other hand, for 0 < |θ−1|_p < |q|_p < 1, we remark that the Potts-Bethe mapping is not chaotic when p = 3 and p ≡ 2 (mod 3), and we could not conclude the vastness of the set of the periodic p-adic Gibbs measures. In a forthcoming paper with the same title, we will treat the case 0 < |q|_p ≤ |θ−1|_p < 1 for all prime numbers p.
Modeling adsorption of cationic surfactants at air/water interface without using the Gibbs equation.
Phan, Chi M; Le, Thu N; Nguyen, Cuong V; Yusa, Shin-ichi
2013-04-16
The Gibbs adsorption equation has been indispensable in predicting the surfactant adsorption at the interfaces, with many applications in industrial and natural processes. This study uses a new theoretical framework to model surfactant adsorption at the air/water interface without the Gibbs equation. The model was applied to two surfactants, C14TAB and C16TAB, to determine the maximum surface excesses. The obtained values demonstrated a fundamental change, which was verified by simulations, in the molecular arrangement at the interface. The new insights, in combination with recent discoveries in the field, expose the limitations of applying the Gibbs adsorption equation to cationic surfactants at the air/water interface.
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.
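The HMC component can be sketched in isolation as follows, with the analytic gradient of a toy target standing in for the SEM gradient estimate:

```python
import numpy as np

# One HMC step: momentum refresh, a leapfrog trajectory, accept/reject
# on the change in the Hamiltonian H = -log p(x) + |p|^2 / 2.
def hmc_step(x, logp, grad_logp, eps=0.1, n_leap=20):
    p = np.random.standard_normal(x.shape)        # momentum refresh
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(x_new)         # half kick
    for _ in range(n_leap):
        x_new += eps * p_new                      # drift
        p_new += eps * grad_logp(x_new)           # full kick
    p_new -= 0.5 * eps * grad_logp(x_new)         # undo the extra half kick
    h_old = -logp(x) + 0.5 * p @ p
    h_new = -logp(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(np.random.rand()) < h_old - h_new else x

logp = lambda x: -0.5 * x @ x                     # standard Gaussian target
grad = lambda x: -x
x = np.zeros(3)
for _ in range(100):
    x = hmc_step(x, logp, grad)
print(x)
```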
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported on the problem of generating Poisson disks on surfaces, due to the complicated nature of surfaces. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
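A serial illustration of the priority rule (a real implementation processes candidates in parallel and, on a surface, works with geodesic rather than Euclidean distances):

```python
import random

# Every candidate gets a random, unique priority; a candidate is accepted
# iff no accepted candidate of higher priority lies within distance r.
def priority_poisson_disk(n_candidates, r, seed=0):
    rng = random.Random(seed)
    cands = [(rng.random(), rng.random(), rng.random()) for _ in range(n_candidates)]
    cands.sort(key=lambda c: c[2], reverse=True)     # high priority first
    accepted = []
    for x, y, _ in cands:
        if all((x - ax) ** 2 + (y - ay) ** 2 >= r * r for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

print(len(priority_poisson_disk(5000, r=0.05)))      # number of disks placed
```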
The MaxEnt extension of a quantum Gibbs family, convex geometry and geodesics
International Nuclear Information System (INIS)
Weis, Stephan
2015-01-01
We discuss methods to analyze a quantum Gibbs family in the ultra-cold regime where the norm closure of the Gibbs family fails due to discontinuities of the maximum-entropy inference. The current discussion of maximum-entropy inference and irreducible correlation in the area of quantum phase transitions is a major motivation for this research. We extend a representation of the irreducible correlation from finite temperatures to absolute zero.
International Nuclear Information System (INIS)
Genderen, A.C.G. van; Weijden, C.H. van der
1984-01-01
For a group of minerals containing a common anion there exists a linear relationship between two parameters called ΔO and ΔF. ΔO is defined as the difference between the Gibbs energy of formation of a solid oxide and the Gibbs energy of formation of its aqueous cation, while ΔF is defined as the Gibbs energy of reaction of the formation of a mineral from the constituting oxide(s) and the acid. Using the Gibbs energies of formation of a number of known minerals, the corresponding ΔO's and ΔF's were calculated, and with the resulting regression equation it is possible to predict values for the Gibbs energies of formation of other minerals containing the same anion. This was done for 29 minerals containing the uranyl ion together with phosphate, vanadate, arsenate or carbonate.
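The regression step itself is straightforward; the (ΔO, ΔF) pairs below are hypothetical placeholders, purely to show the procedure:

```python
import numpy as np

# Fit the linear ΔO-ΔF relation for a family of minerals sharing an anion,
# then predict ΔF (and hence the formation Gibbs energy) for a new member.
delta_O = np.array([-120.0, -95.0, -60.0, -30.0])   # kJ/mol, hypothetical
delta_F = np.array([-48.0, -39.5, -27.0, -17.2])    # kJ/mol, hypothetical
slope, intercept = np.polyfit(delta_O, delta_F, 1)
print(f"dF = {slope:.3f} * dO + {intercept:.2f}")
print("predicted dF for dO = -80:", slope * -80.0 + intercept)
```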
International Nuclear Information System (INIS)
Awasthi, Neha; Ritschel, Thomas; Lipowsky, Reinhard; Knecht, Volker
2013-01-01
Highlights: • ΔG and K_eq for NO2 dimerization and NH3 synthesis calculated via ab initio methods. • Vis-à-vis experiments, W1 and CCSD(T) are accurate and G3B3 also does quite well. • CBS-APNO most accurate for the NH3 reaction but shows limitations in modeling NO2. • Temperature dependence of ΔG and K_eq is calculated for the NH3 reaction. • Good agreement of calculated K_eq with experiments and the van't Hoff approximation. -- Abstract: Standard quantum chemical methods are used for accurate calculation of thermochemical properties such as enthalpies of formation, entropies and Gibbs energies of formation. Equilibrium reactions are widely investigated and experimental measurements often lead to a range of reaction Gibbs energies and equilibrium constants. It is useful to calculate these equilibrium properties from quantum chemical methods in order to address the experimental differences. Furthermore, most standard calculation methods differ in accuracy and feasibility of the system size. Hence, a systematic comparison of equilibrium properties calculated with different numerical algorithms would provide a useful reference. We select two well-known gas phase equilibrium reactions with small molecules: covalent dimer formation of NO2 (2NO2 ⇌ N2O4) and the synthesis of NH3 (N2 + 3H2 ⇌ 2NH3). We test four quantum chemical methods, denoted G3B3, CBS-APNO, W1 and CCSD(T) with aug-cc-pVXZ basis sets (X = 2, 3, and 4), to obtain thermochemical data for NO2, N2O4, and NH3. The calculated standard formation Gibbs energies Δ_fG° are used to calculate standard reaction Gibbs energies Δ_rG° and standard equilibrium constants K_eq for the two reactions. Standard formation enthalpies Δ_fH° are calculated in a more reliable way using high-level methods such as W1 and CCSD(T). Standard entropies S° for the molecules are calculated well within the range of experiments for all methods, however, the values of standard formation
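The conversion from reaction Gibbs energy to equilibrium constant used in such comparisons is the standard relation K_eq = exp(−Δ_rG°/RT), e.g.:

```python
import math

R = 8.314462618e-3        # gas constant in kJ/(mol K)

def k_eq(delta_r_g, T=298.15):
    """Equilibrium constant from a standard reaction Gibbs energy in kJ/mol."""
    return math.exp(-delta_r_g / (R * T))

print(k_eq(-4.73))        # hypothetical ΔrG° of -4.73 kJ/mol -> K_eq ≈ 6.7
```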
Existence and uniqueness of Gibbs states for a statistical mechanical polyacetylene model
International Nuclear Information System (INIS)
Park, Y.M.
1987-01-01
One-dimensional polyacetylene is studied as a model of statistical mechanics. In a semiclassical approximation the system is equivalent to a quantum XY model interacting with unbounded classical spins in one-dimensional lattice space Z. By establishing uniform estimates, an infinite-volume-limit Hilbert space, a strongly continuous time evolution group of unitary operators, and an invariant vector are constructed. Moreover, it is proven that any infinite-limit state satisfies Gibbs conditions. Finally, a modification of Araki's relative entropy method is used to establish the uniqueness of Gibbs states
Fast sampling algorithm for the simulation of photon Compton scattering
International Nuclear Information System (INIS)
Brusa, D.; Salvat, F.
1996-01-01
A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
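The rejection component has the familiar shape sketched below (with a toy target density, not Ribberfors' double-differential cross section):

```python
import math, random

# Rejection sampling: draw from an easy envelope g and accept with
# probability f(x) / (c * g(x)), where f <= c * g everywhere.
def rejection_sample(f, g_sample, g_pdf, c):
    while True:
        x = g_sample()
        if random.random() < f(x) / (c * g_pdf(x)):
            return x

f = lambda x: math.sin(x) ** 2 * 2 / math.pi      # toy target density on [0, pi]
g_sample = lambda: random.uniform(0, math.pi)     # uniform envelope
g_pdf = lambda x: 1 / math.pi
c = 2.0                                           # f <= c * g holds everywhere

random.seed(3)
print([round(rejection_sample(f, g_sample, g_pdf, c), 3) for _ in range(5)])
```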
Algorithm For Hypersonic Flow In Chemical Equilibrium
Palmer, Grant
1989-01-01
Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
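The chemical step can be sketched with a toy two-species minimization; standard-state potentials are tabulated 298 K values used as stand-ins, and a flow solver would perform such a minimization at every grid node with many more species:

```python
import numpy as np
from scipy.optimize import minimize

# Equilibrium composition of N2O4 <-> 2 NO2 at fixed T and 1 bar by
# minimizing the total Gibbs energy subject to atom conservation
# (the RT ln(p/p°) term vanishes at 1 bar and is omitted).
RT = 8.314 * 298.15 / 1000.0                      # kJ/mol
mu0 = np.array([51.3, 97.9])                      # ΔfG° of [NO2, N2O4], kJ/mol

def gibbs(n):
    n = np.maximum(n, 1e-12)                      # keep logs finite
    return np.dot(n, mu0 + RT * np.log(n / n.sum()))

# starting from 1 mol N2O4, nitrogen balance requires n_NO2 + 2*n_N2O4 = 2
cons = {"type": "eq", "fun": lambda n: n[0] + 2 * n[1] - 2.0}
res = minimize(gibbs, x0=[0.5, 0.75], bounds=[(0, None)] * 2, constraints=cons)
print(np.round(res.x, 3))                         # equilibrium moles [NO2, N2O4]
```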
Classical boson sampling algorithms with superior performance to near-term experiments
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
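Metropolised independence sampling itself is a generic MCMC scheme; a minimal version on a toy discrete target (not the boson sampling distribution) looks like this:

```python
import random

# Proposals are drawn independently from q and accepted with probability
# min(1, [f(x') q(x)] / [f(x) q(x')]).
def mis(f, q_sample, q_pdf, n_steps, x0):
    x, out = x0, []
    for _ in range(n_steps):
        x_prop = q_sample()
        ratio = (f(x_prop) * q_pdf(x)) / (f(x) * q_pdf(x_prop))
        if random.random() < min(1.0, ratio):
            x = x_prop
        out.append(x)
    return out

f = lambda k: [0.1, 0.2, 0.4, 0.3][k]             # toy unnormalized target
q_sample = lambda: random.randrange(4)            # uniform proposal
q_pdf = lambda k: 0.25

random.seed(7)
chain = mis(f, q_sample, q_pdf, 10000, 0)
print([round(chain.count(k) / len(chain), 2) for k in range(4)])
```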
Exploring Fourier Series and Gibbs Phenomenon Using Mathematica
Ghosh, Jonaki B.
2011-01-01
This article describes a laboratory module on Fourier series and Gibbs phenomenon which was undertaken by 32 Year 12 students. It shows how the use of CAS played the role of an "amplifier" by making higher level mathematical concepts accessible to students of year 12. Using Mathematica students were able to visualise Fourier series of…
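The same demonstration is easy to reproduce outside a CAS: the overshoot of the partial Fourier sums of a square wave stalls near 9% of the jump, no matter how many terms are kept.

```python
import numpy as np

# Partial Fourier sums of a unit square wave (odd harmonics only);
# the maximum overshoot converges to about 0.0895 (Gibbs phenomenon).
def square_wave_partial_sum(x, n_terms):
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        s += 4 / (np.pi * k) * np.sin(k * x)
    return s

x = np.linspace(0.001, np.pi - 0.001, 20001)
for n in (10, 50, 250):
    overshoot = square_wave_partial_sum(x, n).max() - 1.0
    print(f"{n:4d} terms: overshoot = {overshoot:.4f}")
```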
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility of the algorithm have been promising.
Directory of Open Access Journals (Sweden)
DARIUSZ Piwczynski
2013-03-01
The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001, kept in 15 flocks from the Pomorze and Kujawy region. Fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of flock, year of birth, age of dam, and the flock × year-of-birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Estimates of heritability of fertility, depending on the model, equalled 0.067 and 0.104, whereas the corresponding estimates of repeatability equalled 0.076 and 0.139. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction method) using linear and threshold models. The animal breeding value rankings obtained for the same trait with the linear and threshold models were strongly correlated with each other (rs = 0.972). Negative genetic trends in fertility (0.01-0.08% per year) were found.
On the Tsallis Entropy for Gibbs Random Fields
Czech Academy of Sciences Publication Activity Database
Janžura, Martin
2014-01-01
Roč. 21, č. 33 (2014), s. 59-69 ISSN 1212-074X R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z1075907 Keywords : Tsallis entropy * Gibbs random fields * phase transitions * Tsallis entropy rate Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/SI/janzura-0441885.pdf
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and of important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
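For reference, the equation in question and its commonly used integrated form are:

```latex
% Gibbs-Helmholtz equation, and the integrated form obtained by assuming
% a temperature-independent \Delta H between T_1 and T_2:
\left(\frac{\partial (G/T)}{\partial T}\right)_{p} = -\frac{H}{T^{2}},
\qquad
\frac{\Delta G(T_2)}{T_2} - \frac{\Delta G(T_1)}{T_1}
  = \Delta H \left(\frac{1}{T_2} - \frac{1}{T_1}\right)
```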
Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms
Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence
2014-01-01
Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores.
First-Year University Chemistry Textbooks' Misrepresentation of Gibbs Energy
Quilez, Juan
2012-01-01
This study analyzes the misrepresentation of Gibbs energy by college chemistry textbooks. The article reports the way first-year university chemistry textbooks handle the concepts of spontaneity and equilibrium. Problems with terminology are found; confusion arises in the meaning given to [delta]G, [delta][subscript r]G, [delta]G[degrees], and…
Virial theorem and Gibbs thermodynamic potential for Coulomb systems
International Nuclear Information System (INIS)
Bobrov, V. B.; Trigger, S. A.
2014-01-01
Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction
Use of a Radon Stripping Algorithm for Retrospective Assessment of Air Filter Samples
International Nuclear Information System (INIS)
Hayes, Robert
2009-01-01
An evaluation of a large number of air sample filters was undertaken using a commercial alpha and beta spectroscopy system employing a passive implanted planar silicon (PIPS) detector. Samples were measured only after air flow through the filters had ceased. A commercial radon stripping algorithm was used to discriminate anthropogenic alpha and beta activity on the filters from the radon progeny. When uncontaminated air filters were evaluated, the results showed a time-dependent bias in both the average estimates and the measurement dispersion, with the relative bias being small compared to the dispersion. When environmental air sample filters were measured simultaneously with electroplated alpha and beta sources, the radon stripping algorithm showed a number of substantial, unexpected deviations. The current algorithm is therefore not recommended for assay applications, and without appropriate modifications to the curve-fitting algorithm the PIPS detector should be used only for gross counting. As a screening method, the radon stripping algorithm might be expected to detect elevated alpha and beta activities on air sample filters (not due to radon progeny) around the 200 dpm level
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm while the image quality is well preserved. PMID:23424608
A scalable method for parallelizing sampling-based motion planning algorithms
Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.
2012-01-01
This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine.
A clustering algorithm for sample data based on environmental pollution characteristics
Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun
2015-04-01
Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
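A sketch of the described loop, with Euclidean distance standing in for the unspecified similarity function and the outlier-detection branch omitted:

```python
import numpy as np

# Threshold-based assignment around successively chosen unlabelled centres,
# followed by a short k-means-style refinement of the resulting clusters.
def epc_like(X, threshold, refine_iters=5):
    labels = -np.ones(len(X), dtype=int)
    centres = []
    while (labels == -1).any():
        c = X[np.argmax(labels == -1)]             # first unlabelled point
        d = np.linalg.norm(X - c, axis=1)
        labels[(d < threshold) & (labels == -1)] = len(centres)
        centres.append(c)
    for _ in range(refine_iters):                  # k-means-like modification
        centres = [X[labels == k].mean(axis=0) for k in range(len(centres))]
        labels = np.argmin(
            [np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(np.bincount(epc_like(X, threshold=1.5)))     # two clusters of 50
```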
Effective traffic features selection algorithm for cyber-attacks samples
Li, Yihong; Liu, Fangzheng; Du, Zhenyu
2018-05-01
By studying defense schemes against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. Firstly, the algorithm divides the original feature set into an attack-traffic feature set and a background-traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, it evaluates the degree of distinctiveness of each feature vector according to the result; the effective feature vectors are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, and with it the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
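A sketch of the scoring idea, with the silhouette score standing in for the paper's (unspecified) clustering-performance measure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Score each feature by how much the clustering quality degrades when it is
# removed; features causing a large drop are the "distinctive" ones.
def feature_distinctiveness(X, n_clusters=2, seed=0):
    def score(data):
        labels = KMeans(n_clusters, init="k-means++", n_init=10,
                        random_state=seed).fit_predict(data)
        return silhouette_score(data, labels)
    base = score(X)
    return [base - score(np.delete(X, j, axis=1)) for j in range(X.shape[1])]

rng = np.random.default_rng(1)
informative = np.vstack([rng.normal(0, 1, (60, 1)), rng.normal(6, 1, (60, 1))])
noise = rng.normal(0, 1, (120, 2))
X = np.hstack([informative, noise])
print(np.round(feature_distinctiveness(X), 2))    # feature 0 should dominate
```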
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
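The spectral step reduces to computing the eigenvector of the second-smallest Laplacian eigenvalue (the Fiedler vector) and sorting by its components, e.g.:

```python
import numpy as np

# Build the Laplacian of the matrix's adjacency structure and order the
# rows by the Fiedler vector; a scrambled band matrix is recovered below.
def spectral_ordering(A):
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)              # ascending eigenvalues
    return np.argsort(vecs[:, 1])                 # sort by the Fiedler vector

n = 8
band = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
perm = np.random.default_rng(0).permutation(n)
scrambled = band[np.ix_(perm, perm)]
order = spectral_ordering(scrambled)
print(perm[order])                                # monotone up or down
```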
One of Gibbs's ideas that has gone unnoticed (comment on chapter IX of his classic book)
International Nuclear Information System (INIS)
Sukhanov, Alexander D; Rudoi, Yurii G
2006-01-01
We show that contrary to the commonly accepted view, Chapter IX of Gibbs's book [1] contains the prolegomena to a macroscopic statistical theory that is qualitatively different from his own microscopic statistical mechanics. The formulas obtained by Gibbs were the first results in the history of physics related to the theory of fluctuations in any macroparameters, including temperature. (from the history of physics)
Gibbs free energy of formation of lanthanum rhodate by quadrupole mass spectrometer
International Nuclear Information System (INIS)
Prasad, R.; Banerjee, Aparna; Venugopal, V.
2003-01-01
The ternary oxide in the system La-Rh-O is of considerable importance because of its application in catalysis. Phase equilibria in the pseudo-binary system La2O3-Rh2O3 have been investigated by Shevyakov et al. The Gibbs free energy of LaRhO3(s) was determined by Jacob et al. using a solid-state galvanic cell in the temperature range 890 to 1310 K. No other thermodynamic data were available in the literature. Hence it was decided to determine the Gibbs free energy of formation of LaRhO3(s) by an independent technique, viz. a quadrupole mass spectrometer (QMS) coupled with a Knudsen effusion cell, and the results are presented
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Large-scale, high-resolution simulation has refined the spatial description of hydrological behaviour, but this trend is accompanied by growing model complexity and parameter counts, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and search performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
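As a rough illustration of coupling an evolutionary sampler to GLUE, the sketch below uses a hand-rolled differential-evolution loop (one of the three heuristics mentioned) and archives every evaluated parameter set. The Nash-Sutcliffe efficiency as the informal likelihood, the behavioural threshold, and the `simulate` interface are all assumptions, not the paper's exact setup.

```python
import numpy as np

def glue_de(simulate, obs, bounds, n_pop=30, n_gen=50, threshold=0.3, seed=0):
    """GLUE driven by a differential-evolution sampler: evolve parameter sets
    toward high likelihood, archive every evaluated set, then weight the
    behavioural ones (NSE > threshold) to form the GLUE uncertainty bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    def nse(theta):                               # Nash-Sutcliffe efficiency
        sim = simulate(theta)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pop = rng.uniform(lo, hi, (n_pop, len(lo)))
    fit = np.array([nse(p) for p in pop])
    archive = [(p.copy(), f) for p, f in zip(pop, fit)]
    for _ in range(n_gen):
        for i in range(n_pop):                    # DE/rand/1/bin step
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            trial = np.where(rng.random(len(lo)) < 0.9,
                             np.clip(a + 0.8 * (b - c), lo, hi), pop[i])
            f = nse(trial)
            archive.append((trial.copy(), f))
            if f > fit[i]:
                pop[i], fit[i] = trial, f
    behavioural = [(t, f) for t, f in archive if f > threshold]
    thetas = np.array([t for t, _ in behavioural])
    w = np.array([f for _, f in behavioural])
    return thetas, w / w.sum()                    # GLUE likelihood weights
```

The weighted parameter sets can then be propagated through the model to build predictive quantiles, as in a standard GLUE analysis.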
A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling
Tong, Cao; Gong, Haili
2018-03-01
This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. First, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model; a typical test function is fitted to validate the improvement by comparing the results of the PSO-optimized Kriging model with those of the original Kriging model. Second, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The comparison results show that the proposed method is more efficient because it requires only a small number of sample points.
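A small sketch of the first step, with scikit-learn's Gaussian process standing in for the Kriging model: a basic PSO loop searches the RBF length-scale, scoring each particle by the model's log marginal likelihood. The inertia and acceleration constants are generic choices, not those of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def pso_kriging(X, y, n_particles=15, n_iter=30, seed=0):
    """Tune the RBF length-scale of a Kriging (GP) surrogate with a basic
    PSO loop, scoring each particle by the fitted log marginal likelihood."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.1, 10.0, n_particles)     # candidate length-scales
    vel = np.zeros(n_particles)
    def score(ls):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=ls), optimizer=None)
        gp.fit(X, y)
        return gp.log_marginal_likelihood()
    best_p = pos.copy()
    best_s = np.array([score(p) for p in pos])
    g = best_p[best_s.argmax()]                   # global best length-scale
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 1e-3, 1e3)
        s = np.array([score(p) for p in pos])
        improved = s > best_s
        best_p[improved], best_s[improved] = pos[improved], s[improved]
        g = best_p[best_s.argmax()]
    return GaussianProcessRegressor(kernel=RBF(length_scale=g), optimizer=None).fit(X, y)
```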
Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing
Energy Technology Data Exchange (ETDEWEB)
Boardman, Beth Leigh [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-12
The common theme of this dissertation is sampling-based motion planning, with the two key contributions being in the areas of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and for spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision-free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents' positions and velocities. Given that a solution exists, we prove that livelocks and deadlocks will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step; this maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited-range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis
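For readers unfamiliar with the planners recalled above, here is a minimal 2D RRT (the non-optimal ancestor of RRT*), under assumed unit-square bounds and a user-supplied collision checker; the dissertation's GP, FR, Goal Tree, SBCA and GSLB algorithms are not implemented here.

```python
import numpy as np

def rrt(start, goal, is_free, n_samples=2000, step=0.05, goal_tol=0.05, seed=0):
    """Minimal RRT in the unit square: steer from the nearest tree node toward
    uniform random samples, keeping collision-free nodes (is_free(q) -> bool)."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes = [np.asarray(start, float)]
    parent = [-1]
    for _ in range(n_samples):
        q = rng.random(2)                          # uniform random sample
        i = int(np.argmin([np.linalg.norm(q - v) for v in nodes]))
        d = q - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        if not is_free(new):
            continue
        nodes.append(new)
        parent.append(i)
        if np.linalg.norm(new - goal) < goal_tol:  # goal reached: trace path back
            path, j = [], len(nodes) - 1
            while j != -1:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None                                    # no path found in the budget
```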
Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert
2012-08-01
Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. xuan@vt.edu Supplementary data are available at Bioinformatics online.
The Gibbs-Thomson equation for a spherical coherent precipitate with applications to nucleation
International Nuclear Information System (INIS)
Rottman, C.; Voorhees, P.W.; Johnson, W.C.
1988-01-01
The conditions for interfacial thermodynamic equilibrium form the basis for the derivation of a number of basic equations in materials science, including the various forms of the Gibbs-Thomson equation. The equilibrium conditions pertaining to a curved interface in a two-phase fluid system are well known. In contrast, the conditions for thermodynamic equilibrium at a curved interface in nonhydrostatically stressed solids have only recently been examined. These conditions can be much different from those at a fluid interface and, as a result, the Gibbs-Thomson equation appropriate to coherent solids is likely to be considerably different from that for fluids. In this paper, the authors first derive the conditions necessary for thermodynamic equilibrium at the precipitate-matrix interface of a coherent spherical precipitate. The authors' derivation of these equilibrium conditions includes a correction to the equilibrium conditions of Johnson and Alexander for a spherical precipitate in an isotropic matrix. They then use these conditions to derive the dependence of the interfacial precipitate and matrix concentrations on precipitate radius (the Gibbs-Thomson equation) for such a precipitate. In addition, these relationships are then used to calculate the critical radius for the nucleation of a coherent misfitting precipitate
Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit
International Nuclear Information System (INIS)
Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph
2015-01-01
The RISMC approach is developing an advanced set of methodologies and algorithms to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on event-tree and fault-tree methods, the RISMC approach largely employs system simulator codes coupled with stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., the uncertain parameters) in order to estimate stochastic quantities such as the core damage probability. Applied to complex systems such as nuclear power plants, this approach requires a series of computationally expensive simulation runs over a large set of uncertain parameters. These analyses are affected by two issues. First, the space of possible solutions (the issue space, or response surface) can be sampled only very sparsely, and this precludes fully analyzing the impact of uncertainties on the system dynamics. Second, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how adaptive sampling can be performed using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte Carlo. We employ RAVEN to perform these statistical analyses, using analytical cases as well as another RISMC code: RELAP-7.
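The adaptive-sampling idea can be sketched generically: fit a surrogate to the runs so far, then place the next run where the surrogate is both uncertain and close to a risk-significant limit. This is a hedged stand-in using a scikit-learn Gaussian process, not RAVEN's implementation; `run_code`, the unit-hypercube input space, and the scoring rule are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def smart_sampling(run_code, X_init, n_new=20, n_pool=5000, limit=1.0, seed=0):
    """Surrogate-guided sampling sketch: fit a surrogate to existing runs,
    then place each new run where the surrogate is uncertain AND its
    prediction sits near the risk-significant limit surface."""
    rng = np.random.default_rng(seed)
    X = np.array(X_init, float)
    y = np.array([run_code(x) for x in X])
    for _ in range(n_new):
        gp = GaussianProcessRegressor().fit(X, y)
        pool = rng.random((n_pool, X.shape[1]))   # candidates in the unit cube
        mu, sd = gp.predict(pool, return_std=True)
        score = sd / (np.abs(mu - limit) + 1e-6)  # uncertain AND near the limit
        x_next = pool[score.argmax()]
        X = np.vstack([X, x_next])
        y = np.append(y, run_code(x_next))
    return X, y
```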
Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath
2012-01-01
This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals...
LA CASA GIBBS Y EL MONOPOLIO SALITRERO PERUANO: 1876-1878
Directory of Open Access Journals (Sweden)
Manuel Ravest Mora
2008-06-01
Full Text Available This brief study aims to show the willingness of Anthony Gibbs & Sons and its subsidiaries to support Peru's nitrate monopoly project with monetary resources, and the manoeuvres of its directors within the only company that, given its production capacity, could make the project fail: the Antofagasta Nitrate and Railway Company, of which Gibbs was the second-largest shareholder. For the Chilean government, the primary cause of the 1879 war was Peru's attempt to monopolize nitrate production. Bolivia, Peru's secret ally since 1873, collaborated by leasing and selling Peru its nitrate deposits and by imposing on the nitrate exports of the Chilean company in Antofagasta a tax that violated the condition, stipulated in a border treaty, under which Chile had ceded territory; the recovery of that territory by military force began the conflict. From the second half of the twentieth century onwards, this economicist-legalist thesis has been questioned in Chile and abroad, shifting the causal emphasis to the reordering of the raw-materials markets, of which the belligerents were exporters, in the wake of the world crisis of the 1870s.
Estimates of Gibbs free energies of formation of chlorinated aliphatic compounds
Dolfing, Jan; Janssen, Dick B.
1994-01-01
The Gibbs free energy of formation of chlorinated aliphatic compounds was estimated with Mavrovouniotis' group contribution method. The group contribution of chlorine was estimated from the scarce data available on chlorinated aliphatics in the literature, and found to vary somewhat according to the
GibbsCluster: unsupervised clustering and alignment of peptide sequences
DEFF Research Database (Denmark)
Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten
2017-01-01
motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large......-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0....
Determination of standard molar Gibbs energy of formation of Sm6UO12(s)
International Nuclear Information System (INIS)
Sahu, Manjulata; Dash, Smruti
2015-01-01
The standard molar Gibbs energies of formation of Sm6UO12(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as the solid electrolyte. ΔfG°m(T) for Sm6UO12(s) has been calculated using the measured values and the required auxiliary thermodynamic data from the literature. The resulting Gibbs energy expression in the temperature range 899 to 1127 K can be given as: ΔfG°m(Sm6UO12, s, T)/(kJ·mol⁻¹) (±2.3) = -6681 + 1.099 (T/K) (899-1127 K). (author)
Liang, Faming; Jin, Ick-Hoon
2013-01-01
Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary-variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling and thus can be applied to many statistical models for which perfect sampling is unavailable or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.
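The key move of MCMH can be shown in a few lines: the intractable normalizing-constant ratio in the Metropolis-Hastings acceptance probability is replaced by an importance-sampling estimate built from auxiliary draws at the current parameter value. In this hedged sketch, `log_f` is the unnormalized log density, and `draw_aux` and `propose` are user-supplied routines (assumptions, not part of the paper's code).

```python
import numpy as np

def mcmh_step(theta, x_obs, log_f, draw_aux, propose, m=100, rng=None):
    """One MCMH update: estimate Z(theta')/Z(theta) by importance sampling
    with m auxiliary draws at the current theta, then accept/reject as in
    ordinary Metropolis-Hastings (symmetric proposal assumed)."""
    rng = rng or np.random.default_rng()
    theta_new = propose(theta, rng)
    ys = draw_aux(theta, m, rng)          # approximate draws from p(.|theta)
    # Z(theta_new)/Z(theta) = E_theta[ f(y;theta_new)/f(y;theta) ]
    log_w = np.array([log_f(y, theta_new) - log_f(y, theta) for y in ys])
    log_ratio_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
    log_alpha = (log_f(x_obs, theta_new) - log_f(x_obs, theta)) - log_ratio_Z
    if np.log(rng.random()) < min(0.0, log_alpha):
        return theta_new
    return theta
```

For an autologistic model, `draw_aux` would typically be a short Gibbs run at the current parameter value, which is exactly where this scheme avoids the perfect-sampling requirement of the exchange algorithm.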
Gibbs' theorem for open systems with incomplete statistics
International Nuclear Information System (INIS)
Bagci, G.B.
2009-01-01
Gibbs' theorem, which was originally intended for canonical ensembles with complete statistics, has been generalized to open systems with incomplete statistics. As a result of this generalization, it is shown that the stationary equilibrium distribution of inverse power-law form associated with incomplete statistics has maximum entropy even for open systems with energy or matter influx. The renormalized entropy definition given in this paper can also serve as a measure of self-organization in open systems described by incomplete statistics.
Standard Gibbs free energies for transfer of actinyl ions at the aqueous/organic solution interface
International Nuclear Information System (INIS)
Kitatsuji, Yoshihiro; Okugaki, Tomohiko; Kasuno, Megumi; Kubota, Hiroki; Maeda, Kohji; Kimura, Takaumi; Yoshida, Zenko; Kihara, Sorin
2011-01-01
Research highlights: → Standard Gibbs free energies for the ion transfer of tri- to hexavalent actinide ions. → Determination is based on a distribution method combined with ion-transfer voltammetry. → The organic solvents examined are nitrobenzene, DCE, benzonitrile, acetophenone and NPOE. → The Gibbs free energies of U(VI), Np(VI) and Pu(VI) are similar to each other. → The Gibbs free energy of Np(V) is very large compared with those of ordinary monovalent cations. - Abstract: Standard Gibbs free energies of transfer (ΔG°tr) of actinyl ions (AnO2^z+; z = 2 or 1; An = U, Np, or Pu) between an aqueous solution and an organic solution were determined based on a distribution method combined with voltammetry for ion transfer at the interface of two immiscible electrolyte solutions. The organic solutions examined were nitrobenzene, 1,2-dichloroethane, benzonitrile, acetophenone, and 2-nitrophenyl octyl ether. Irrespective of the type of organic solution, the ΔG°tr of UO2^2+, NpO2^2+, and PuO2^2+ were nearly equal to each other and slightly larger than that of Mg^2+. The ΔG°tr of NpO2^+ was extraordinarily large compared with those of ordinary monovalent cations. The dependence of the ΔG°tr of AnO2^z+ on the type of organic solution was similar to that of H^+ or Mg^2+. The ΔG°tr of An^3+ and An^4+ are also discussed briefly.
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and to map uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion of the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
Uniqueness of Gibbs Measure for Models with Uncountable Set of Spin Values on a Cayley Tree
International Nuclear Information System (INIS)
Eshkabilov, Yu. Kh.; Haydarov, F. H.; Rozikov, U. A.
2013-01-01
We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. It is known that the 'splitting Gibbs measures' of the model can be described by the solutions of a nonlinear integral equation. For arbitrary k ≥ 2 we find a sufficient condition under which the integral equation has a unique solution; hence, under this condition, the corresponding model has a unique splitting Gibbs measure.
Directory of Open Access Journals (Sweden)
Leandro Barbosa
2008-07-01
Full Text Available A total of 38,865 records of Large White pigs were used to estimate covariance components and genetic parameters for age at 100 kg live weight (DAYS) and backfat thickness adjusted to 100 kg live weight (BF) in two-trait analyses. Covariance components were obtained by Gibbs sampling using the MTGSAM program. The mixed model included the fixed effect of contemporary group and the following random effects: direct additive genetic, maternal additive genetic, common litter, and residual. Mean estimates of direct additive heritability were 0.33 for DAYS and 0.44 for BF, and mean estimates of the common litter effect were 0.09 and 0.02, respectively. The additive genetic correlation between the traits was close to zero (-0.015). The heritabilities obtained for these performance traits indicate that satisfactory genetic gains can be achieved in Large White breeding programmes for both traits, and that simultaneous selection for the two traits is feasible, since the direct additive genetic correlation between them is low.
Gibbs Free Energy of Formation for Selected Platinum Group Minerals (PGM
Directory of Open Access Journals (Sweden)
Spiros Olivotos
2016-01-01
Full Text Available Thermodynamic data for platinum group (Os, Ir, Ru, Rh, Pd and Pt) minerals are very limited. The present study is focused on the calculation of the Gibbs free energy of formation (ΔfG°) for selected PGM occurring in layered intrusions and ophiolite complexes worldwide, applying available experimental data on their constituent elements at their standard state (ΔfG° = G(species) − ΣG(elements)), using the HSC Chemistry software 6.0. The accuracy of the calculation method was evaluated by calculating the ΔfG° of rhodium sulfide phases; the calculated values were found to be in good agreement with those measured in the binary system (Rh + S) as a function of temperature by previous authors (Jacob and Gupta, 2014). The calculated Gibbs free energy (ΔfG°) followed the order RuS2 < (Ir,Os)S2 < (Pt,Pd)S < (Pd,Pt)Te2, increasing from compatible to incompatible noble metals and from sulfides to tellurides.
Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis
Energy Technology Data Exchange (ETDEWEB)
Vrbik, Jan [Department of Mathematics, Brock University, St. Catharines, Ontario L2S 3A1 (Canada); Ospadov, Egor; Rothstein, Stuart M., E-mail: srothstein@brocku.ca [Department of Physics, Brock University, St. Catharines, Ontario L2S 3A1 (Canada)
2016-07-14
Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x′; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.
Directory of Open Access Journals (Sweden)
Shin'ya Nakano
2014-05-01
Full Text Available A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique and is applicable to problems with nonlinear or non-Gaussian observations; however, the PF usually requires an unrealistically large sample size in order to achieve a good estimate, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted, according to the importance sampling principle, to approximate the posterior distribution. Since importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while a relatively large number of samples are used in the importance sampling in order to capture the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating random samples from the proposal distribution and of weighting each of the samples. The proposed algorithm is not necessarily effective when the ensemble lies far from the true state; however, monitoring the effective sample size and tuning the factor for covariance inflation can mitigate this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.
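A compact sketch of one analysis step under simplifying assumptions (full-rank state covariance, Gaussian forecast approximated from the ensemble): a Kalman-type update supplies the Gaussian proposal, and importance weights of the form prior x likelihood / proposal correct it toward the non-Gaussian posterior. This illustrates the principle only, not the authors' ETKF implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def etkf_is_update(forecast_ens, y, H, R, n_draw=2000, seed=0):
    """Gaussian (Kalman) analysis as the proposal; importance weights
    prior * likelihood / proposal correct it toward the true posterior."""
    rng = np.random.default_rng(seed)
    mf = forecast_ens.mean(axis=0)
    d = len(mf)
    Pf = np.cov(forecast_ens.T) + 1e-9 * np.eye(d)
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)
    ma = mf + K @ (y - H @ mf)                    # analysis mean
    Pa = (np.eye(d) - K @ H) @ Pf                 # analysis covariance
    Pa = 0.5 * (Pa + Pa.T) + 1e-9 * np.eye(d)     # symmetrize for sampling
    xs = rng.multivariate_normal(ma, Pa, n_draw)  # draws from the proposal
    log_w = (mvn.logpdf(xs, mean=mf, cov=Pf)                            # prior
             + mvn.logpdf(y - xs @ H.T, mean=np.zeros(len(y)), cov=R)   # likelihood
             - mvn.logpdf(xs, mean=ma, cov=Pa))                         # proposal
    w = np.exp(log_w - log_w.max())
    return xs, w / w.sum()
```

The effective sample size, 1 / sum(w**2), is the natural quantity to monitor before reducing the ensemble for the next forecast step.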
Uniqueness of Gibbs states and global Markov property for Euclidean fields
International Nuclear Information System (INIS)
Albeverio, S.; Høegh-Krohn, R.
1981-01-01
The authors briefly discuss the proof of the uniqueness of solutions of the DLR equations (uniqueness of Gibbs states) in the class of regular generalized random fields (in the sense of having second moments bounded by those of some Euclidean field), for the Euclidean fields with trigonometric interaction. (Auth.)
Tuck, Adrian F
2017-09-07
There is no widely agreed definition of entropy, and consequently Gibbs energy, in open systems far from equilibrium. One recent approach has sought to formulate an entropy and Gibbs energy based on observed scale invariances in geophysical variables, particularly in atmospheric quantities, including the molecules constituting stratospheric chemistry. The Hamiltonian flux dynamics of energy in macroscopic open nonequilibrium systems maps to energy in equilibrium statistical thermodynamics, and the corresponding equivalences of scale invariant variables with other relevant statistical mechanical variables, such as entropy, Gibbs energy, and 1/(k_B T), are not just formally analogous but are also mappings. Three proof-of-concept representative examples from available adequate stratospheric chemistry observations (temperature, wind speed and ozone) are calculated, with the aim of applying these mappings and equivalences. Potential applications of the approach to scale invariant observations from the literature, involving scales from molecular through laboratory to astronomical, are considered. Theoretical support for the approach from the literature is discussed.
Energy Technology Data Exchange (ETDEWEB)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-21
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of biomolecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the "Concurrent Adaptive Sampling (CAS) algorithm," has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and biomolecules, such as penta-alanine and triazine polymer.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing: rather than the conventional approaches that explicitly partition the spatial domain to generate samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all computations are based on the intrinsic metric and are independent of the embedding space; this intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.
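For orientation, here is the serial Euclidean baseline that such methods parallelize and generalize to surfaces: plain dart throwing in the unit square, with a background grid (cell size r/sqrt(2)) so each candidate is checked only against nearby accepted samples. The priority-based parallel scheme and the intrinsic surface metric of the paper are not reproduced here.

```python
import numpy as np

def poisson_disk(r, k_max=30000, seed=0):
    """Serial dart throwing in the unit square: accept a candidate only if it
    lies at least r from every accepted sample; a background grid with cell
    size r/sqrt(2) (at most one sample per cell) makes each check O(1)."""
    rng = np.random.default_rng(seed)
    cell = r / np.sqrt(2.0)
    g = int(np.ceil(1.0 / cell))
    grid = -np.ones((g, g), dtype=int)       # -1 marks an empty cell
    pts = []
    for _ in range(k_max):
        p = rng.random(2)
        i, j = int(p[0] / cell), int(p[1] / cell)
        ok = True
        for di in range(-2, 3):              # neighbours within 2 cells suffice
            for dj in range(-2, 3):
                ii, jj = i + di, j + dj
                if 0 <= ii < g and 0 <= jj < g and grid[ii, jj] >= 0:
                    if np.linalg.norm(p - pts[grid[ii, jj]]) < r:
                        ok = False
        if ok:
            grid[i, j] = len(pts)
            pts.append(p)
    return np.array(pts)
```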
Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality
Ayala, Mario; Carinci, Gioia; Redig, Frank
2018-06-01
We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.
Directory of Open Access Journals (Sweden)
Jorge Alberto Achcar
2003-12-01
Full Text Available In this paper we present Bayesian estimators of the prevalence of tuberculosis, obtained using computational methods to simulate samples from the posterior distribution of interest. In particular, we consider the Gibbs sampling algorithm to generate samples from the posterior distribution, from which we obtain, in a simple way, accurate inferences for the prevalence of tuberculosis. In an application, we analyze the results of chest X-ray examinations in the diagnosis of tuberculosis; with this application, we verify that the Bayesian estimators are simple to obtain and highly accurate. Computational methods for the simulation of samples, such as the Gibbs sampling algorithm, have recently become very popular for the Bayesian analysis of models in biostatistics. These simulation techniques are easily implemented, do not require much computational expertise, and can be programmed in any available software. Moreover, they can also be used to study the prevalence of other diseases.
Experimental Determination of Third Derivative of the Gibbs Free Energy, G II
DEFF Research Database (Denmark)
Koga, Yoshikata; Westh, Peter; Inaba, Akira
2010-01-01
We have been evaluating third derivative quantities of the Gibbs free energy, G, by graphically differentiating the second derivatives that are accessible experimentally, and demonstrated their power in elucidating the mixing schemes in aqueous solutions. Here we determine directly one of the third...
Inference with minimal Gibbs free energy in information field theory
International Nuclear Information System (INIS)
Ensslin, Torsten A.; Weig, Cornelius
2010-01-01
Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
Ebenhöh, Oliver; Spelberg, Stephanie
2018-02-19
The photosynthetic carbon reduction cycle, or Calvin-Benson-Bassham (CBB) cycle, is now contained in every standard biochemistry textbook. Although the cycle was already proposed in 1954, it is still the subject of intense research, and even the structure of the cycle, i.e. the exact series of reactions, is still under debate. The controversy about the cycle's structure was fuelled by the findings of Gibbs and Kandler in 1956 and 1957, when they observed that radioactive 14CO2 was dynamically incorporated in hexoses in a very atypical and asymmetrical way, a phenomenon later termed the 'photosynthetic Gibbs effect'. Now, it is widely accepted that the photosynthetic Gibbs effect is not in contradiction to the reaction scheme proposed by CBB, but the arguments given have been largely qualitative and hand-waving. To fully appreciate the controversy and to understand the difficulties in interpreting the Gibbs effect, it is illustrative to illuminate the history of the discovery of the CBB cycle. We here give an account of central scientific advances and discoveries, which were essential prerequisites for the elucidation of the cycle. Placing the historic discoveries in the context of the modern textbook pathway scheme illustrates the complexity of the cycle and demonstrates why especially dynamic labelling experiments are far from easy to interpret. We conclude by arguing that it requires sound theoretical approaches to resolve conflicting interpretations and to provide consistent quantitative explanations. © 2018 The Author(s).
Sampling algorithms for validation of supervised learning models for Ising-like systems
Portman, Nataliya; Tamblyn, Isaac
2017-12-01
In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm creating a Markov process across energy levels within the predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validation of a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
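The raw configuration generator underlying such studies is ordinary Metropolis sampling of the 2D Ising model, sketched below with periodic boundaries; the ID-MH and block-ID selection strategies built on top of it are not reproduced here.

```python
import numpy as np

def ising_metropolis(L=16, T=2.5, n_sweeps=500, seed=0):
    """Standard Metropolis sampling of the 2D Ising model on an L x L lattice
    with periodic boundaries; returns one spin configuration after n_sweeps."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):                   # one sweep = L*L flip attempts
            i, j = rng.integers(L, size=2)
            nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
               + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            dE = 2 * s[i, j] * nb                # energy change of the flip
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
    return s   # magnetization = s.mean(); energy from neighbour sums
```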
International Nuclear Information System (INIS)
Lin, Shu-Kun
1996-01-01
The Gibbs paradox statement of the entropy of mixing has been regarded as the theoretical foundation of statistical mechanics, quantum theory and biophysics. However, all the relevant chemical experimental observations and logical analyses indicate that the Gibbs paradox statement is false. I prove that this statement is wrong: the Gibbs paradox statement implies that entropy decreases with increasing symmetry (as represented by a symmetry number σ; see any statistical mechanics textbook). From group theory, any system has at least the symmetry number σ = 1, corresponding to the identity operation for a strictly asymmetric system. It follows that the entropy of a system would be equal to, or less than, zero. However, from either the von Neumann-Shannon entropy formula (S = -Σ_i w_i ln w_i) or the Boltzmann entropy formula (S = ln W) and the original definition, entropy is non-negative. Therefore, this statement is false. It should not be a surprise that, for the first time, many outstanding problems, such as the validity of Pauling's resonance theory, the explanation of second-order phase transition phenomena, the biophysical problem of protein folding and the related hydrophobic effect, etc., can be solved. Empirical principles such as the Pauli principle (and Hund's rule) and the HSAB principle can also be given a theoretical explanation
A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation
Li, Yue-e.; Wang, Qiang
2011-11-01
This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight direction search patterns are designed for various situations. A local sampling search strategy is employed for the search of large motion vectors. In order to further speed up the search, an early termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method achieves the most satisfactory PSNR values for all test sequences.
International Nuclear Information System (INIS)
Phadke, Sushil; Shrivastava, Bhakt Darshan; Ujle, S K; Mishra, Ashutosh; Dagaonkar, N
2014-01-01
One of the ways to assess the potential driving force behind a chemical reaction is through a quantity known as the Gibbs free energy (G) of the system, which reflects the balance between these driving forces. Ultrasonic velocity and absorption measurements in liquids and liquid mixtures find extensive application in studying the nature of intermolecular forces. Ultrasonic velocity measurements have been successfully employed to detect weak and strong molecular interactions present in binary and ternary liquid mixtures. After measuring the density and ultrasonic velocity of aqueous solutions of 'Borassus Flabellifier' (BF) and Adansonia digitata (AnD), we calculated the Gibbs energy and the intermolecular free length. The velocity of ultrasonic waves was measured using a multi-frequency ultrasonic interferometer with a high degree of accuracy (Model M-84, M/s Mittal Enterprises, New Delhi), operating at a fixed frequency of 2 MHz. Natural samples of 'Borassus Flabellifier' BF fruit pulp and Adansonia digitata AnD powder were collected from Dhar district, MP, India, for this study.
Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de
2017-11-05
Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected on line using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.
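The adaptive rule can be illustrated with a deliberately simple sketch that is only in the spirit of DDASA; the thresholds, window length, and doubling/halving policy are invented for illustration. The idea: sample faster when the recent signal (e.g., DO or turbidity) is changing quickly, slower when it is quiet, so battery energy is spent where the data carry information.

```python
import numpy as np

def next_interval(history, dt, dt_min=60, dt_max=3600, tol=0.05):
    """Hedged sketch of a data-driven sampling rule: shorten the interval
    when the monitored parameter changes fast, lengthen it when the signal
    is quiet, to save battery without losing sampling accuracy."""
    recent = np.asarray(history[-5:], dtype=float)   # short sliding window
    if len(recent) < 2:
        return dt
    rate = np.abs(np.diff(recent)).mean() / (np.abs(recent).mean() + 1e-9)
    if rate > tol:                    # fast variation -> sample twice as often
        return max(dt_min, dt / 2)
    return min(dt_max, dt * 2)        # quiet signal -> halve the sampling rate
```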
Askerov, Bahram M
2010-01-01
This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.
Directory of Open Access Journals (Sweden)
Huaiqing Zhang
2014-01-01
Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis under asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. First, the fundamental period is estimated by a zero-crossing technique with fourth-order Newton interpolation; then, the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters can be calculated by applying the FFT to the synchronized sampling data. Simulation results show that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, RBF is a global approximation method that can acquire more accurate results while taking about the same computation time as Newton's method.
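A hedged SciPy sketch of this pipeline: estimate the fundamental period from rising zero crossings (with linear refinement here, rather than the paper's fourth-order Newton interpolation), resample onto a synchronous grid with a Gaussian RBF interpolant, and read harmonic amplitudes from the FFT. It assumes the record contains at least `n_periods` full periods of the signal.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def quasi_sync_harmonics(t, x, n_periods=4, n_per_period=64):
    """Estimate the fundamental period from rising zero crossings, resample
    onto a synchronous grid with a Gaussian RBF interpolant, then take the
    FFT of the resynchronized sequence to read off harmonic amplitudes."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]           # rising crossings
    tz = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])
    T = np.mean(np.diff(tz))                                 # fundamental period
    rbf = RBFInterpolator(t[:, None], x, kernel='gaussian',
                          epsilon=1.0 / np.mean(np.diff(t)))
    ts = tz[0] + np.arange(n_periods * n_per_period) * (T / n_per_period)
    xs = rbf(ts[:, None])
    spec = 2.0 * np.abs(np.fft.rfft(xs)) / len(xs)
    return T, spec                # harmonic k sits at FFT bin k * n_periods
```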
Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms
International Nuclear Information System (INIS)
Shangguan Danhua; Bao Jingdong
2010-01-01
We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and to a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
Extensivity of entropy and modern form of Gibbs paradox
International Nuclear Information System (INIS)
Home, D.; Sengupta, S.
1981-01-01
The extensivity property of entropy is clarified in the light of a critical examination of the entropy formula based on quantum statistics and the relevant thermodynamic requirement. The modern form of the Gibbs paradox, related to the discontinuous jump in entropy due to identity or non-identity of particles, is critically investigated. Qualitative framework of a new resolution of this paradox, which analyses the general effect of distinction mark on the Hamiltonian of a system of identical particles, is outlined. (author)
Directory of Open Access Journals (Sweden)
Chaeyoung Lee
2012-11-01
Full Text Available Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size, in unbalanced designs with or without null combined-genotype cells. The mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. These empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
Excess Gibbs energy for six binary solid solutions of molecularly simple substances
Energy Technology Data Exchange (ETDEWEB)
Lobo, L J; Staveley, L A.K.
1985-01-01
In this paper we apply the method developed in a previous study of Ar + CH4 to the evaluation of the excess Gibbs energy G^{E,S} for solid solutions of two molecularly simple components. The method depends on combining information on the excess Gibbs energy G^{E,L} for the liquid mixture of the two components with a knowledge of the (T, x) solid-liquid phase diagram. Certain thermal properties of the pure substances are also needed. G^{E,S} has been calculated for binary mixtures of Ar + Kr, Kr + CH4, CO + N2, Kr + Xe, Ar + N2, and Ar + CO. In general, but not always, the solid mixtures are more non-ideal than the liquid mixtures of the same composition at the same temperature. Except for the Kr + CH4 system, the ratio r = G^{E,S}/G^{E,L} is larger the richer the solution in the component with the smaller molecules.
Gary, Ronald K.
2004-01-01
The concentration dependence of the ΔS term in the Gibbs free energy function is described in relation to its application to reversible reactions in biochemistry. An intuitive and non-mathematical argument for the concentration dependence of the ΔS term in the Gibbs free energy equation is derived, and the applicability of the equation to…
Directory of Open Access Journals (Sweden)
D. Ramyachitra
2015-09-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
Fast covariance estimation for innovations computed from a spatial Gibbs point process
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Rubak, Ege
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
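Of the techniques surveyed above, Gibbs sampling is the easiest to make concrete. The sketch below runs Gibbs sweeps over a small periodic Ising-type Markov random field rather than the paper's occlusion model; the lattice size, inverse temperature, and sweep count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
L, beta = 16, 0.6                      # lattice size and inverse temperature
s = rng.choice([-1, 1], size=(L, L))   # random initial spin configuration

def gibbs_sweep(s):
    # resample each spin from its full conditional given its 4 neighbours
    for i in range(L):
        for j in range(L):
            h = (s[(i+1) % L, j] + s[(i-1) % L, j] +
                 s[i, (j+1) % L] + s[i, (j-1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            s[i, j] = 1 if rng.random() < p_up else -1

for sweep in range(200):
    gibbs_sweep(s)
print("magnetisation per spin:", s.mean())
```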
Work and entropy production in generalised Gibbs ensembles
International Nuclear Information System (INIS)
Perarnau-Llobet, Martí; Riera, Arnau; Gallego, Rodrigo; Wilming, Henrik; Eisert, Jens
2016-01-01
Recent years have seen an enormously revived interest in the study of thermodynamic notions in the quantum regime. This applies both to the study of notions of work extraction in thermal machines in the quantum regime, as well as to questions of equilibration and thermalisation of interacting quantum many-body systems as such. In this work we bring together these two lines of research by studying work extraction in a closed system that undergoes a sequence of quenches and equilibration steps concomitant with free evolutions. In this way, we incorporate an important insight from the study of the dynamics of quantum many body systems: the evolution of closed systems is expected to be well described, for relevant observables and most times, by a suitable equilibrium state. We will consider three kinds of equilibration, namely to (i) the time averaged state, (ii) the Gibbs ensemble and (iii) the generalised Gibbs ensemble, reflecting further constants of motion in integrable models. For each effective description, we investigate notions of entropy production, the validity of the minimal work principle and properties of optimal work extraction protocols. While we keep the discussion general, much room is dedicated to the discussion of paradigmatic non-interacting fermionic quantum many-body systems, for which we identify significant differences with respect to the role of the minimal work principle. Our work not only has implications for experiments with cold atoms, but also can be viewed as suggesting a mindset for quantum thermodynamics where the role of the external heat baths is instead played by the system itself, with its internal degrees of freedom bringing coarse-grained observables to equilibrium. (paper)
Experimental Pragmatics and What Is Said: A Response to Gibbs and Moise.
Nicolle, Steve; Clark, Billy
1999-01-01
Attempted replication of Gibbs and Moise (1997) experiments regarding the recognition of a distinction between what is said and what is implicated. Results showed that, under certain conditions, subjects selected implicatures when asked to select the paraphrase best reflecting what a speaker has said. Suggests that results can be explained with the…
Classification and authentication of unknown water samples using machine learning algorithms.
Kundu, Palash K; Panchariya, P C; Kundu, Madhusree
2011-07-01
This paper proposes the development of real-life water sample classification and authentication based on machine learning algorithms. The proposed techniques use experimental measurements from a pulse voltammetry method based on an electronic tongue (E-tongue) instrumentation system with silver and platinum electrodes. E-tongues include arrays of solid-state ion sensors, transducers even of different types, data collectors and data analysis tools, all oriented to the classification of liquid samples and the authentication of unknown liquid samples. The time series signal and the corresponding raw data represent the measurement from a multi-sensor system. The E-tongue system, implemented in a laboratory environment for six different ISI (Bureau of Indian Standards) certified water samples (Aquafina, Bisleri, Kingfisher, Oasis, Dolphin, and McDowell), was the data source for developing two types of machine learning algorithms, namely classification and regression. A water data set consisting of six sample classes with 4402 features was considered. A PCA (principal component analysis) based classification and authentication tool was developed in this study as the machine learning component of the E-tongue system. A partial least squares (PLS) based classifier, dedicated to authenticating a specific category of water sample, evolved as an integral part of the E-tongue instrumentation system. The developed PCA and PLS based E-tongue system delivered encouraging overall authentication accuracy for the aforesaid categories of water samples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
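A PCA-based classification pipeline of the kind described can be assembled in a few lines with scikit-learn. The data below are synthetic stand-ins for the voltammetry features (six classes, as in the paper), and the logistic-regression classifier on PCA scores is an illustrative choice, not the authors' exact PCA/PLS tools.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# hypothetical stand-in for E-tongue voltammetry data: 6 classes, many features
rng = np.random.default_rng(2)
n_per_class, n_features = 60, 500
X = np.vstack([rng.normal(loc=c, scale=3.0, size=(n_per_class, n_features))
               for c in range(6)])
y = np.repeat(np.arange(6), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```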
Medical Image Retrieval Based On the Parallelization of the Cluster Sampling Algorithm
Ali, Hesham Arafat; Attiya, Salah; El-henawy, Ibrahim
2017-01-01
In this paper we develop parallel cluster sampling algorithms and show that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval among other applications.
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
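The RANSAC mechanics described here (fit on random subsets, keep the consensus inliers) can be tried directly with scikit-learn's RANSACRegressor. The one-descriptor linear toy data below, with 10% of the responses corrupted, are invented for illustration and are not a QSAR dataset.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

# toy data: one informative descriptor, a linear response, plus gross outliers
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.5, size=200)
y[:20] += rng.normal(15, 5, size=20)          # corrupt 10% of the samples

ransac = RANSACRegressor(residual_threshold=2.0, random_state=0).fit(X, y)
print("inliers kept:", ransac.inlier_mask_.sum(), "of", len(y))
print("slope, intercept:", ransac.estimator_.coef_[0],
      ransac.estimator_.intercept_)
```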
MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE
Institute of Scientific and Technical Information of China (English)
Dong Heping; Ma Fuming; Zhang Deyue
2012-01-01
In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to a calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
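The subspace idea behind MUSIC, that the steering vectors of the true sources are orthogonal to the noise subspace of the data covariance, can be illustrated without the scattering-specific Green function. The sketch below is the classic far-field direction-of-arrival variant for an assumed uniform linear array; the array geometry, source angles and noise level are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
M, d = 8, 0.5                              # sensors, spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0])      # two point sources

def steering(theta):
    # M x len(theta) array responses of a uniform linear array
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)
S = rng.normal(size=(2, 400)) + 1j * rng.normal(size=(2, 400))
noise = 0.1 * (rng.normal(size=(M, 400)) + 1j * rng.normal(size=(M, 400)))
X = A @ S + noise

R = X @ X.conj().T / X.shape[1]            # sample covariance
eigvecs = np.linalg.eigh(R)[1]
En = eigvecs[:, :M - 2]                    # noise subspace (smallest M-2)

grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
locmax = np.where((pseudo[1:-1] > pseudo[:-2]) &
                  (pseudo[1:-1] > pseudo[2:]))[0] + 1
best = locmax[np.argsort(pseudo[locmax])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))
```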
Uniqueness of Gibbs measure for Potts model with countable set of spin values
International Nuclear Information System (INIS)
Ganikhodjaev, N.N.; Rozikov, U.A.
2004-11-01
We consider a nearest-neighbor Potts model with countable spin values 0, 1, ..., and nonzero external field, on a Cayley tree of order k (with k+1 neighbors). We study translation-invariant 'splitting' Gibbs measures. We reduce the problem to the description of the solutions of some infinite system of equations. For any k ≥ 1 and any fixed probability measure ν with ν(i) > 0 on the set of all non-negative integers Φ = {0, 1, ...}, we show that the set of translation-invariant splitting Gibbs measures contains at most one point, independently of the parameters of the Potts model with countable set of spin values on the Cayley tree. We also give a full description of the class of measures ν on Φ such that with respect to each element of this class our infinite system of equations has a unique solution {a_i, i = 1, 2, ...}, where each a_i ∈ (0, 1). (author)
Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles
Du, Shouhong
2012-01-01
This thesis presents Monte Carlo methods for simulations of phase behaviors of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of the pure component. The reduced simulation results are verified by comparison with the equation of state by Johnson et al., and results with L-J parameters of methane agree well with the experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and number of Monte Carlo cycles, error propagation and Random Number Generator performance are also investigated. We review the Gibbs-NPT ensemble employed for phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to the experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis work lies in the study of the error analysis with respect to the Monte Carlo cycles and number of particles in some interesting aspects.
Molar Surface Gibbs Energy of the Aqueous Solution of Ionic Liquid [C4mim][OAc]
Institute of Scientific and Technical Information of China (English)
TONG Jing; ZHENG Xu; TONG Jian; QU Ye; LIU Lu; LI Hui
2017-01-01
The values of density and surface tension for aqueous solutions of the ionic liquid (IL) 1-butyl-3-methylimidazolium acetate ([C4mim][OAc]) with various molalities were measured in the range of 288.15-318.15 K at intervals of 5 K. On the basis of thermodynamics, a semi-empirical molar surface Gibbs energy model of the ionic liquid solution, which can be used to predict the surface tension or molar volume of solutions, was put forward. The predicted values of the surface tension for aqueous [C4mim][OAc] and the corresponding experimental ones were highly correlated and extremely similar. In terms of the concept of the molar Gibbs energy, a new Eötvös equation was obtained, and each parameter of the new equation has a clear physical meaning.
Elsheikh, A. H.
2013-12-01
Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
The Wang-Landau Sampling Algorithm
Landau, David P.
2003-03-01
Over the past several decades Monte Carlo simulations[1] have evolved into a powerful tool for the study of wide-ranging problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, usually in the canonical ensemble, and enormous improvements have been made in performance through the implementation of novel algorithms. Nonetheless, difficulties arise near phase transitions, either due to critical slowing down near 2nd order transitions or to metastability near 1st order transitions, thus limiting the applicability of the method. We shall describe a new and different Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is estimated, all thermodynamic properties can be calculated at all temperatures. This approach can be extended to multi-dimensional parameter spaces and has already found use in classical models of interacting particles including systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc., as well as for quantum models. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
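A compact version of this random walk in energy space can be written for a small 2D Ising lattice. The sketch below follows the standard Wang-Landau recipe: accept moves with probability min(1, g(E)/g(E')), add ln f to the visited level, and halve ln f when the histogram is roughly flat. The lattice size, flatness criterion and stopping threshold are coarse choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 8
N = L * L
s = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

E = energy(s)
levels = np.arange(-2 * N, 2 * N + 1, 4)     # possible energy levels
lng = {e: 0.0 for e in levels}               # running estimate of ln g(E)
hist = {e: 0 for e in levels}
lnf = 1.0

while lnf > 1e-3:
    for _ in range(10000):
        i, j = rng.integers(L, size=2)
        dE = 2 * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j] +
                            s[i, (j+1) % L] + s[i, (j-1) % L])
        if rng.random() < np.exp(lng[E] - lng[E + dE]):
            s[i, j] *= -1
            E += dE
        lng[E] += lnf                        # refine ln g at the current level
        hist[E] += 1
    h = np.array([hist[e] for e in levels if hist[e] > 0])
    if h.min() > 0.8 * h.mean():             # histogram flat enough: refine f
        hist = {e: 0 for e in levels}
        lnf /= 2.0

print("ln g(ground state) - ln g(E = 0):", lng[-2 * N] - lng[0])
```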
Feng, Dong-xia; Nguyen, Anh V
2016-03-01
Floating objects at air-water interfaces are central to a number of everyday activities, from walking on water by insects to flotation separation of valuable minerals using air bubbles. The available theories show that a fine sphere can float if the forces of surface tension and buoyancy can support the sphere at the interface with an apical angle subtended by the circle of contact being larger than the contact angle. Here we show that the pinning of the contact line at a sharp edge, known as the Gibbs inequality condition, also plays a significant role in controlling the stability and detachment of floating spheres. Specifically, we truncated the spheres with different angles and used a force sensor device to measure the force required to push the truncated spheres from the interface into water. We also developed a theoretical model to calculate the pushing force, which in combination with experimental results shows different effects of the Gibbs inequality condition on the stability and detachment of the spheres from the water surface. For small angles of truncation, the Gibbs inequality condition does not affect the sphere detachment, and hence the classical theories on the floatability of spheres are valid. For large truncated angles, the Gibbs inequality condition determines the tenacity of the particle-meniscus contact and the stability and detachment of floating spheres. In this case, the classical theories on the floatability of spheres are no longer valid. A critical truncation angle for the transition from the classical to the Gibbs inequality regimes of detachment was also established. The outcomes of this research advance our understanding of the behavior of floating objects, in particular the flotation separation of valuable minerals, which often contain various sharp edges on their crystal faces.
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
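The on-the-fly interface generation that defines Chord Length Sampling is easy to show in one dimension. The sketch below estimates transmission through a purely absorbing binary Markov slab by racing an exponential distance-to-collision against an exponential distance-to-interface; the cross sections, mean chord lengths and slab thickness are invented toy values.

```python
import numpy as np

rng = np.random.default_rng(6)
lam = (1.0, 3.0)          # mean chord lengths of materials 0 and 1
sig = (2.0, 0.1)          # total cross sections (purely absorbing medium)
thickness, n_hist = 5.0, 100000

transmitted = 0
for _ in range(n_hist):
    x = 0.0
    # start material sampled according to the volume fractions lam_i / sum(lam)
    m = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
    while True:
        d_coll = rng.exponential(1.0 / sig[m])   # distance to next collision
        d_int = rng.exponential(lam[m])          # distance to next interface
        if x + min(d_coll, d_int) >= thickness:
            transmitted += 1                     # escaped through the far face
            break
        if d_coll < d_int:
            break                                # absorbed inside the slab
        x += d_int
        m = 1 - m                                # enter the other material

print("CLS transmission estimate:", transmitted / n_hist)
```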
A fast direct sampling algorithm for equilateral closed polygons
International Nuclear Information System (INIS)
Cantarella, Jason; Duplantier, Bertrand; Shonkwiler, Clayton; Uehara, Erica
2016-01-01
Sampling equilateral closed polygons is of interest in the statistical study of ring polymers. Over the past 30 years, previous authors have proposed a variety of simple Markov chain algorithms (but have not been able to show that they converge to the correct probability distribution) and complicated direct samplers (which require extended-precision arithmetic to evaluate numerically unstable polynomials). We present a simple direct sampler which is fast and numerically stable, and analyze its runtime using a new formula for the volume of equilateral polygon space as a Dirichlet-type integral. (paper)
The thermodynamic properties of the upper continental crust: Exergy, Gibbs free energy and enthalpy
International Nuclear Information System (INIS)
Valero, Alicia; Valero, Antonio; Vieillard, Philippe
2012-01-01
This paper shows a comprehensive database of the thermodynamic properties of the most abundant minerals of the upper continental crust. For those substances whose thermodynamic properties are not listed in the literature, their enthalpy and Gibbs free energy are calculated with 11 different estimation methods described in this study, with associated errors of up to 10% with respect to values published in the literature. Thanks to this procedure we have been able to make a first estimation of the enthalpy, Gibbs free energy and exergy of the bulk upper continental crust and of each of the nearly 300 most abundant minerals contained in it. Finally, the chemical exergy of the continental crust is compared to the exergy of the concentrated mineral resources. The numbers obtained indicate the huge chemical exergy wealth of the crust: 6 × 10^6 Gtoe. However, this study shows that only approximately 0.01% of that amount can be effectively used by man.
Algorithmic randomness and physical entropy
International Nuclear Information System (INIS)
Zurek, W.H.
1989-01-01
Algorithmic randomness provides a rigorous, entropy-like measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H = ln W, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) the algorithmic information content (algorithmic randomness) present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and can therefore ''decide'', on the basis of the results of their measurements and computations, the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite.
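The claim that algorithmic randomness is "relatively easy to estimate" can be made concrete with a general-purpose compressor, whose output length upper-bounds the information content. A minimal illustration:

```python
import random
import zlib

random.seed(0)
ordered = bytes([0, 1] * 5000)                     # highly regular "microstate"
disordered = bytes(random.getrandbits(8) for _ in range(10000))

# compressed size upper-bounds the algorithmic information content:
# the regular state compresses drastically, the random one barely at all
for name, state in (("ordered", ordered), ("disordered", disordered)):
    k = len(zlib.compress(state, 9))
    print(f"{name}: {len(state)} bytes -> {k} bytes compressed")
```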
The Gibbs Energy Basis and Construction of Boiling Point Diagrams in Binary Systems
Smith, Norman O.
2004-01-01
An illustration of how excess Gibbs energies of the components in binary systems can be used to construct boiling point diagrams is given. The underlying causes of the various types of behavior of the systems in terms of intermolecular forces and the method of calculating the coexisting liquid and vapor compositions in boiling point diagrams with…
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...
Black, Clanton C
2008-01-01
The very personal touch of Professor Martin Gibbs as a worldwide advocate for photosynthesis and plant physiology was lost with his death in July 2006. Widely known for his engaging humorous personality and his humanitarian lifestyle, Martin Gibbs excelled as a strong international science diplomat, like a personal science family patriarch encouraging science and plant scientists around the world. Immediately after World War II he was a pioneer at the Brookhaven National Laboratory in the use of ^14C to elucidate carbon flow in metabolism and particularly carbon pathways in photosynthesis. His leadership on carbon metabolism and photosynthesis extended over four decades of working in collaboration with a host of students and colleagues. In 1962, he was selected as the Editor-in-Chief of Plant Physiology. That appointment initiated three decades of strong directional influence by Gibbs on plant research and photosynthesis. Plant Physiology became and remains a premier source of new knowledge about the vital and primary roles of plants in Earth's environmental history and the energetics of our green-blue planet. His leadership and charismatic humanitarian character became the quintessence of excellence worldwide. Martin Gibbs was in every sense the personification of a model mentor, not only for scientists but also in his devotion to family. Here we pay tribute and honor to an exemplary humanistic mentor, Martin Gibbs.
A neural algorithm for the non-uniform and adaptive sampling of biomedical data.
Mesin, Luca
2016-04-01
Body sensors are finding increasing applications in self-monitoring for health-care and in the remote surveillance of sensitive people. The physiological data to be sampled can be non-stationary, with bursts of high amplitude and frequency content providing most of the information. Such data could be sampled efficiently with a non-uniform schedule that increases the sampling rate only during activity bursts. A real-time, adaptive algorithm is proposed to select the sampling rate, in order to reduce the number of measured samples while still recording the main information. The algorithm is based on a neural network which predicts the subsequent samples and their uncertainties, requiring a measurement only when the risk of the prediction is larger than a selectable threshold. Four examples of application to biomedical data are discussed: electromyogram, electrocardiogram, electroencephalogram, and body acceleration. Sampling rates are reduced below the Nyquist limit, still preserving an accurate representation of the data and of their power spectral densities (PSD). For example, sampling at 60% of the Nyquist frequency, the percentage average rectified errors in estimating the signals are on the order of 10% and the PSD is fairly represented, up to the highest frequencies. The method outperforms both uniform sampling and compressive sensing applied to the same data. The discussed method allows going beyond the Nyquist limit, still preserving the information content of non-stationary biomedical signals. It could find applications in body sensor networks to lower the number of wireless communications (saving sensor power) and to reduce the occupation of memory. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Sahu, Manjulata; Dash, Smruti
2011-01-01
The standard molar Gibbs energies of formation of Nd6UO12(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as solid electrolyte. Δ_f G_m°(T) for Nd6UO12(s) has been calculated using the measured and required thermodynamic data from the literature. The calculated Gibbs energy expression can be given as: Δ_f G_m°(Nd6UO12, s, T)/(kJ·mol⁻¹) (± 2.3) = −6660.1 + 1.0898 (T/K). (author)
International Nuclear Information System (INIS)
Del' Pino, Kh.; Chukurov, P.M.; Drakin, S.I.
1980-01-01
The results of experimental determination of the formation enthalpies of the anhydrous nitrates of lanthanum, cerium, praseodymium, neodymium and samarium are analyzed. Using the method of comparative calculation, the enthalpies of formation of the anhydrous lanthanide and yttrium nitrates are computed. The calculated values of the enthalpies and Gibbs energies of formation of the anhydrous lanthanide nitrates are tabulated.
DSMC multicomponent aerosol dynamics: Sampling algorithms and aerosol processes
Palaniswaamy, Geethpriya
The post-accident nuclear reactor primary and containment environments can be characterized by high temperatures and pressures, and fission products and nuclear aerosols. These aerosols evolve via natural transport processes as well as under the influence of engineered safety features. These aerosols can be hazardous and may pose risk to the public if released into the environment. Computations of their evolution, movement and distribution involve the study of various processes such as coagulation, deposition, condensation, etc., and are influenced by factors such as particle shape, charge, radioactivity and spatial inhomogeneity. These many factors make the numerical study of nuclear aerosol evolution computationally very complicated. The focus of this research is on the use of the Direct Simulation Monte Carlo (DSMC) technique to elucidate the role of various phenomena that influence the nuclear aerosol evolution. In this research, several aerosol processes such as coagulation, deposition, condensation, and source reinforcement are explored for a multi-component, aerosol dynamics problem in a spatially homogeneous medium. Among the various sampling algorithms explored the Metropolis sampling algorithm was found to be effective and fast. Several test problems and test cases are simulated using the DSMC technique. The DSMC results obtained are verified against the analytical and sectional results for appropriate test problems. Results show that the assumption of a single mean density is not appropriate due to the complicated effect of component densities on the aerosol processes. The methods developed and the insights gained will also be helpful in future research on the challenges associated with the description of fission product and aerosol releases.
Masuda, Yosuke; Yamaotsu, Noriyuki; Hirono, Shuichi
2017-01-01
In order to predict the potencies of mechanism-based reversible covalent inhibitors, the relationships between the calculated Gibbs free energy of the hydrolytic water molecule in acyl-trypsin intermediates and the experimentally measured catalytic rate constants (k_cat) were investigated. After obtaining representative solution structures by molecular dynamics (MD) simulations, hydration thermodynamics analyses using WaterMap™ were conducted. Consequently, we found for the first time that when the Gibbs free energy of the hydrolytic water molecule was lower, the logarithm of k_cat was also lower. A hydrolytic water molecule with favorable Gibbs free energy may hydrolyze the acylated serine slowly. The Gibbs free energy of the hydrolytic water molecule might be a useful descriptor for computer-aided discovery of mechanism-based reversible covalent inhibitors of hydrolytic enzymes.
Excess Gibbs Energy for Ternary Lattice Solutions of Nonrandom Mixing
Energy Technology Data Exchange (ETDEWEB)
Jung, Hae Young [DukSung Womens University, Seoul (Korea, Republic of)
2008-12-15
It is assumed for a three-component lattice solution that the number of ways of arranging particles randomly on the lattice follows a normal distribution of a linear combination of N_12, N_23 and N_13, which are the numbers of nearest-neighbor interactions between different molecules. It is shown by random number simulations that this assumption is reasonable. From this distribution, an approximate equation for the excess Gibbs energy of a three-component lattice solution is derived. Using this equation, several liquid-vapor equilibria are calculated and compared with the results from other equations.
International Nuclear Information System (INIS)
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-01-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
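The clustering-based voxel selection can be sketched with k-means over influence-matrix rows. Everything below (the matrix size, the gamma-distributed influence values, the cluster count) is a hypothetical stand-in for real treatment-planning data.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical influence matrix: rows are interior voxels, columns are beamlets
rng = np.random.default_rng(7)
n_voxels, n_beamlets = 5000, 80
influence = rng.gamma(2.0, 1.0, size=(n_voxels, n_beamlets))

sampling_rate = 0.10          # keep roughly 10% of the interior voxels
n_clusters = 200
labels = KMeans(n_clusters=n_clusters, n_init=4,
                random_state=0).fit_predict(influence)

keep = []
per_cluster = max(1, int(sampling_rate * n_voxels / n_clusters))
for c in range(n_clusters):
    members = np.where(labels == c)[0]
    # draw a few voxels from each cluster of similar influence signatures
    keep.extend(rng.choice(members, size=min(per_cluster, len(members)),
                           replace=False))
print("sampled", len(keep), "of", n_voxels, "interior voxels")
```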
Psychoanalytic Interpretation of Blueberries by Susan Gibb
Directory of Open Access Journals (Sweden)
Maya Zalbidea Paniagua
2014-06-01
Blueberries (2009) by Susan Gibb, published in the ELO (Electronic Literature Organization), invites the reader to travel inside the protagonist's mind to discover real and imaginary experiences examining notions of gender, sex, body and identity of a traumatised woman. This article explores the verbal and visual modes in this digital short fiction following semiotic patterns as well as interpreting the psychological states that are expressed through poetical and technological components. A comparative study of the consequences of trauma in the protagonist will be developed, including psychoanalytic theories by Sigmund Freud, Jacques Lacan and the feminist psychoanalysts Melanie Klein and Bracha Ettinger. The reactions of the protagonist will be studied: loss of reality, hallucinations and the Electra complex, as well as the rise of defence mechanisms and her use of artistic creativity as a healing therapy. The interactivity of the hypermedia, multiple paths and endings will be analyzed as a literary strategy that increases the reader's capacity to empathize with the speaker.
Czech Academy of Sciences Publication Activity Database
Moučka, F.; Nezbeda, Ivo
2013-01-01
Roč. 360, DEC 25 (2013), s. 472-476 ISSN 0378-3812 Grant - others:GA MŠMT(CZ) LH12019 Institutional support: RVO:67985858 Keywords : multi-particle move monte carlo * Gibbs ensemble * vapor-liquid-equilibria Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.241, year: 2013
Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background
Jewell, Jeffrey B.; Eriksen, H. K.; ODwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.
2006-01-01
A viewgraph presentation reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise is given.
Hanson, Robert M.; Riley, Patrick; Schwinefus, Jeff; Fischer, Paul J.
2008-01-01
The use of qualitative graphs of Gibbs energy versus temperature is described in the context of chemical demonstrations involving phase changes and colligative properties at the general chemistry level. (Contains 5 figures and 1 note.)
Algunas Precisiones en torno a las funciones termodinámicas energía libre de Gibbs
Solaz Portolés, Joan Josep; Quílez Pardo, Juan
2001-01-01
The aim of this study is to elucidate some didactic misunderstandings related to the use and applicability of the delta functions ∆G, ∆rG and ∆rG°, which derive from the thermodynamic potential Gibbs free energy, G.
Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox
Oleg Borodiouk; Vasili Tatarin
1999-01-01
We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, infinite number of cycles and infinite work, so as to provide an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.
Markov chain sampling of the O(n) loop models on the infinite plane
Herdeiro, Victor
2017-07-01
A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite-plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that, provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure, the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.
Priede, Imants G.; Billett, David S. M.; Brierley, Andrew S.; Hoelzel, A. Rus; Inall, Mark; Miller, Peter I.; Cousins, Nicola J.; Shields, Mark A.; Fujii, Toyonobu
2013-12-01
The ECOMAR project investigated photosynthetically-supported life on the North Mid-Atlantic Ridge (MAR) between the Azores and Iceland focussing on the Charlie-Gibbs Fracture Zone area in the vicinity of the sub-polar front where the North Atlantic Current crosses the MAR. Repeat visits were made to four stations at 2500 m depth on the flanks of the MAR in the years 2007-2010; a pair of northern stations at 54°N in cold water north of the sub-polar front and southern stations at 49°N in warmer water influenced by eddies from the North Atlantic Current. At each station an instrumented mooring was deployed with current meters and sediment traps (100 and 1000 m above the sea floor) to sample downward flux of particulate matter. The patterns of water flow, fronts, primary production and export flux in the region were studied by a combination of remote sensing and in situ measurements. Sonar, tow nets and profilers sampled pelagic fauna over the MAR. Swath bathymetry surveys across the ridge revealed sediment-covered flat terraces parallel to the axis of the MAR with intervening steep rocky slopes. Otter trawls, megacores, baited traps and a suite of tools carried by the R.O.V. Isis including push cores, grabs and a suction device collected benthic fauna. Video and photo surveys were also conducted using the SHRIMP towed vehicle and the R.O.V. Isis. Additional surveying and sampling by landers and R.O.V. focussed on the summit of a seamount (48°44′N, 28°10′W) on the western crest of the MAR between the two southern stations.
Zoeal morphology of Pachygrapsus transversus (Gibbes) (Decapoda, Grapsidae) reared in the laboratory
Directory of Open Access Journals (Sweden)
Ana Luiza Brossi-Garcia
1997-12-01
Ovigerous females of Pachygrapsus transversus (Gibbes, 1850) were collected on the Praia Dura and Saco da Ribeira beaches, Ubatuba, São Paulo, Brazil. Larvae were individually reared in a climatic room at a temperature of 25 °C, salinities of 28, 32 and 35‰, and under natural photoperiod conditions. The best rearing results were observed at 35‰ salinity. Seven zoeal instars were observed, drawn and described in detail. The data are compared with those obtained for P. gracilis (Saussure, 1858).
Solid oxide galvanic cell for determination of Gibbs energy of formation of Tb6UO12(s)
International Nuclear Information System (INIS)
Sahu, Manjulata; Dash, Smruti
2013-01-01
The citrate-nitrate combustion method was used to synthesise Tb6UO12(s). The Gibbs energy of formation of Tb6UO12(s) was measured using a solid oxide galvanic cell in the temperature range 957-1175 K. (author)
International Nuclear Information System (INIS)
Li, H.Q.; Yang, Y.S.; Tong, W.H.; Wang, Z.Y.
2007-01-01
With the effects of electronic structure and atomic size being introduced, the mixing enthalpy as well as the Gibbs energy of the ternary Zr-Al-Cu, Ni-Al-Cu, Zr-Ni-Al and quaternary Zr-Al-Ni-Cu systems are calculated based on a quasiregular solution model. The computed results agree well with the experimental data. The sequence of Gibbs energies of the different systems is: G_{Zr-Al-Ni-Cu} < G_{Zr-Al-Ni} < G_{Zr-Al-Cu} < G_{Cu-Al-Ni}. For Zr-Al-Cu, Ni-Al-Cu and Zr-Ni-Al, the lowest Gibbs energy is located in the composition ranges of X_Zr = 0.39-0.61, X_Al = 0.38-0.61; X_Ni = 0.39-0.61, X_Al = 0.38-0.60; and X_Zr = 0.32-0.67, X_Al = 0.32-0.66, respectively. For the Zr-Ni-Al-Cu system with 66.67% Zr, the lowest Gibbs energy is obtained in the region of X_Al = 0.63-0.80, X_Ni = 0.14-0.24.
Directory of Open Access Journals (Sweden)
Qiuhong Sun
2014-04-01
Based on data mining research, a data mining method based on the genetic algorithm is presented: the genetic algorithm is briefly introduced, and two important theoretical underpinnings of the genetic algorithm, the schema theorem and the implicit parallelism principle, are also discussed. Focusing on the application of genetic algorithms to association rule mining, this paper proposes improvements to the fitness function structure and the data encoding and, in particular, building on a study of the earlier issues, applies an improved adaptive crossover probability (Pc) and mutation probability (Pm) scheme to the genetic algorithm, thereby improving the efficiency of the algorithm. Finally, a genetic-algorithm-based association rule mining algorithm is applied to data mining in a seawater sample database, demonstrating its effectiveness.
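The adaptive Pc/Pm idea, lowering the crossover and mutation rates for above-average individuals in the style of Srinivas and Patnaik's adaptive GA, can be sketched on a toy objective. The OneMax fitness and all rate constants below are illustrative assumptions, not the paper's mining setup.

```python
import numpy as np

rng = np.random.default_rng(8)

def fitness(pop):                  # toy objective: count of ones (OneMax)
    return pop.sum(axis=1).astype(float)

n, length, gens = 60, 40, 100
pop = rng.integers(0, 2, size=(n, length))

for g in range(gens):
    f = fitness(pop)
    f_max, f_avg = f.max(), f.mean()
    parents = pop[rng.choice(n, size=n, p=f / f.sum())]   # roulette selection
    children = parents.copy()
    for i in range(0, n - 1, 2):
        fi = fitness(parents[i:i+2]).max()
        # adaptive Pc/Pm: full rates below average, scaled down above average
        pc = 0.9 if fi <= f_avg else 0.9 * (f_max - fi) / (f_max - f_avg + 1e-9)
        pm = 0.05 if fi <= f_avg else 0.05 * (f_max - fi) / (f_max - f_avg + 1e-9)
        if rng.random() < pc:                              # one-point crossover
            cut = rng.integers(1, length)
            children[i, cut:], children[i+1, cut:] = (parents[i+1, cut:].copy(),
                                                      parents[i, cut:].copy())
        mask = rng.random(children[i:i+2].shape) < pm      # bit-flip mutation
        children[i:i+2][mask] ^= 1
    pop = children

print("best fitness:", fitness(pop).max(), "of", length)
```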
International Nuclear Information System (INIS)
Zhang, Leihong; Liang, Dong
2016-01-01
In order to address the problem that reconstruction efficiency and precision are not high, in this paper different samples are selected to reconstruct spectral reflectance, and a new kind of spectral reflectance reconstruction method based on the algorithm of compressive sensing is provided. Four matte color cards with different numbers of colors, namely the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone spot color card printed on copperplate (coated) paper, and the Munsell colors card, are chosen as training samples; the spectral reflectance is reconstructed by the compressive sensing, pseudo-inverse and Wiener algorithms, and the results are compared. These methods of spectral reconstruction are evaluated by root mean square error and color difference accuracy. The experiments show that the cumulative contribution rate and color difference of the Munsell colors card are better than those of the other three color cards under the same reconstruction conditions, and that the accuracy of the spectral reconstruction is affected by the choice of training sample. A key point is that the uniformity and representativeness of the training sample selection have important significance for reconstruction. In this paper, the influence of the sample selection on the spectral image reconstruction is studied. The precision of the spectral reconstruction based on the algorithm of compressive sensing is higher than that of the traditional algorithms of spectral reconstruction. From the MATLAB simulation results, it can be seen that the spectral reconstruction precision and efficiency are affected by the different numbers of colors in the training sample. (paper)
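A minimal compressive-sensing reconstruction can be demonstrated with orthogonal matching pursuit, one common recovery algorithm (the abstract does not specify the solver used). The sparse coefficient vector and the Gaussian sensing matrix below are synthetic.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(9)
n, m, k = 128, 40, 5                 # signal length, measurements, sparsity

x = np.zeros(n)                      # a k-sparse coefficient vector
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

Phi = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))   # random sensing matrix
y = Phi @ x                                          # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
x_hat = omp.coef_
print("reconstruction RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))
```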
Algorithmic randomness, physical entropy, measurements, and the second law
International Nuclear Information System (INIS)
Zurek, W.H.
1989-01-01
Algorithmic information content is equal to the size, in the number of bits, of the shortest program for a universal Turing machine which can reproduce a state of a physical system. In contrast to the statistical Boltzmann-Gibbs-Shannon entropy, which measures ignorance, the algorithmic information content is a measure of the available information. It is defined without recourse to probabilities and can be regarded as a measure of randomness of a definite microstate. I suggest that the physical entropy S, that is, the quantity which determines the amount of work ΔW which can be extracted in the cyclic isothermal expansion process through the equation ΔW = k_B TΔS, is a sum of two contributions: the missing information measured by the usual statistical entropy and the known randomness measured by the algorithmic information content. The sum of these two contributions is a ''constant of motion'' in the process of a dissipationless measurement on an equilibrium ensemble. This conservation under a measurement, which can be traced back to the noiseless coding theorem of Shannon, is necessary to rule out the existence of a successful Maxwell's demon. 17 refs., 3 figs
Directory of Open Access Journals (Sweden)
Sovilj P.
2014-10-01
Measurement methods based on the approach named Digital Stochastic Measurement have been introduced, and several prototype and small-series commercial instruments have been developed based on these methods. These methods have been investigated mostly for various types of stationary signals, but also for non-stationary signals. This paper presents, analyzes and discusses digital stochastic measurement of the electroencephalography (EEG) signal in the time domain, emphasizing the influence of the Wilbraham-Gibbs phenomenon. An increase of measurement error, related to the Wilbraham-Gibbs phenomenon, is found. If the EEG signal is measured with a 20 ms wide measurement interval, the average maximal error relative to the range of the input signal is 16.84%. If the measurement interval is extended to 2 s, the average maximal error relative to the range of the input signal is significantly lowered, down to 1.37%. Absolute errors are compared with the error limit recommended by the Organisation Internationale de Métrologie Légale (OIML) and with the quantization steps of advanced EEG instruments with 24-bit A/D conversion.
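The Wilbraham-Gibbs overshoot that drives this error is easy to reproduce: partial Fourier sums of a square wave overshoot the jump by roughly 9% of the jump size no matter how many harmonics are kept. A short demonstration:

```python
import numpy as np

t = np.linspace(0.0, 0.25, 20001)      # look near the jump of the square wave
for n_harm in (9, 99, 999):
    partial = np.zeros_like(t)
    for k in range(1, n_harm + 1, 2):  # odd harmonics of a unit square wave
        partial += (4.0 / np.pi) * np.sin(2.0 * np.pi * k * t) / k
    # the overshoot tends to ~1.179, i.e. about 9% of the jump of size 2
    print(f"{n_harm:4d} harmonics: peak = {partial.max():.4f} (ideal level = 1)")
```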
International Nuclear Information System (INIS)
Lima da Silva, Aline; De Fraga Malfatti, Celia; Heck, Nestor Cesar
2003-01-01
The use of fuel cells is a promising technology in the conversion of chemical to electrical energy. Due to environmental concerns related to the reduction of atmospheric pollution and greenhouse gas emissions such as CO2, NOx and hydrocarbons, there has been much research on fuel cells using hydrogen as fuel. Hydrogen gas can be produced by several routes; a promising one is the steam reforming of ethanol. This route may become an important industrial process, especially for sugarcane-producing countries. Ethanol is renewable energy and presents several advantages over other sources related to natural availability, storage and handling safety. In order to contribute to the understanding of the steam reforming of ethanol inside the reformer, this work displays a detailed thermodynamic analysis of the ethanol/water system, in the temperature range of 500-1200 K, considering different H2O/ethanol reforming ratios. The equilibrium determinations were done with the help of the Gibbs energy minimization method using the Generalized Reduced Gradient (GRG) algorithm. Based on literature data, the species considered in the calculations were: H2, H2O, CO, CO2, CH4, C2H4, CH3CHO, C2H5OH (gas phase) and C_gr (graphite phase). The thermodynamic conditions for carbon deposition (probably soot) on the catalyst during gas reforming were analyzed, in order to establish temperature ranges and H2O/ethanol ratios where carbon precipitation is not thermodynamically feasible. Experimental results from the literature show that carbon deposition causes catalyst deactivation during reforming. This deactivation is due to encapsulating carbon that covers active phases on the catalyst substrate, e.g. Ni over Al2O3. In the present study, a mathematical relationship between the Lagrange multipliers and the carbon activity (with reference to the graphite phase) was deduced, unveiling the carbon activity in the reformer atmosphere. From this, it is possible to foresee whether soot formation is thermodynamically feasible.
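The Gibbs energy minimization step can be sketched with a constrained optimizer. The snippet below minimizes the dimensionless ideal-gas Gibbs energy of a reduced species set subject to element balances, using SciPy's SLSQP rather than the paper's GRG algorithm; the μ°/RT values are placeholder numbers chosen for illustration, not real thermochemical data.

```python
import numpy as np
from scipy.optimize import minimize

# reduced gas-phase species set; mu0_RT are PLACEHOLDER dimensionless standard
# chemical potentials (not real thermochemical data)
species = ["H2", "H2O", "CO", "CO2", "CH4", "C2H5OH"]
mu0_RT = np.array([0.0, -30.0, -20.0, -50.0, -10.0, -25.0])

A = np.array([[0, 0, 1, 1, 1, 2],   # C atoms per molecule
              [2, 2, 0, 0, 4, 6],   # H
              [0, 1, 1, 2, 0, 1]])  # O
feed = np.array([0.0, 3.0, 0.0, 0.0, 0.0, 1.0])   # toy feed: 3 H2O : 1 ethanol
b = A @ feed                                      # conserved element totals

def gibbs(n):     # G/RT of an ideal-gas mixture at the reference pressure
    return np.sum(n * (mu0_RT + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(6, 0.5), method="SLSQP",
               bounds=[(1e-8, None)] * 6,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(species, np.round(res.x, 4))))
```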
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
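The core DA loop, Gibbs-distributed soft assignments at temperature T, centroid re-estimation, then cooling, fits in a few lines. Below is a plain (not reduced-complexity) sketch on synthetic 2D data; the codebook size, cooling factor and iteration counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(10)
X = np.concatenate([rng.normal(m, 0.3, size=(200, 2))
                    for m in (-2.0, 0.0, 2.0)])      # 3-cluster toy source
K = 3
Y = X.mean(axis=0) + 0.01 * rng.normal(size=(K, 2))  # codevectors start together

T = 4.0
while T > 1e-3:
    for _ in range(20):                              # fixed-point updates at T
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        p = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)   # Gibbs assignments
        p /= p.sum(1, keepdims=True)
        Y = (p[:, :, None] * X[:, None, :]).sum(0) / p.sum(0)[:, None]
    T *= 0.8                                         # cooling schedule
print("final codevectors:\n", np.round(Y, 3))
```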
Ergodic time-reversible chaos for Gibbs' canonical oscillator
International Nuclear Information System (INIS)
Hoover, William Graham; Sprott, Julien Clinton; Patra, Puneet Kumar
2015-01-01
Nosé's pioneering 1984 work inspired a variety of time-reversible deterministic thermostats. Though several groups have developed successful doubly-thermostated models, single-thermostat models have failed to generate Gibbs' canonical distribution for the one-dimensional harmonic oscillator. A 2001 doubly-thermostated model, claimed to be ergodic, has a singly-thermostated version. Though neither of these models is ergodic, this work has suggested a successful route toward singly-thermostated ergodicity. We illustrate both ergodicity and its lack for these models using phase-space cross sections and Lyapunov instability as diagnostic tools. - Highlights: • We develop cross-section and Lyapunov methods for diagnosing ergodicity. • We apply these methods to several thermostatted-oscillator problems. • We demonstrate the nonergodicity of previous work. • We find a novel family of ergodic thermostatted-oscillator problems.
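The singly-thermostated oscillator at issue can be integrated directly to see the problem. The sketch below uses the standard Nosé-Hoover equations with assumed unit temperature and thermostat mass; a trajectory started this way typically stays on a regular torus and fails to reproduce the canonical average <p²> = T.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nose-Hoover thermostatted harmonic oscillator:
#   dq/dt = p,  dp/dt = -q - zeta*p,  dzeta/dt = (p**2 - T)/Q
def nose_hoover(t, y, T=1.0, Q=1.0):
    q, p, zeta = y
    return [p, -q - zeta * p, (p**2 - T) / Q]

sol = solve_ivp(nose_hoover, (0.0, 1000.0), [1.0, 0.0, 0.0], max_step=0.01)
p = sol.y[1]
# an ergodic canonical trajectory would give <p^2> close to T = 1
print("<p^2> over the trajectory:", np.mean(p**2))
```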
Unifying hydrotropy under Gibbs phase rule.
Shimizu, Seishi; Matubayasi, Nobuyuki
2017-09-13
The task of elucidating the mechanism of solubility enhancement using hydrotropes has been hampered by the wide variety of phase behaviour that hydrotropes can exhibit, encompassing near-ideal aqueous solution, self-association, micelle formation, and micro-emulsions. Instead of taking a field guide or encyclopedic approach to classify hydrotropes into different molecular classes, we take a rational approach aiming at constructing a unified theory of hydrotropy based upon the first principles of statistical thermodynamics. Achieving this aim can be facilitated by the two key concepts: (1) the Gibbs phase rule as the basis of classifying the hydrotropes in terms of the degrees of freedom and the number of variables to modulate the solvation free energy; (2) the Kirkwood-Buff integrals to quantify the interactions between the species and their relative contributions to the process of solubilization. We demonstrate that the application of the two key concepts can in principle be used to distinguish the different molecular scenarios at work under apparently similar solubility curves observed from experiments. In addition, a generalization of our previous approach to solutes beyond dilution reveals the unified mechanism of hydrotropy, driven by a strong solute-hydrotrope interaction which overcomes the apparent per-hydrotrope inefficiency due to hydrotrope self-clustering.
Directory of Open Access Journals (Sweden)
Zohreh Yousefi
2016-11-01
Introduction Small ruminants, especially native breeds, play an important socio-economic role in the livelihoods of a considerable part of the human population in the tropics. Integrated efforts in management and genetic improvement to enhance production are therefore of crucial importance. Knowledge of genetic variation and co-variation among traits is required both for the design of effective sheep breeding programs and for the accurate prediction of genetic progress from these programs. Body weight and growth traits are economically important in sheep production, especially in Iran, where lamb sale is the main source of income for sheep breeders and other products are of secondary importance. Although mutton is the most important source of protein in Iran, meat production from sheep does not cover the increasing consumer demand, and increasing sheep numbers to raise meat production is limited by the low quality and quantity of forage on rangelands. Therefore, enhanced meat production should be achieved by selecting as next-generation parents the animals with maximum genetic merit. To design an efficient improvement program and genetic evaluation system that maximizes response to selection for economically important traits, accurate estimates of the genetic parameters and of the genetic relationships between the traits are necessary. Studies of various sheep breeds have shown that both direct and maternal genetic influences are important for lamb growth. When growth traits are included in the breeding goal, both direct and maternal genetic effects should be taken into account in order to achieve optimum genetic progress. The objective of this study was to estimate the variance components and heritability for growth traits by fitting six animal models in the Sangsari sheep using Gibbs sampling. Material and Method Sangsari is a fat-tailed and relatively small sized breed of sheep
Uncovering Transcriptional Regulatory Networks by Sparse Bayesian Factor Model
Directory of Open Access Journals (Sweden)
Qi Yuan(Alan
2010-01-01
The problem of uncovering transcriptional regulation by transcription factors (TFs) based on microarray data is considered. A novel Bayesian sparse correlated rectified factor model (BSCRFM) is proposed that models the unknown TF protein-level activity, the correlated regulations between TFs, and the sparse nature of TF-regulated genes. The model admits prior knowledge from existing databases regarding TF-regulated target genes via a sparse prior, and through a developed Gibbs sampling algorithm a context-specific transcriptional regulatory network specific to the experimental condition of the microarray data can be obtained. The proposed model and the Gibbs sampling algorithm were evaluated on simulated systems, and the results demonstrated the validity and effectiveness of the proposed approach. The proposed model was then applied to breast cancer microarray data from patients with estrogen receptor positive (ER+) and estrogen receptor negative (ER-) status, respectively.
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G
2016-06-01
This Letter presents a novel, computationally efficient interpolation method optimised for electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat were detected; here they are utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with fundamental frequencies from 0.05 to 0.7 Hz and is also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
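The essence of such a method fits in a few lines: interpolate a baseline through the detected isoelectric points and subtract it. The sketch below uses a single linear segment between anchors (the Letter further segments each interval); function and variable names are illustrative:

    import numpy as np

    def remove_baseline(signal, anchor_idx):
        # anchor_idx: increasing sample indices of detected isoelectric points
        # (e.g. three per heartbeat); the baseline is piecewise linear through them.
        t = np.arange(len(signal))
        baseline = np.interp(t, anchor_idx, signal[anchor_idx])
        return signal - baseline

    # Example: a 0.1 Hz drift on an otherwise flat signal, anchors every 250 samples.
    fs, n = 500, 5000
    drift = 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(n) / fs)
    anchors = np.arange(0, n, 250)
    residual = remove_baseline(drift, anchors)   # RMS of the residual measures the error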
Bayesian Noise Estimation for Non-ideal Cosmic Microwave Background Experiments
Wehus, I. K.; Næss, S. K.; Eriksen, H. K.
2012-03-01
We describe a Bayesian framework for estimating the time-domain noise covariance of cosmic microwave background (CMB) observations, typically parameterized in terms of a 1/f frequency profile. This framework is based on the Gibbs sampling algorithm, which allows for exact marginalization over nuisance parameters through conditional probability distributions. In this paper, we implement support for gaps in the data streams and marginalization over fixed time-domain templates, and also outline how to marginalize over confusion from CMB fluctuations, which may be important for high signal-to-noise experiments. As a by-product of the method, we obtain proper constrained realizations, which themselves can be useful for map making. To validate the algorithm, we demonstrate that the reconstructed noise parameters and corresponding uncertainties are unbiased using simulated data. The CPU time required to process a single data stream of 100,000 samples with 1000 samples removed by gaps is 3 s if only the maximum posterior parameters are required, and 21 s if one also wants to obtain the corresponding uncertainties by Gibbs sampling.
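As a rough sketch of one building block, the 1/f noise profile and a grid-based draw from one conditional distribution (the kind of step a Gibbs sampler chains over its parameters) might look as follows; the data are assumed already Fourier-transformed, the priors flat, and the gap and template machinery of the paper is omitted:

    import numpy as np

    def one_over_f_psd(f, sigma2, fknee, alpha):
        # 1/f noise profile: P(f) = sigma2 * (1 + (fknee / f)**alpha); f must be positive.
        return sigma2 * (1.0 + (fknee / f) ** alpha)

    def loglike(d_f, f, sigma2, fknee, alpha):
        # Gaussian log-likelihood of Fourier-domain data given the noise PSD.
        P = one_over_f_psd(f, sigma2, fknee, alpha)
        return -0.5 * np.sum(np.abs(d_f) ** 2 / P + np.log(P))

    def sample_fknee(d_f, f, grid, sigma2, alpha, rng):
        # One Gibbs step: draw fknee from its conditional, tabulated on a grid.
        logp = np.array([loglike(d_f, f, sigma2, g, alpha) for g in grid])
        p = np.exp(logp - logp.max())
        return rng.choice(grid, p=p / p.sum())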
DEFF Research Database (Denmark)
Ulstrup, Jens
1999-01-01
We discuss a simple model for the environmental reorganisation Gibbs free energy, E_r, in electron transfer between a metalloprotein and a small reaction partner. The protein is represented as a dielectric globule with low dielectric constant, the metal centres as conducting spheres, all embedded...
PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS
Prudencio, Ernesto; Cheung, Sai Hung
2012-01-01
In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.
International Nuclear Information System (INIS)
Kireev, A.A.; Pak, T.G.; Bezuglyj, V.D.
1996-01-01
Solubilities of KClO4, RbClO4, CsClO4, (CH3)4NClO4 and (C2H5)4NClO4 in water and water-acetone mixtures are determined by the method of isothermal saturation at 298.15 K. Dissociation constants of the alkali metal perchlorates are found by the conductometric method. Solubility products and standard Gibbs energies of transfer of the corresponding electrolytes from water into water-acetone solvents are calculated. The dependence of the transfer Gibbs energy on solvent composition is explained by preferential solvation of the cations by acetone molecules and of the anions by water molecules. The features of the tetraalkylammonium ions are explained by large changes in the energy of cavity formation for these ions
Standard molar Gibbs free energy of formation of URh3(s)
International Nuclear Information System (INIS)
Prasad, Rajendra; Sayi, Y.S.; Radhakrishna, J.; Yadav, C.S.; Shankaran, P.S.; Chhapru, G.C.
1992-01-01
Equilibrium partial pressures of CO(g) over the system (UO2(s) + C(s) + Rh(s) + URh3(s)) were measured in the temperature range 1327-1438 K. The standard molar Gibbs free energy of formation of URh3 (Δ_fG°_m) in this temperature range can be expressed as Δ_fG°_m(URh3, s, T) ± 3.0 (kJ/mol) = -348.165 + 0.03144 T(K). The second- and third-law enthalpies of formation, Δ_fH°_m(URh3, s, 298.15 K), are (-318.4 ± 3.0) and (298.3 ± 2.5) kJ/mol, respectively. (author). 7 refs., 3 tabs
Grazhdan, K. V.; Gamov, G. A.; Dushina, S. V.; Sharnin, V. A.
2012-11-01
Coefficients of the interphase distribution of nicotinic acid are determined in aqueous solution systems of ethanol-hexane and DMSO-hexane at 25.0 ± 0.1°C. They are used to calculate the Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide. The Gibbs energy values for the transfer of the molecular and zwitterionic forms of nicotinic acid are obtained by means of UV spectroscopy. The diametrically opposite effect of the composition of binary solvents on the transfer of the molecular and zwitterionic forms of nicotinic acid is noted.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
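To make the estimator idea concrete: with L discrete λ states and per-state potential energies U_l(x_t) saved along the trajectory, the Gibbs sampler draws λ from its conditional given the configuration, and the RBE averages those conditional probabilities rather than counting visits. A minimal sketch under these simplifying assumptions (illustrative names; free energies recovered only up to an additive constant):

    import numpy as np

    def lambda_conditional(U, beta):
        # p(lambda | x) over discrete alchemical states given energies U_l(x).
        w = np.exp(-beta * (U - U.min()))
        return w / w.sum()

    def rbe_free_energies(U_samples, beta):
        # U_samples: (T, L) array of U_l(x_t) for each sample t and state l.
        # The Rao-Blackwell estimator averages the conditional probabilities
        # p(lambda | x_t) over samples, instead of the noisier visit indicators.
        P = np.array([lambda_conditional(u, beta) for u in U_samples])
        return -np.log(P.mean(axis=0)) / beta

Averaging the conditional rather than the indicator is exactly what removes the extra Monte Carlo variance, which is why the RBE's variance is usually smaller than the empirical estimator's.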
Creating ensembles of oblique decision trees with evolutionary algorithms and sampling
Cantu-Paz, Erick (Oakland, CA); Kamath, Chandrika (Tracy, CA)
2006-06-13
A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.
Direct measurements of the Gibbs free energy of OH using a CW tunable laser
Killinger, D. K.; Wang, C. C.
1979-01-01
The paper describes an absorption measurement for determining the Gibbs free energy of OH generated in a mixture of water and oxygen vapor. These measurements afford a direct verification of the accuracy of thermochemical data of H2O at high temperatures and pressures. The results indicate that values for the heat capacity of H2O obtained through numerical computations are correct within an experimental uncertainty of 0.15 cal/mole K.
Vrugt, Jasper A.; Beven, Keith J.
2018-04-01
This essay illustrates some recent developments to the DiffeRential Evolution Adaptive Metropolis (DREAM) MATLAB toolbox of Vrugt (2016) to delineate and sample the behavioural solution space of set-theoretic likelihood functions used within the GLUE (Limits of Acceptability) framework (Beven and Binley, 1992, 2014; Beven and Freer, 2001; Beven, 2006). This work builds on the DREAM(ABC) algorithm of Sadegh and Vrugt (2014) and enhances significantly the accuracy and CPU-efficiency of Bayesian inference with GLUE. In particular it is shown how lack of adequate sampling in the model space might lead to unjustified model rejection.
Smith, J. A.; Froyd, K. D.; Toon, O. B.
2012-12-01
We construct tables of reaction enthalpies and entropies for the association reactions involving sulfuric acid vapor, water vapor, and the bisulfate ion. These tables are created from experimental measurements and quantum chemical calculations for molecular clusters and a classical thermodynamic model for larger clusters. These initial tables are not thermodynamically consistent. For example, the Gibbs free energy of associating a cluster consisting of one acid molecule and two water molecules depends on the order in which the cluster was assembled: add two waters and then the acid or add an acid and a water and then the second water. We adjust the values within the tables using the method of Lagrange multipliers to minimize the adjustments and produce self-consistent Gibbs free energy surfaces for the neutral clusters and the charged clusters. With the self-consistent Gibbs free energy surfaces, we calculate size distributions of neutral and charged clusters for a variety of atmospheric conditions. Depending on the conditions, nucleation can be dominated by growth along the neutral channel or growth along the ion channel followed by ion-ion recombination.
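The consistency adjustment can be phrased as a small constrained least-squares problem: every thermodynamic cycle (two assembly orders reaching the same cluster) contributes a linear constraint that must sum to zero, and the tabulated values are perturbed minimally via Lagrange multipliers. A generic sketch under that reading, not the authors' exact weighting:

    import numpy as np

    def consistent_adjust(g, C):
        # Minimally adjust free-energy table g so each cycle constraint (row of C)
        # sums to zero: minimize ||x - g||^2 subject to C @ x = 0.
        lam = np.linalg.solve(C @ C.T, C @ g)   # Lagrange multipliers
        return g - C.T @ lam

    # Hypothetical example: two assembly paths to the same (acid, 2 water) cluster
    # must agree, i.e. g[0] + g[1] == g[2] + g[3] after adjustment.
    g = np.array([-5.1, -3.2, -4.8, -3.7])
    C = np.array([[1.0, 1.0, -1.0, -1.0]])
    g_adj = consistent_adjust(g, C)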
An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium
Palmer, Grant
1988-01-01
An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
DEFF Research Database (Denmark)
Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper
1999-01-01
The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...
Gibbs energies of formation of zircon (ZrSiO4), thorite (ThSiO4), and phenacite (Be2SiO4)
International Nuclear Information System (INIS)
Schuiling, R.D.; Vergouwen, L.; Rijst, H. van der
1976-01-01
Zircon, thorite, and phenacite are very refractory compounds which do not yield to solution calorimetry. In order to obtain approximate Gibbs energies of formation for these minerals, their reactions with a number of silica-undersaturated compounds (silicates or oxides) were studied. Conversely, baddeleyite (ZrO2), thorianite (ThO2), and bromellite (BeO) were reacted with the appropriate silicates. As the Gibbs energies of reaction of the undersaturated compounds with SiO2 are known, the experiments yield the following data: ΔG°(298 K, 1 bar) = -459.02 ± 1.04 kcal for zircon, -489.67 ± 1.04 kcal for thorite, and -480.20 ± 1.01 kcal for phenacite.
Generalized Gibbs distribution and energy localization in the semiclassical FPU problem
Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli
2011-03-01
We investigate dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.
Dynamics of macro-observables and space-time inhomogeneous Gibbs ensembles
International Nuclear Information System (INIS)
Lanz, L.; Lupieri, G.
1978-01-01
The relationship between the classical description of a macro-system and the quantum mechanics of its particles is considered within the framework recently developed by Ludwig. A procedure is given to define probability measures on the trajectory space of a macrosystem, which yields a statistical description of the dynamics of the macrosystem. The basic tool in this treatment is a new concept of space-time inhomogeneous Gibbs ensemble, defined in N-body quantum mechanics. In the Gaussian approximation of the probabilities, the results of Zubarev's theory based on the "nonequilibrium statistical operator" are recovered. The present "embedding" of the description of a macrosystem inside the N-body theory allows for a joint description of a macrosystem and a microsubsystem of it, and a "macroscopical" calculation of the statistical operator of the microsystem is indicated. (author)
Directory of Open Access Journals (Sweden)
Nezar Noor Al-Hebshi
2015-09-01
Background: The usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by the inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top-hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT and finally GGG. Unmatched reads were subjected to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.
2012-01-01
An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP is to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is significant not only for urbanization studies but also for global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to supervised classification. We developed global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover maps to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. In Europe alone, for example, there are 174 training sites whose sizes range from 4.5 km by 4.5 km to 8.1 km by 3.6 km, amounting to over six million training samples. Therefore, we developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar process with a looser threshold is applied to account for possible variance due to site differences. We do not perform the screening process across scenes because the scenes might vary due to
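A stripped-down sketch of the site-level screening, with z-scores standing in for whichever univariate test the project actually used (the multivariate step, e.g. a Mahalanobis-distance test, is omitted); thresholds and names are illustrative:

    import numpy as np

    def screen_group(G, z_thresh=3.0):
        # Univariate outlier removal within one 10%-imperviousness group.
        mu, sd = G.mean(axis=0), G.std(axis=0) + 1e-12
        keep = (np.abs((G - mu) / sd) < z_thresh).all(axis=1)
        return G[keep]

    def screen_site(X, frac):
        # X: (n, d) spectral features; frac: (n,) impervious fraction in [0, 1].
        bins = np.minimum((frac * 10).astype(int), 9)   # ten groups by 10% steps
        kept = [screen_group(X[bins == b]) for b in range(10) if (bins == b).any()]
        return np.vstack(kept)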
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus
2012-01-01
We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear...... inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework......(2) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional...
Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets
International Nuclear Information System (INIS)
Stanek, Jan; Kozminski, Wiktor
2010-01-01
Spectra obtained by application of multidimensional Fourier transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra of high dynamic range of peak intensities, preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.
Gibbs Measures Over Locally Tree-Like Graphs and Percolative Entropy Over Infinite Regular Trees
Austin, Tim; Podder, Moumanti
2018-03-01
Consider a statistical physical model on the d-regular infinite tree T_d described by a set of interactions Φ. Let G_n be a sequence of finite graphs with vertex sets V_n that locally converge to T_d. From Φ one can construct a sequence of corresponding models on the graphs G_n. Let μ_n be the resulting Gibbs measures. Here we assume that μ_n converges to some limiting Gibbs measure μ on T_d in the local weak* sense, and study the consequences of this convergence for the specific entropies |V_n|^(-1) H(μ_n). We show that the limit supremum of |V_n|^(-1) H(μ_n) is bounded above by the percolative entropy H_perc(μ), a function of μ itself, and that |V_n|^(-1) H(μ_n) actually converges to H_perc(μ) in case Φ exhibits strong spatial mixing on T_d. When it is known to exist, the limit of |V_n|^(-1) H(μ_n) is most commonly shown to be given by the Bethe ansatz. Percolative entropy gives a different formula, and we do not know how to connect it to the Bethe ansatz directly. We discuss a few examples of well-known models for which the latter result holds in the high temperature regime.
Understand your Algorithm: Drill Down to Sample Visualizations in Jupyter Notebooks
Mapes, B. E.; Ho, Y.; Cheedela, S. K.; McWhirter, J.
2017-12-01
Statistics are the currency of climate dynamics, but the space of all possible algorithms is fathomless - especially for 4-dimensional weather-resolving data that many "impact" variables depend on. Algorithms are designed on data samples, but how do you know if they measure what you expect when turned loose on Big Data? We will introduce the year-1 prototype of a 3-year scientist-led, NSF-supported, Unidata-quality software stack called DRILSDOWN (https://brianmapes.github.io/EarthCube-DRILSDOWN/) for automatically extracting, integrating, and visualizing multivariate 4D data samples. Based on a customizable "IDV bundle" of data sources, fields and displays supplied by the user, the system will teleport its space-time coordinates to fetch Cases of Interest (edge cases, typical cases, etc.) from large aggregated repositories. These standard displays can serve as backdrops to overlay with your value-added fields (such as derived quantities stored on a user's local disk). Fields can be readily pulled out of the visualization object for further processing in Python. The hope is that algorithms successfully tested in this visualization space will then be lifted out and added to automatic processing toolchains, lending confidence in the next round of processing, to seek the next Cases of Interest, in light of a user's statistical measures of "Interest". To log the scientific work done in this vein, the visualizations are wrapped in iPython-based Jupyter notebooks for rich, human-readable documentation (indeed, quasi-publication with formatted text, LaTex math, etc.). Such notebooks are readable and executable, with digital replicability and provenance built in. The entire digital object of a case study can be stored in a repository, where libraries of these Case Study Notebooks can be examined in a browser. Model data (the session topic) are of course especially convenient for this system, but observations of all sorts can also be brought in, overlain, and differenced or
An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
Directory of Open Access Journals (Sweden)
Yosra Marnissi
2018-02-01
In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or driven from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis-Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies stemming either from the prior or the likelihood, in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems, namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of signal corrupted by a mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous
Gibbs free energy of formation of UPb(s) compound
International Nuclear Information System (INIS)
Samui, Pradeep; Agarwal, Renu; Mishra, Ratikanta
2012-01-01
Liquid lead and lead-bismuth eutectic (LBE) are being explored as primary candidates for coolants in accelerator driven systems (ADS) and in advanced nuclear reactors due to their favorable thermo-physical and chemical properties. They are also proposed for use as a spallation neutron source in ADS reactor systems. However, corrosion of structural materials (i.e., steel) presents a critical challenge for the use of liquid lead or LBE in advanced nuclear reactors. The interactions of liquid lead or LBE with clad and fuel are of great scientific and technological importance in the development of advanced nuclear reactors. Clad failure/breach can lead to reaction of coolant elements with fuel components; thus the study of the fuel-coolant interaction of U with Pb/Bi is important. This paper deals with the determination of the Gibbs free energy of formation of the U-rich phase UPb in the Pb-U system, employing the Knudsen effusion mass-loss technique
ASTEM, Evaluation of Gibbs, Helmholtz and Saturation Line Function for Thermodynamics Calculation
International Nuclear Information System (INIS)
Moore, K.V.; Burgess, M.P.; Fuller, G.L.; Kaiser, A.H.; Jaeger, D.L.
1974-01-01
1 - Description of problem or function: ASTEM is a modular set of FORTRAN IV subroutines to evaluate the Gibbs, Helmholtz, and saturation line functions as published by the American Society of Mechanical Engineers (1967). Any thermodynamic quantity including derivative properties can be obtained from these routines by a user-supplied main program. PROPS is an auxiliary routine available for the IBM360 version which makes it easier to apply the ASTEM routines to power station models. 2 - Restrictions on the complexity of the problem: Unless re-dimensioned by the user, the highest derivative allowed is order 9. All arrays within ASTEM are one-dimensional to save storage area
ALGORITHM OF PREPARATION OF THE TRAINING SAMPLE USING 3D-FACE MODELING
Directory of Open Access Journals (Sweden)
D. I. Samal
2016-01-01
An algorithm for the preparation and sampling of a training set for a multiclass support vector machine (SVM) classifier is provided. The described approach is based on modeling possible changes in the facial features of the person to be recognized. Additional features such as shooting perspective, lighting conditions, and tilt angles were introduced to improve identification results. These synthetically generated changes affect classifier learning by expanding the range of possible variations of the initial image, so that a classifier trained on such an extended sample is better prepared to recognize unknown objects. Age, emotional expression, head turns, various lighting conditions, noise, and some combinations of the listed parameters are chosen as the key parameters for modeling. The third-party software 'FaceGen', which allows up to 150 parameters to be modeled and is available as a free demo version, is used for the 3D modeling. The SVM classifier was chosen to test the impact of the introduced modifications of the training sample. The preparation and preliminary processing of images contains the following steps: detection and localization of the face area in the image; estimation of the rotation and tilt angle; extension of the range of pixel brightness and equalization of the histogram to smooth the brightness and contrast characteristics of the processed images; scaling of the localized and processed face area; creation of a feature vector of the scaled and processed face image by principal component analysis (the NIPALS algorithm); and training of the multiclass SVM classifier. The provided algorithm for expanding the training sample is oriented toward practical use and allows the processed range of 2D photographs of persons to be expanded using 3D models, which positively affects identification results in a face recognition system. This approach makes it possible to compensate
An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors
Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel
2016-01-01
Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
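The energy-aware adaptation can be illustrated with a simple rule that interpolates the sampling interval between bounds according to the node's available energy; the published EASA is more sophisticated, so treat this purely as a sketch with made-up parameters:

    def next_sampling_interval(e_avail, e_min, e_max, t_min, t_max):
        # More stored/harvested energy -> shorter interval (faster sampling);
        # near the minimum energy level -> back off toward the longest interval.
        frac = (e_avail - e_min) / (e_max - e_min)
        frac = min(max(frac, 0.0), 1.0)
        return t_max - frac * (t_max - t_min)

    # e.g. battery at 60% of its usable range, intervals bounded by 10-120 s:
    interval = next_sampling_interval(0.6, 0.0, 1.0, 10.0, 120.0)  # -> 54 s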
Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.
2018-01-01
Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…
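For readers unfamiliar with the technique, a minimal Polya-Gamma Gibbs sampler for Bayesian logistic regression is sketched below. It assumes the third-party polyagamma package for the PG draws and a N(0, tau2·I) prior on the coefficients; it is a generic sketch of the augmentation scheme, not the study's structural equation model:

    import numpy as np
    from polyagamma import random_polyagamma  # assumed third-party PG sampler

    def pg_gibbs(X, y, n_iter=1000, tau2=100.0, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        beta = np.zeros(d)
        kappa = y - 0.5                   # Polson-Scott transformed response
        B_inv = np.eye(d) / tau2          # prior precision
        draws = np.empty((n_iter, d))
        for t in range(n_iter):
            # 1) augment: omega_i ~ PG(1, x_i' beta)
            omega = random_polyagamma(1, X @ beta, random_state=rng)
            # 2) beta is conditionally Gaussian given omega
            V = np.linalg.inv(X.T @ (X * omega[:, None]) + B_inv)
            m = V @ (X.T @ kappa)
            beta = rng.multivariate_normal(m, V)
            draws[t] = beta
        return draws

The appeal of the augmentation is that both conditionals are exact draws (no tuning), which is what makes it attractive inside larger Gibbs schemes for models with dichotomous indicators.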
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Application
Chambolle, Antonin; Ehrhardt, Matthias J.; Richtarik, Peter; Schönlieb, Carola-Bibiane
2017-01-01
We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual variable. The analysis is carried out for general convex-concave saddle point problems and problems that are either partially smooth / strongly convex or fully smooth / strongly convex. We perform the analysis for arbitrary samplings of dual variables, and obtain known deterministic results as a special case. Several variants of our stochastic method significantly outperform the deterministic variant on a variety of imaging tasks.
Directory of Open Access Journals (Sweden)
La Iglesia, A.
1989-12-01
The effect of grinding on the crystallinity, particle size, and solubility of two samples of kaolinite was studied. The standard Gibbs free energies of formation of the different ground samples were calculated from solubility measurements and show a direct relationship between the Gibbs free energy and the variation in particle size and crystallinity. Values of -3752.2 and -3776.4 kJ/mol were determined for ΔG°f of amorphous and crystalline kaolinite, respectively. A new thermodynamic equation that relates ΔG°f to particle size is proposed. This equation can probably be extended to other clay minerals.
Latella, Ivan; Pérez-Madrid, Agustín
2013-10-01
The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
International Nuclear Information System (INIS)
Hukushima, K; Iba, Y
2008-01-01
We develop a recently proposed importance-sampling Monte Carlo algorithm for sampling rare events and quenched variables in random disordered systems. We apply it to a two dimensional bond-diluted Ising model and study the Griffiths singularity which is considered to be due to the existence of rare large clusters. It is found that the distribution of the inverse susceptibility has an exponential tail down to the origin which is considered the consequence of the Griffiths singularity
Finite sample performance of the E-M algorithm for ranks data modelling
Directory of Open Access Journals (Sweden)
Angela D'Elia
2007-10-01
We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.
Suarez Diez, M.; Saccenti, E.
2015-01-01
We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations,
Wolf, Michael
2012-01-01
A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel and how to combine the capacitances read from the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate from the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
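The blending step described above is standard inverse-variance weighting of independent Gaussian estimates; a minimal sketch follows (channel values are made up):

    import numpy as np

    def fuse_estimates(means, variances):
        # Combine per-channel Gaussian estimates; low-variance channels dominate.
        w = 1.0 / np.asarray(variances)
        var = 1.0 / w.sum()
        mean = var * np.sum(w * np.asarray(means))
        return mean, var

    # Seven hypothetical channel estimates (grams) and their calibration variances:
    m, v = fuse_estimates([101, 98, 104, 99, 102, 97, 100],
                          [4.0, 2.5, 6.0, 3.0, 5.0, 2.0, 3.5])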
International Nuclear Information System (INIS)
Cirillo, Emilio N.M.; Louis, Pierre-Yves; Ruszel, Wioletta M.; Spitoni, Cristian
2014-01-01
Cellular Automata are discrete-time dynamical systems on a spatially extended discrete space which provide paradigmatic examples of nonlinear phenomena. Their stochastic generalizations, i.e., Probabilistic Cellular Automata (PCA), are discrete time Markov chains on lattice with finite single-cell states whose distinguishing feature is the parallel character of the updating rule. We study the ground states of the Hamiltonian and the low-temperature phase diagram of the related Gibbs measure naturally associated with a class of reversible PCA, called the cross PCA. In such a model the updating rule of a cell depends indeed only on the status of the five cells forming a cross centered at the original cell itself. In particular, it depends on the value of the center spin (self-interaction). The goal of the paper is that of investigating the role played by the self-interaction parameter in connection with the ground states of the Hamiltonian and the low-temperature phase diagram of the Gibbs measure associated with this particular PCA
International Nuclear Information System (INIS)
Tremaine, P.R.
1979-01-01
Methods for calculating high-temperature Gibbs free energies of mononuclear cations and anions from room-temperature data are reviewed. Emphasis is given to species required for oxide solubility calculations relevant to mass transport situations in the nuclear industry. Free energies predicted by each method are compared to selected values calculated from recently reported solubility studies and other literature data. Values for monatomic ions estimated using the assumption C̄°p(T) = C̄°p(298) agree best with experiment up to 423 K. From 423 K to 523 K, free energies from an electrostatic model for ion hydration are more accurate. Extrapolations for hydrolyzed species are limited by a lack of room-temperature entropy data, and expressions for estimating these entropies are discussed. (orig.)
Entropic sampling of simple polymer models within Wang-Landau algorithm
International Nuclear Information System (INIS)
Vorontsov-Velyaminov, P N; Volkov, N A; Yurchenko, A A
2004-01-01
In this paper we apply a new simulation technique proposed by Wang and Landau (WL) (2001 Phys. Rev. Lett. 86 2050) to sampling of three-dimensional lattice and continuous models of polymer chains. Distributions obtained by homogeneous (unconditional) random walks are compared with results of entropic sampling (ES) within the WL algorithm. While homogeneous sampling gives reliable results typically in the range of 4-5 orders of magnitude, the WL entropic sampling yields them in the range of 20-30 orders and even larger with comparable computer effort. A combination of homogeneous and WL sampling provides reliable data for events with probabilities down to 10^(-35). For the lattice model we consider both the athermal case (self-avoiding walks, SAWs) and the thermal case, when an energy is attributed to each contact between nonbonded monomers in a self-avoiding walk. For short chains the simulation results are checked by comparison with the exact data. In WL calculations for chain lengths up to N = 300, scaling relations for SAWs are well reproduced. In the thermal case the distribution over the number of contacts is obtained in the N-range up to N = 100 and the canonical averages - internal energy, heat capacity, excess canonical entropy, mean square end-to-end distance - are calculated as a result in a wide temperature range. The continuous model is studied in the athermal case. By sorting conformations of a continuous phantom freely jointed N-bonded chain with unit bond length over a stochastic variable, the minimum distance between nonbonded beads, we determine the probability distribution for the N-bonded chain with hard-sphere monomer units over its diameter a in the complete diameter range, 0 ≤ a ≤ 2, within a single ES run. This distribution provides us with the excess specific entropy for a set of diameters a in this range. Calculations were made for chain lengths up to N = 100 and results were extrapolated to N → ∞ for a in the range 0 ≤ a ≤ 1.25
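For reference, the WL iteration itself is compact. The sketch below estimates the density of states g(E) of a small 2D Ising lattice (a standard testbed, not the polymer models above): moves are accepted with probability min(1, g(E)/g(E')), ln g is incremented by ln f at each visited level, and f shrinks whenever the visit histogram is roughly flat; all parameters are illustrative:

    import numpy as np

    def wang_landau_ising(L=8, lnf_final=1e-6, flatness=0.8, seed=0):
        rng = np.random.default_rng(seed)
        s = rng.choice([-1, 1], size=(L, L))
        def bond_e(i, j):
            # Energy of site (i, j) with its four neighbors (periodic boundaries).
            return -s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                               + s[i, (j+1) % L] + s[i, (j-1) % L])
        E = sum(bond_e(i, j) for i in range(L) for j in range(L)) // 2
        levels = np.arange(-2 * L * L, 2 * L * L + 1, 4)
        idx = {e: k for k, e in enumerate(levels)}
        lng = np.zeros(len(levels))   # running estimate of ln g(E)
        H = np.zeros(len(levels))     # visit histogram
        lnf = 1.0
        while lnf > lnf_final:
            for _ in range(10000):
                i, j = rng.integers(L, size=2)
                dE = -2 * bond_e(i, j)                 # energy change of flipping (i, j)
                if np.log(rng.random()) < lng[idx[E]] - lng[idx[E + dE]]:
                    s[i, j] *= -1
                    E += dE
                lng[idx[E]] += lnf
                H[idx[E]] += 1
            visited = H[H > 0]
            if visited.min() > flatness * visited.mean():
                H[:] = 0                               # flat enough: refine f
                lnf /= 2.0
        return levels, lng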
A Note on Information-Directed Sampling and Thompson Sampling
Zhou, Li
2015-01-01
This note introduces three Bayesian-style multi-armed bandit algorithms: information-directed sampling, Thompson sampling, and generalized Thompson sampling. The goal is to give an intuitive explanation of these three algorithms and their regret bounds, and to provide some derivations that are omitted in the original papers.
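Of the three, Thompson sampling is the easiest to state in code; a minimal Bernoulli-bandit sketch with Beta posteriors follows (arm probabilities are made up, and information-directed sampling needs additional machinery not shown here):

    import numpy as np

    def thompson_bernoulli(true_p, n_rounds=10000, seed=0):
        rng = np.random.default_rng(seed)
        k = len(true_p)
        a, b = np.ones(k), np.ones(k)       # Beta(1, 1) priors per arm
        regret = 0.0
        for _ in range(n_rounds):
            theta = rng.beta(a, b)          # one posterior draw per arm
            arm = int(np.argmax(theta))     # play the arm that looks best in this draw
            reward = float(rng.random() < true_p[arm])
            a[arm] += reward                # conjugate posterior update
            b[arm] += 1.0 - reward
            regret += max(true_p) - true_p[arm]
        return a, b, regret

    posterior_a, posterior_b, total_regret = thompson_bernoulli([0.3, 0.5, 0.6])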
Estimation of Motion Vector Fields
DEFF Research Database (Denmark)
Larsen, Rasmus
1993-01-01
This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler.
Global, exact cosmic microwave background data analysis using Gibbs sampling
International Nuclear Information System (INIS)
Wandelt, Benjamin D.; Larson, David L.; Lakshminarayanan, Arun
2004-01-01
We describe an efficient and exact method that enables global Bayesian analysis of cosmic microwave background (CMB) data. The method reveals the joint posterior density (or likelihood for flat priors) of the power spectrum C_l and the CMB signal. Foregrounds and instrumental parameters can be simultaneously inferred from the data. The method allows the specification of a wide range of foreground priors. We explicitly show how to propagate the non-Gaussian dependency structure of the C_l posterior through to the posterior density of the parameters. If desired, the analysis can be coupled to theoretical (cosmological) priors and can yield the posterior density of cosmological parameter estimates directly from the time-ordered data. The method does not hinge on special assumptions about the survey geometry or noise properties, etc. It is based on a Monte Carlo approach and hence parallelizes trivially. No trace or determinant evaluations are necessary. The feasibility of this approach rests on the ability to solve the systems of linear equations which arise. These are of the same size and computational complexity as the map-making equations. We describe a preconditioned conjugate gradient technique that solves this problem and demonstrate in a numerical example that the computational time required for each Monte Carlo sample scales as n_p^(3/2) with the number of pixels n_p. We use our method to analyze the data from the Differential Microwave Radiometer on the Cosmic Background Explorer and explore the non-Gaussian joint posterior density of its C_l in several projections.
SeaWiFS technical report series. Volume 4: An analysis of GAC sampling algorithms. A case study
Yeh, Eueng-Nan (Editor); Hooker, Stanford B. (Editor); McCain, Charles R. (Editor); Fu, Gary (Editor)
1992-01-01
The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument will sample at approximately 1 km resolution at nadir, which will be broadcast for reception by real-time ground stations. The global data set, however, will consist of coarser 4 km data, which will be recorded and broadcast to the SeaWiFS Project for processing. Several algorithms for degrading the 1 km data to 4 km data are examined using imagery from the Coastal Zone Color Scanner (CZCS), in an effort to determine which algorithm best preserves the statistical characteristics of the derived products generated from the 1 km data. Of the algorithms tested, subsampling based on a fixed pixel within a 4 x 4 pixel array is judged to yield the most consistent results when compared to the 1 km data products.
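The degradation strategies compared in the report reduce to a few lines of array indexing; in the sketch below, the fixed pixel position and the random test image are hypothetical stand-ins for the report's CZCS scenes.

```python
import numpy as np

def subsample_fixed(img, k=4, pi=0, pj=0):
    """Keep one fixed pixel (pi, pj) from every k x k block."""
    return img[pi::k, pj::k]

def block_mean(img, k=4):
    """Alternative degradation: average each k x k block."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    blocks = img[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.mean(axis=(1, 3))

img = np.random.rand(512, 512)          # stand-in for a 1 km CZCS scene
gac = subsample_fixed(img)              # the strategy judged most consistent
print(gac.shape, block_mean(img).shape) # (128, 128) (128, 128)
```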
Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy
Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.
1993-01-01
The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases, have been measured for a Ni24Zr76 alloy. The data are used to calculate the Gibbs free-energy difference, ΔG(AC), between the real glass and the crystal under the assumption that the liquid-glass transition is second order. The result shows that ΔG(AC) increases continuously as the temperature decreases, in contrast to the ideal glass case, where ΔG(AC) is assumed to be independent of temperature.
Chinese handwriting recognition an algorithmic perspective
Su, Tonghua
2013-01-01
This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and the integrated segmentation-recognition strategy, are investigated, with the focus primarily on algorithms that have worked well in practice. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...
Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji
2017-09-30
GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to overcome limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.
Blandamer, Michael J.; Cullis, Paul M.; Soldi, L. Giorgio; Engberts, Jan B.F.N.; Kacperska, Anna; Os, Nico M. van
1995-01-01
Micellar colloids are distinguished from other colloids by their association-dissociation equilibrium in solution between monomers, counter-ions and micelles. According to classical thermodynamics, the standard Gibbs energy of formation of micelles at fixed temperature and pressure can be related to...
International Nuclear Information System (INIS)
Okumura, Hisashi
2010-01-01
I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules: the multibaric–multithermal algorithm and the partial multicanonical algorithm. The multibaric–multithermal algorithm realizes two-dimensional random walks not only in potential-energy space but also in volume space, so that the temperature and pressure dependence of biomolecules can both be discussed with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of the potential energy, so that the effort to determine a multicanonical weight factor can be concentrated on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)
A course on large deviations with an introduction to Gibbs measures
Rassoul-Agha, Firas
2015-01-01
This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one-semester or one-quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interaction potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities and then a second time through the ...
Fröhlich, Jürg; Knowles, Antti; Schlein, Benjamin; Sohinger, Vedran
2017-12-01
We prove that Gibbs measures of nonlinear Schrödinger equations arise as high-temperature limits of thermal states in many-body quantum mechanics. Our results hold for defocusing interactions in dimensions d = 1, 2, 3. The many-body quantum thermal states that we consider are the grand canonical ensemble for d = 1 and an appropriate modification of the grand canonical ensemble for d = 2, 3. In dimensions d = 2, 3, the Gibbs measures are supported on singular distributions, and a renormalization of the chemical potential is necessary. On the many-body quantum side, the need for renormalization is manifested by a rapid growth of the number of particles. We relate the original many-body quantum problem to a renormalized version obtained by solving a counterterm problem. Our proof is based on ideas from field theory, using a perturbative expansion in the interaction, organized by using a diagrammatic representation, and on Borel resummation of the resulting series.
An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network
Directory of Open Access Journals (Sweden)
Kai Hu
2013-01-01
A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to determine the weights in a backpropagation neural network (BPN), giving it better global optimization characteristics than traditional optimization algorithms. In this paper, we use GA-BPN for image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted-average algorithm recovers the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.
International Nuclear Information System (INIS)
Silverio, Sara C.; Rodriguez, Oscar; Teixeira, Jose A.; Macedo, Eugenia A.
2010-01-01
The Gibbs free energy of transfer of a suitable hydrophobic probe can be regarded as a measure of the relative hydrophobicity of the different phases. The methylene group (CH₂) can be considered hydrophobic, and is thus a suitable probe for hydrophobicity. In this work, the partition coefficients of a series of five dinitrophenylated amino acids were experimentally determined, at 23 °C, on three different tie-lines of the biphasic systems (UCON + K₂HPO₄), (UCON + potassium phosphate buffer, pH 7), (UCON + KH₂PO₄), (UCON + Na₂HPO₄), (UCON + sodium phosphate buffer, pH 7), and (UCON + NaH₂PO₄). The Gibbs free energies of transfer of CH₂ units were calculated from the partition coefficients and used to compare the relative hydrophobicity of the equilibrium phases. The largest relative hydrophobicity was found for the ATPS formed by dihydrogen phosphate salts.
A genetic algorithm-based framework for wavelength selection on sample categorization.
Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F
2017-08-01
In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra are partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique is applied to each interval to assign samples to the proper classes. In the next step, the selected intervals are refined through a genetic algorithm (GA), which identifies a limited number of wavelengths from the previously selected intervals so as to maximize classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checks, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to insert irrelevant wavelengths into the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
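A rough sketch of the GA refinement step, under stated assumptions (truncation selection, one-point crossover, bit-flip mutation, and KNN cross-validation as the fitness), follows; the authors' actual operators, parameters, and data differ, and the spectra below are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """KNN cross-validated accuracy using only the selected wavelengths."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=30, gens=40, p_mut=0.02):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.1        # sparse random masks
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[::-1][:pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut         # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(m, X, y) for m in population])
    return population[scores.argmax()]

# synthetic stand-in for an ATR-FTIR matrix: 60 samples x 200 wavelengths
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
print(ga_select(X, y).sum(), "wavelengths retained")
```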
Pethica, Brian A
2007-12-21
As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound, with a few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel-plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle has been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity. A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated that for any charged species, neither the difference in electrostatic potential nor the
Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
2017-06-01
Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The former extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.
Isham, M. A.
1992-01-01
Silicon carbide and silicon nitride are considered for application as structural materials and coatings in advanced propulsion systems, including nuclear thermal systems. Three-dimensional Gibbs free energy surfaces were constructed for reactions involving these materials in H2 and H2/H2O, with the free energy plotted as a function of temperature and pressure. Calculations used the definition of Gibbs free energy, with the spontaneity of reactions evaluated as a function of temperature and pressure. Silicon carbide decomposes to Si and CH4 in pure H2 and forms a SiO2 scale in a wet atmosphere. Silicon nitride remains stable under all conditions. There was no apparent difference in reaction thermodynamics between ideal and van der Waals treatments of the gaseous species.
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton-Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique, as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but they substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics
Demonstration and resolution of the Gibbs paradox of the first kind
International Nuclear Information System (INIS)
Peters, Hjalmar
2014-01-01
The Gibbs paradox of the first kind (GP1) refers to the false increase in entropy which, in statistical mechanics, is calculated from the process of combining two gas systems S1 and S2 consisting of distinguishable particles. Presented in a somewhat modified form, the GP1 manifests as a contradiction to the second law of thermodynamics. Contrary to popular belief, this contradiction affects not only classical but also quantum statistical mechanics. This paper resolves the GP1 by considering two effects. (i) The uncertainty about which particles are located in S1 and which in S2 contributes to the entropies of S1 and S2. (ii) S1 and S2 are correlated by the fact that if a certain particle is located in one system, it cannot be located in the other. As a consequence, the entropy of the total system consisting of S1 and S2 is not the sum of the entropies of S1 and S2. (paper)
The osmotic second virial coefficient and the Gibbs-McMillan-Mayer framework
DEFF Research Database (Denmark)
Mollerup, J.M.; Breil, Martin Peter
2009-01-01
The osmotic second virial coefficient is a key parameter in light scattering, protein crystallisation, self-interaction chromatography, and osmometry. The interpretation of the osmotic second virial coefficient depends on the set of independent variables. This commonly includes the independent...... variables associated with the Kirkwood-Buff, the McMillan-Mayer, and the Lewis-Randall solution theories. In this paper we analyse the osmotic second virial coefficient using a Gibbs-McMillan-Mayer framework which is similar to the McMillan-Mayer framework with the exception that pressure rather than volume...... is an independent variable. A Taylor expansion is applied to the osmotic pressure of a solution where one of the solutes is a small molecule, a salt for instance, that equilibrates between the two phases. Other solutes are retained. Solvents are small molecules that equilibrate between the two phases...
de Bildt, Annelies; Sytema, Sjoerd; Zander, Eric; Bölte, Sven; Sturm, Harald; Yirmiya, Nurit; Yaari, Maya; Charman, Tony; Salomone, Erica; LeCouteur, Ann; Green, Jonathan; Bedia, Ricardo Canal; Primo, Patricia García; van Daalen, Emma; de Jonge, Maretha V.; Guðmundsdóttir, Emilía; Jóhannsdóttir, Sigurrós; Raleva, Marija; Boskovska, Meri; Rogé, Bernadette; Baduel, Sophie; Moilanen, Irma; Yliherva, Anneli; Buitelaar, Jan; Oosterling, Iris J.
2015-01-01
The current study aimed to investigate the Autism Diagnostic Interview-Revised (ADI-R) algorithms for toddlers and young preschoolers (Kim and Lord, "J Autism Dev Disord" 42(1):82-93, 2012) in a non-US sample from ten sites in nine countries (n = 1,104). The construct validity indicated a good fit of the algorithms. The diagnostic…
Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation
DEFF Research Database (Denmark)
Brouwer, Thomas; Frellsen, Jes; Liò, Pietro
We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation. We show that our approach achieves faster convergence per iteration and timestep (wall-clock) than Gibbs sampling and non-probabilistic approaches, and does not require additional...... samples to estimate the posterior. We show that in particular for matrix tri-factorisation convergence is difficult, but our variational Bayesian approach offers a fast solution, allowing the tri-factorisation approach to be used more effectively....
International Nuclear Information System (INIS)
Xu, H.; Wang, Y.
1999-01-01
In this letter, a linear free energy relationship is used to predict the Gibbs free energies of formation of crystalline phases of the pyrochlore and zirconolite families with stoichiometry MCaTi₂O₇ (or CaMTi₂O₇) from the known thermodynamic properties of aqueous tetravalent cations (M⁴⁺). The linear free energy relationship for tetravalent cations is expressed as ΔG⁰_{f,MvX} = a_{MvX} ΔG⁰_{n,M⁴⁺} + b_{MvX} + β_{MvX} r_{M⁴⁺}, where the coefficients a_{MvX}, b_{MvX}, and β_{MvX} characterize a particular structural family MvX, r_{M⁴⁺} is the ionic radius of the M⁴⁺ cation, ΔG⁰_{f,MvX} is the standard Gibbs free energy of formation of MvX, and ΔG⁰_{n,M⁴⁺} is the standard non-solvation energy of the cation M⁴⁺. The coefficients for the structural family of zirconolite with the stoichiometry M⁴⁺CaTi₂O₇ are estimated to be a_{MvX} = 0.5717, b_{MvX} = -4284.67 kJ/mol, and β_{MvX} = 27.2 kJ/(mol nm). The coefficients for the structural family of pyrochlore with the stoichiometry M⁴⁺CaTi₂O₇ are estimated to be a_{MvX} = 0.5717, b_{MvX} = -4174.25 kJ/mol, and β_{MvX} = 13.4 kJ/(mol nm). Using the linear free energy relationship, the Gibbs free energies of formation of various zirconolite and pyrochlore phases are calculated. (orig.)
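The fitted relationship translates directly into a small calculator; the coefficients below are those reported in the abstract, while the example input values for the non-solvation energy and ionic radius are hypothetical placeholders.

```python
# Coefficients reported above for the M(4+)CaTi2O7 stoichiometries
COEFFS = {
    "zirconolite": dict(a=0.5717, b=-4284.67, beta=27.2),
    "pyrochlore":  dict(a=0.5717, b=-4174.25, beta=13.4),
}

def gibbs_formation(dG_n, r_nm, family):
    """dG_f = a * dG_n + b + beta * r   (kJ/mol; radius r in nm)."""
    c = COEFFS[family]
    return c["a"] * dG_n + c["b"] + c["beta"] * r_nm

# hypothetical inputs: non-solvation energy (kJ/mol) and ionic radius (nm)
print(gibbs_formation(-500.0, 0.072, "pyrochlore"))
```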
Evard, Margarita E.; Volkov, Aleksandr E.; Belyaev, Fedor S.; Ignatova, Anna D.
2018-05-01
The choice of Gibbs' potential for microstructural modeling of the FCC ↔ HCP martensitic transformation in FeMn-based shape memory alloys is discussed. The threefold symmetry of the HCP phase is taken into account when specifying the internal variables that characterize the volume fractions of the martensite variants. Constraints imposed on the model constants by thermodynamic equilibrium conditions are formulated.
Directory of Open Access Journals (Sweden)
Daniel Vasiliu
Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
Directory of Open Access Journals (Sweden)
Zhihua Wang
2017-05-01
Crude oil is generally produced together with water, and the water cut of producing oil wells typically increases over their lifetime, so the formation of emulsions during oil production is inevitable. The formation of emulsions presents a costly problem, particularly in surface processing, both in terms of transportation energy consumption and separation efficiency. To deal with the production and operational problems related to crude oil emulsions, and especially to ensure the separation and transportation of crude oil-water systems, it is necessary to better understand the emulsification mechanism of crude oil under different conditions in terms of bulk and interfacial properties. The concept of shearing energy is introduced in this study to reveal the driving force for emulsification. The relationship between shearing stress in the flow field and interfacial tension (IFT) is established, and a correlation between shearing energy and interfacial Gibbs free energy is developed. The potential of the developed correlation model is validated using experimental and field data on emulsification behavior. It is also shown how droplet deformation can be predicted from a random deformation degree and orientation angle. The results indicate that shearing energy, as the energy produced by shearing stress acting in the flow field, is the driving force activating emulsification. The deformation degree and orientation angle of a dispersed-phase droplet are associated with the interfacial properties, the rheological properties and the degree of turbulence experienced. The correlation between shearing stress and IFT can be quantified if droplet deformation degree vs. droplet orientation angle data are available. When the water cut is close to the inversion point of a waxy crude oil emulsion, the interfacial Gibbs free energy change decreases and the shearing energy increases. This feature is also present in the special regions where
International Nuclear Information System (INIS)
Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.
1979-01-01
A thermodynamic approach based on the minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl₃-H₂ or BBr₃-H₂ mixtures on various types of substrates (at 1000 K < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus. (Auth.)
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting the resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
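For the easy case noted above, where all samples are contemporaneous and there are no constraints, the count of fully ranked trees has a classical closed form, since the stage with k lineages permits comb(k, 2) coalescences; the sketch below computes that count and is not the paper's constraint-tree or sampled-ancestor algorithm.

```python
from math import comb

def labelled_histories(n):
    """Number of fully ranked binary trees on n contemporaneous samples:
    going back in time, the stage with k lineages allows comb(k, 2) merges."""
    count = 1
    for k in range(2, n + 1):
        count *= comb(k, 2)
    return count            # equals n! * (n-1)! / 2**(n-1)

print([labelled_histories(n) for n in range(2, 6)])   # [1, 3, 18, 180]
```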
Efficiency of alternative MCMC strategies illustrated using the reaction norm model
DEFF Research Database (Denmark)
Shariati, Mohammad Mahdi; Sørensen, D.
2008-01-01
The Markov chain Monte Carlo (MCMC) strategy provides remarkable flexibility for fitting complex hierarchical models. However, when parameters are highly correlated in their posterior distributions and their number is large, a particular MCMC algorithm may perform poorly and the resulting...... in the low correlation scenario where SG was the best strategy. The two LH proposals could not compete with any of the Gibbs sampling algorithms. In this study it was not possible to find an MCMC strategy that performs optimally across the range of target distributions and across all possible values...
International Nuclear Information System (INIS)
Xiong Shiyun; Qi Weihong; Huang Baiyun; Wang Mingpu; Li Yejun
2010-01-01
The Debye model of the Helmholtz free energy for bulk material is generalized to a Gibbs free energy (GFE) model for nanomaterials, and a shape factor is introduced to characterize the shape effect on the GFE. The structural transitions of Ti and Zr nanoparticles are predicted based on the GFE. It is further found that the GFE decreases with the shape factor and increases with decreasing particle size. At fixed shape factor, the critical size of the structural transformation for nanoparticles increases with temperature. At a given temperature, the critical size increases with the shape factor. The present predictions agree well with experimental values.
International Nuclear Information System (INIS)
Jebri, Sonia; Khattech, Ismail; Jemal, Mohamed
2017-01-01
Highlights: • A-type carbonate hydroxyapatites with 0 ⩽ x ⩽ 1 were prepared and characterized by XRD, IR spectroscopy and CHN analysis. • The heat of solution was measured in 9 wt% HNO₃ using an isoperibol calorimeter. • The standard enthalpy of formation was determined by a thermochemical cycle. • The Gibbs free energy was deduced by estimating the standard entropy of formation. • Carbonatation increases the stability up to x = 0.6. - Abstract: 'A'-type carbonate phosphocalcium hydroxyapatites, with the general formula Ca₁₀(PO₄)₆(OH)₂₋₂ₓ(CO₃)ₓ and 0 ⩽ x ⩽ 1, were prepared by solid-gas reaction in the temperature range 700-1000 °C. The obtained materials were characterized by X-ray diffraction and infrared spectroscopy. The carbonate content was determined by C-H-N analysis. The heat of solution of these products was measured at T = 298 K in 9 wt% nitric acid solution using an isoperibol calorimeter. A thermochemical cycle was proposed and complementary experiments were performed in order to obtain the standard enthalpies of formation of these phosphates. The results were compared to those previously obtained for apatites containing strontium and barium, and show a decrease with the amount of carbonate introduced into the lattice. This quantity becomes more negative as the substitution ratio increases. Estimation of the entropy of formation allowed the determination of the standard Gibbs free energies of formation of these compounds. The study showed that the substitution of hydroxyl by carbonate ions contributes to the stabilisation of the apatite structure.
topicmodels: An R Package for Fitting Topic Models
Directory of Open Access Journals (Sweden)
Bettina Grun
2011-05-01
Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
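As a rough illustration of the second interface, the Gibbs-sampling algorithm of Phan and co-authors is a collapsed Gibbs sampler for LDA; a minimal Python sketch of that sampler (not the R package's actual implementation, and with arbitrary hyperparameters) follows.

```python
import numpy as np

def lda_gibbs(docs, V, K=5, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA; docs is a list of word-id lists."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))                 # document-topic counts
    nkw = np.zeros((K, V))                         # topic-word counts
    nk = np.zeros(K)                               # tokens per topic
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):
        for w, t in zip(doc, z[d]):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                        # unassign current token
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                # full conditional p(z = k | everything else)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                t = rng.choice(K, p=p / p.sum())
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    return ndk, nkw

docs = [[0, 1, 2, 1], [3, 4, 3], [0, 2, 4, 4]]     # toy corpus, V = 5
ndk, nkw = lda_gibbs(docs, V=5, K=2, iters=50)
```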
Starr, Francis W.; Douglas, Jack F.; Sastry, Srikanth
2013-01-01
We carefully examine common measures of dynamical heterogeneity for a model polymer melt and test how these scales compare with those hypothesized by the Adam and Gibbs (AG) and random first-order transition (RFOT) theories of relaxation in glass-forming liquids. To this end, we first analyze clusters of highly mobile particles, the string-like collective motion of these mobile particles, and clusters of relatively low mobility. We show that the time scale of the high-mobility clusters and stri...
Energy Technology Data Exchange (ETDEWEB)
Bloch, Claude; Dominicis, Cyrano de [Commissariat a l' energie atomique et aux energies alternatives - CEA, Centre d' Etudes Nucleaires de Saclay, Gif-sur-Yvette (France)
1959-07-01
Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero-density limit, it reduces to the Beth-Uhlenbeck expression of the second virial coefficient. For a system of fermions in the zero-temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to arbitrary temperature. Reprint of a paper published in Nuclear Physics 10, 1959, p. 181-196.
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show, for the cases of truncated polynomial spline or B-spline models of degree equal to one, that the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
Level-0 trigger algorithms for the ALICE PHOS detector
Wang, D; Wang, Y P; Huang, G M; Kral, J; Yin, Z B; Zhou, D C; Zhang, F; Ullaland, K; Muller, H; Liu, L J
2011-01-01
The PHOS level-0 trigger provides a minimum bias trigger for p-p collisions and information for a level-1 trigger in both p-p and Pb-Pb collisions. There are two level-0 trigger generating algorithms under consideration: the Direct Comparison algorithm and the Weighted Sum algorithm. In order to study the trigger algorithms via simulation, a simplified equivalent model is extracted from the trigger electronics to derive the waveform function of the Analog-or signal as input to the trigger algorithms. Simulations showed that the Weighted Sum algorithm can achieve higher trigger efficiency and provide more precise single-channel energy information than the Direct Comparison algorithm. An energy resolution of 9.75 MeV can be achieved with the Weighted Sum algorithm at a sampling rate of 40 Msps (mega samples per second) at 1 GeV. The timing performance at a sampling rate of 40 Msps with the Weighted Sum algorithm is better than that at a sampling rate of 20 Msps with either algorithm. The level-0 trigger can be delivered...
Naumov, V. V.; Isaeva, V. A.; Kuzina, E. N.; Sharnin, V. A.
2012-12-01
Gibbs energies for the transfer of glycylglycine and glycylglycinate ions from water to water-dimethylsulfoxide solvents are determined from the distribution of the substances between immiscible phases in the composition range of 0.00 to 0.20 molar fractions of DMSO at 298.15 K. It is shown that as the concentration of the nonaqueous component in solution rises, the solvation of the dipeptide and its anion weakens, due mainly to the destabilization of the carboxyl group.
He, Ping
2012-01-01
The long-standing puzzle surrounding the statistical mechanics of self-gravitating systems has not yet been solved successfully. We formulate a systematic theoretical framework of entropy-based statistical mechanics for spherically symmetric collisionless self-gravitating systems. We use an approach that is very different from that of the conventional statistical mechanics of short-range interaction systems. We demonstrate that the equilibrium states of self-gravitating systems consist of both mechanical and statistical equilibria, with the former characterized by a series of velocity-moment equations and the latter by statistical equilibrium equations, which should be derived from the entropy principle. The velocity-moment equations of all orders are derived from the steady-state collisionless Boltzmann equation. We point out that the ergodicity is invalid for the whole self-gravitating system, but it can be re-established locally. Based on the local ergodicity, using Fermi-Dirac-like statistics, with the non-degenerate condition and the spatial independence of the local microstates, we rederive the Boltzmann-Gibbs entropy. This is consistent with the validity of the collisionless Boltzmann equation, and should be the correct entropy form for collisionless self-gravitating systems. Apart from the usual constraints of mass and energy conservation, we demonstrate that the series of moment or virialization equations must be included as additional constraints on the entropy functional when performing the variational calculus; this is an extension to the original prescription by White & Narayan. Any possible velocity distribution can be produced by the statistical-mechanical approach that we have developed with the extended Boltzmann-Gibbs/White-Narayan statistics. Finally, we discuss the questions of negative specific heat and ensemble inequivalence for self-gravitating systems.
Using the Perceptron Algorithm to Find Consistent Hypotheses
Anthony, M.; Shawe-Taylor, J.
1993-01-01
The perceptron learning algorithm yields, quite naturally, an algorithm for finding a linearly separable Boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that this algorithm is not efficient in general.
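A minimal sketch of the algorithm discussed: cycle the perceptron update rule through the sample until no example is misclassified. Termination is guaranteed only for linearly separable samples; the AND-function data below is purely illustrative.

```python
def perceptron_consistent(samples):
    """Cycle the perceptron rule until every example is classified correctly.

    samples: list of (x, y) pairs, x a tuple of numbers, y in {-1, +1}.
    Terminates if and only if the sample is linearly separable
    (with an augmented bias input).
    """
    dim = len(samples[0][0]) + 1                   # +1 for the bias weight
    w = [0.0] * dim
    done = False
    while not done:
        done = True
        for x, y in samples:
            xa = (1.0,) + tuple(x)                 # augment with constant 1
            if y * sum(wi * xi for wi, xi in zip(w, xa)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, xa)]
                done = False
    return w

# the AND function on {0,1}^2 is linearly separable
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
print(perceptron_consistent(data))
```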
Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)
Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene
2012-07-01
We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
Duque, Michel; Andraca, Adriana; Goldstein, Patricia; del Castillo, Luis Felipe
2018-04-01
The Adam-Gibbs equation has been used for more than five decades, yet a question remains unanswered concerning the temperature dependence of the chemical potential it includes. It is now well established that the behavior of fragile glass formers depends on the temperature region in which they are studied: transport coefficients change due to the appearance of heterogeneity in the liquid as it is supercooled. Using the different forms for the logarithmic shift factor and the form of the configurational entropy, we evaluate this temperature dependence and present a discussion of our results.
ExSample. A library for sampling Sudakov-type distributions
Energy Technology Data Exchange (ETDEWEB)
Plaetzer, Simon
2011-08-15
Sudakov-type distributions are at the heart of generating radiation in parton showers as well as contemporary NLO matching algorithms along the lines of the POWHEG algorithm. In this paper, the C++ library ExSample is introduced, which implements adaptive sampling of Sudakov-type distributions for splitting kernels which are in general only known numerically. Besides the evolution variable, the splitting kernels can depend on an arbitrary number of other degrees of freedom to be sampled, and any number of further parameters which are fixed on an event-by-event basis. (orig.)
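The standard way to sample a Sudakov-type distribution whose kernel is known only numerically is the veto algorithm; the sketch below shows that generic idea under stated assumptions (an analytically invertible overestimate, a toy 1/t kernel), not ExSample's adaptive implementation.

```python
import math, random

def veto(f, g, G, G_inv, t_start, t_min):
    """Return the next emission scale below t_start, or None if none occurs.

    f: true splitting kernel, possibly known only numerically, with f <= g
    g: analytically integrable overestimate; G is its primitive, G_inv the
       inverse of G.  The target density is f(t) * exp(-int_t^t_start f).
    """
    t = t_start
    while True:
        t = G_inv(G(t) + math.log(random.random()))   # trial from g's Sudakov
        if t < t_min:
            return None                               # evolution terminated
        if random.random() < f(t) / g(t):             # veto corrects g -> f
            return t

# toy kernel f(t) = 0.3/t with overestimate g(t) = 0.5/t
C = 0.5
t = veto(lambda t: 0.3 / t, lambda t: C / t,
         lambda t: C * math.log(t), lambda y: math.exp(y / C),
         t_start=100.0, t_min=1.0)
print(t)
```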
International Nuclear Information System (INIS)
Lima da Silva, Aline; Heck, Nestor Cesar
2003-01-01
Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations for selected reactions. This procedure, however, is useful only for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. Usually a computer code is developed to solve it, which requires considerable testing and debugging effort. In this work, a simple method to predict equilibrium compositions in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be used effectively to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr₂O₃(s) and FeO·Cr₂O₃(s). The atmosphere consists of O₂, CO, and CO₂ constituents. The liquid iron...
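The same direct-minimization idea can be sketched outside a spreadsheet, for example with scipy in place of the Solver tool; the toy C-O gas mixture and the standard Gibbs energies below are illustrative assumptions, not the paper's Fe-Cr-O-C-Ni system.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314e-3, 1200.0                    # kJ/(mol K), K
species = ["CO", "CO2", "O2"]              # toy gas-phase system
g0 = np.array([-328.0, -396.0, 0.0])       # illustrative dG_f values, kJ/mol
A = np.array([[1, 1, 0],                   # carbon balance
              [1, 2, 2]])                  # oxygen balance
b = np.array([1.0, 1.5])                   # total moles of C and O

def gibbs(n):
    """Dimensionless Gibbs energy of an ideal-gas mixture at 1 atm."""
    n = np.maximum(n, 1e-12)               # keep the logarithms finite
    return float(np.sum(n * (g0 / (R * T) + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.full(3, 0.5), method="SLSQP",
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(species, res.x)))
```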
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
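The Metropolis-within-Gibbs scheme named above is a generic construction; the sketch below shows it on a toy two-dimensional Gaussian target (not the paper's sparse-image sampler), with an arbitrary step size and iteration count.

```python
import math, random

def metropolis_within_gibbs(log_post, x0, step=0.5, n_iter=5000):
    """Sweep the coordinates, updating each with a random-walk
    Metropolis step targeting its full conditional."""
    x = list(x0)
    chain = []
    for _ in range(n_iter):
        for j in range(len(x)):
            prop = x[:]
            prop[j] += random.gauss(0.0, step)
            if math.log(random.random()) < log_post(prop) - log_post(x):
                x = prop                   # accept the coordinate move
        chain.append(x[:])
    return chain

# toy target: bivariate Gaussian with correlation 0.8
def log_post(v):
    x, y = v
    return -0.5 * (x * x - 1.6 * x * y + y * y) / (1.0 - 0.8 ** 2)

chain = metropolis_within_gibbs(log_post, [0.0, 0.0])
print(sum(p[0] for p in chain) / len(chain))   # should be near 0
```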
International Nuclear Information System (INIS)
Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin; Wang, Shuang
2015-01-01
To reduce the large computational cost of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method called AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented, so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth-contact fatigue reliability analysis.
Performance of Jet Algorithms in CMS
CMS Collaboration
The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: fast-kT, SISCone, Midpoint, and Iterative Cone. We present several studies comparing the performance of these algorithms using QCD dijet and ttbar Monte Carlo samples. We specifically observe that the SISCone algorithm performs equal to or better than the Midpoint algorithm in all presented studies and propose that SISCone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared and collinear safety to all orders of perturbative QCD. We furthermore encourage the use of the fast-kT algorithm, which is found to perform as well as any other algorithm under study, and features dramatically reduc...
Czech Academy of Sciences Publication Activity Database
Langmaier, Jan; Záliš, Stanislav; Samec, Zdeněk; Bovtun, Viktor; Kempa, Martin
2013-01-01
Roč. 87, JAN 2013 (2013), s. 591-598 ISSN 0013-4686 R&D Projects: GA ČR GAP206/11/0707 Institutional support: RVO:61388955 ; RVO:68378271 Keywords: ionic liquids * cyclic voltammetry * standard Gibbs energy of ion transfer Subject RIV: CG - Electrochemistry Impact factor: 4.086, year: 2013
Chemical Disequilibria and Sources of Gibbs Free Energy Inside Enceladus
Zolotov, M. Y.
2010-12-01
Non-photosynthetic organisms use chemical disequilibria in the environment to gain metabolic energy from enzyme-catalyzed oxidation-reduction (redox) reactions. The presence of carbon dioxide, ammonia, formaldehyde, methanol, methane and other hydrocarbons in the eruptive plume of Enceladus [1] implies diverse redox disequilibria in the interior. In the history of the moon, redox disequilibria could have been activated through melting of a volatile-rich ice and subsequent water-rock-organic interactions. Previous and/or present aqueous processes are consistent with the detection of NaCl and Na2CO3/NaHCO3-bearing grains emitted from Enceladus [2]. A low K/Na ratio in the grains [2] and a low upper limit for N2 in the plume [3] indicate low temperature (possibly enzymes if organisms were (are) present. The redox conditions in aqueous systems and the amounts of available Gibbs free energy should have been affected by the production, consumption and escape of hydrogen. Aqueous oxidation of minerals (Fe-Ni metal, Fe-Ni phosphides, etc.) accreted on Enceladus should have led to H2 production, which is consistent with H2 detection in the plume [1]. Numerical evaluations based on concentrations of plume gases [1] reveal sufficient energy sources available to support metabolically diverse life at a wide range of activities (a) of dissolved H2 (log aH2 from 0 to -10). Formaldehyde, carbon dioxide [cf. 4], HCN (if it is present), methanol, acetylene and other hydrocarbons have the potential to react with H2 to form methane. Aqueous hydrogenations of acetylene, HCN and formaldehyde to produce methanol are energetically favorable as well. Both favorable hydrogenation and hydration of HCN lead to formation of ammonia. Condensed organic species could also participate in redox reactions. Methane and ammonia are the final products of these putative redox transformations. Sulfates may not have formed in cold and/or short-term aqueous environments with a limited H2 escape. In contrast to
Mishra, Arabinda; Anderson, Adam W; Wu, Xi; Gore, John C; Ding, Zhaohua
2010-08-01
The purpose of this work is to design a neuronal fiber tracking algorithm more suitable for reconstructing fibers associated with functionally important regions of the human brain. Functional activations in the brain normally occur in gray matter regions, so the fibers bordering these regions are weakly myelinated, and conventional tractography methods perform poorly at tracing the fiber links between them. Lower fractional anisotropy in these regions makes it even more difficult to track the fibers in the presence of noise. In this work, the authors adopt a stochastic approach to reconstructing these fiber pathways based on a Bayesian regularization framework. To estimate the true fiber direction (propagation vector), the a priori and conditional probability density functions are calculated in advance and are modeled as multivariate normal. The variance of the estimated tensor element vector is associated with the uncertainty due to noise and partial volume averaging (PVA). An adaptive and multiple sampling of the estimated tensor element vector, as a function of the pre-estimated variance, overcomes the effect of noise and PVA. The algorithm has been rigorously tested on a variety of synthetic data sets. The quantitative comparison of the results to standard algorithms motivated the authors to implement it for in vivo DTI data analysis. The algorithm has been used to delineate fibers in two major language pathways (Broca's to SMA and Broca's to Wernicke's) across 12 healthy subjects. Though the mean standard deviation was marginally larger than with the conventional (Euler) approach [P. J. Basser et al., "In vivo fiber tractography using DT-MRI data," Magn. Reson. Med. 44(4), 625-632 (2000)], the number of extracted fibers with this approach was significantly higher. The authors also compared the performance of the proposed method to Lu's method [Y. Lu et al., "Improved fiber tractography with Bayesian
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm to a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
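Since the proposal partitions one large EM into smaller self-contained EMs, it may help to recall what a single, standard EM iteration looks like; the sketch below fits a two-component Gaussian mixture with unit variances, a textbook case unrelated to the authors' missing-data and measurement-error setting.

```python
import math, random

def em_two_gaussians(data, iters=100):
    """EM for a two-component Gaussian mixture with unit variances."""
    mu = [min(data), max(data)]                    # crude initialization
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r = []
        for x in data:
            p0 = pi * math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = (1.0 - pi) * math.exp(-0.5 * (x - mu[1]) ** 2)
            r.append(p0 / (p0 + p1))
        # M-step: re-estimate mixing weight and means
        s = sum(r)
        mu[0] = sum(ri * x for ri, x in zip(r, data)) / s
        mu[1] = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
        pi = s / len(data)
    return pi, mu

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(4, 1) for _ in range(200)])
print(em_two_gaussians(data))
```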
Directory of Open Access Journals (Sweden)
W. L. Silva
2008-09-01
Full Text Available The reduction efficiency is an important variable in the black liquor burning process in the Kraft recovery boiler. The value of this variable is obtained by slow experimental routines, and the delay of this measurement disturbs customary control in the pulp and paper industry. This paper describes an optimization approach for determining the reduction efficiency in the furnace bottom of the recovery boiler based on the minimization of the Gibbs free energy. The industrial data used in this study were obtained directly from CENIBRA's data acquisition system. The resulting approach is able to predict the steady-state behavior of the chemical composition in the furnace of the recovery boiler, especially the reduction efficiency, under different operational conditions. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.
Comment on "Inference with minimal Gibbs free energy in information field theory".
Iatsenko, D; Stefanovska, A; McClintock, P V E
2012-03-01
Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.
International Nuclear Information System (INIS)
Tang, Ying; Du, Yong; Zhang, Lijun; Yuan, Xiaoming; Kaptay, George
2012-01-01
Highlights: ► An exponential formulation to describe ternary excess Gibbs energy is proposed. ► Theoretical analysis is performed to verify the stability of phases using the new formulation. ► The Al–Mg–Si system and its boundary binaries have been assessed with the new formulation. ► Present calculations for the Al–Mg–Si system are more reasonable than previous ones. - Abstract: An exponential formulation was proposed to replace the linear interaction parameter in the Redlich–Kister (R–K) polynomial for the excess Gibbs energy of a ternary solution phase. The theoretical analysis indicates that the proposed exponential formulation can not only avoid the artificial miscibility gap at high temperatures but also describe the ternary system well. A thermodynamic description of the Al–Mg–Si system and its boundary binaries was then performed using both the R–K linear and the exponential formulations. The inverted miscibility gaps occurring in the Mg–Si and Al–Mg–Si systems at high temperatures due to the use of R–K linear polynomials are avoided by using the new formulation. Besides, the thermodynamic properties predicted with the new formulation confirm the general thermodynamic belief that a solution phase approaches the ideal solution at infinite temperature, which cannot be described with the traditional R–K linear polynomials.
Artrith, Nongnuch; Urban, Alexander; Ceder, Gerbrand
2018-06-01
The atomistic modeling of amorphous materials requires structure sizes and sampling statistics that are challenging to achieve with first-principles methods. Here, we propose a methodology to speed up the sampling of amorphous and disordered materials using a combination of a genetic algorithm and a specialized machine-learning potential based on artificial neural networks (ANNs). We show for the example of the amorphous LiSi alloy that around 1000 first-principles calculations are sufficient for the ANN-potential assisted sampling of low-energy atomic configurations in the entire amorphous LixSi phase space. The obtained phase diagram is validated by comparison with the results from an extensive sampling of LixSi configurations using molecular dynamics simulations and a general ANN potential trained on ~45,000 first-principles calculations. This demonstrates the utility of the approach for the first-principles modeling of amorphous materials.
International Nuclear Information System (INIS)
Sitko, Rafal
2008-01-01
Knowledge of the X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of the X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, the Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. the Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981), and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).
Williams, Christopher J.; Moffitt, Christine M.
2003-03-01
An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
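A minimal sketch of such a Gibbs sampler, assuming equal pool sizes k, a Beta(1,1) prior on the probability theta that a pool truly contains an infected fish, and known assay sensitivity/specificity; all names and numbers below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n pools of size k, observed pool results t[i] in {0,1},
# known test sensitivity (se) and specificity (sp).
n, k, se, sp = 200, 10, 0.95, 0.98
t = rng.binomial(1, 0.35, size=n)          # stand-in observed pool results

theta, draws = 0.5, []
for it in range(5000):
    # 1) Sample latent true pool status z[i] given observed t[i] and theta.
    p1 = np.where(t == 1, se * theta, (1 - se) * theta)
    p0 = np.where(t == 1, (1 - sp) * (1 - theta), sp * (1 - theta))
    z = rng.random(n) < p1 / (p1 + p0)
    # 2) Sample theta from its Beta full conditional (Beta(1,1) prior).
    theta = rng.beta(1 + z.sum(), 1 + n - z.sum())
    if it >= 1000:                         # discard burn-in
        # Individual-fish prevalence implied by the pool-level theta.
        draws.append(1 - (1 - theta) ** (1 / k))

print("posterior mean prevalence:", np.mean(draws))
```

Trace plots of the successive draws, in the spirit of the diagnostic plots the authors emphasize, are a simple way to monitor convergence of such a chain.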
International Nuclear Information System (INIS)
Xu, H.
1999-01-01
In this letter, the Sverjensky-Molling equation derived from a linear free energy relationship is used to calculate the Gibbs free energies of formation of zirconolite crystalline phases (MZrTi2O7 and MHfTi2O7) from the known thermodynamic properties of the corresponding aqueous divalent cations (M2+). The Sverjensky-Molling equation is expressed as $\Delta G^{0}_{f,\mathrm{M}_{v}\mathrm{X}} = a_{\mathrm{M}_{v}\mathrm{X}}\,\Delta G^{0}_{n,\mathrm{M}^{2+}} + b_{\mathrm{M}_{v}\mathrm{X}} + \beta_{\mathrm{M}_{v}\mathrm{X}}\,r_{\mathrm{M}^{2+}}$, where the coefficients $a_{\mathrm{M}_{v}\mathrm{X}}$, $b_{\mathrm{M}_{v}\mathrm{X}}$, and $\beta_{\mathrm{M}_{v}\mathrm{X}}$ characterize a particular structural family of MvX, $r_{\mathrm{M}^{2+}}$ is the ionic radius of the M2+ cation, $\Delta G^{0}_{f,\mathrm{M}_{v}\mathrm{X}}$ is the standard Gibbs free energy of formation of MvX, and $\Delta G^{0}_{n,\mathrm{M}^{2+}}$ is the standard non-solvation energy of the cation M2+. This relationship can be used to predict the Gibbs free energies of formation of various fictive phases (such as BaZrTi2O7, SrZrTi2O7, PbZrTi2O7, etc.) that may form solid solutions with CaZrTi2O7 in actual Synroc-based nuclear waste forms. Based on the obtained linear free energy relationships, it is predicted that large cations (e.g., Ba and Ra) prefer the perovskite structure, and small cations (e.g., Ca, Zn, and Cd) prefer the zirconolite structure. (orig.)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
Super resolution reconstruction of μ-CT image of rock sample using neighbour embedding algorithm
Wang, Yuzhu; Rahman, Sheik S.; Arns, Christoph H.
2018-03-01
X-ray computed tomography (μ-CT) is considered to be the most effective way to obtain the inner structure of a rock sample without destruction. However, its limited resolution hampers its ability to probe sub-micron structures, which are critical for flow transport in rock samples. In this study, we propose an innovative methodology to improve the resolution of the μ-CT image using a neighbour embedding algorithm, where low frequency information is provided by the μ-CT image itself while high frequency information is supplemented by a high resolution scanning electron microscopy (SEM) image. In order to obtain priors for reconstruction, a large number of image patch pairs, each containing a high- and a low-resolution patch, are extracted from the Gaussian image pyramid generated from the SEM image. These image patch pairs contain abundant information about the tomographic evolution of local porous structures across resolution spaces. Relying on the assumption of self-similarity of the porous structure, this prior information can be used to supervise the reconstruction of the high resolution μ-CT image effectively. The experimental results show that the proposed method is able to achieve state-of-the-art performance.
General Algorithm (High level)
Indian Academy of Sciences (India)
General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...
Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS
Directory of Open Access Journals (Sweden)
Stephan P. Lovstedt
2008-01-01
Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from the finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model for both the previous implementation of EE-FXLMS and the genetic algorithm implementation are compared.
Energy Technology Data Exchange (ETDEWEB)
Kurtz, S.E.; Fields, D.E.
1983-10-01
The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
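The same test is available in standard libraries today; a quick sketch of a one-sample Kolmogorov-Smirnov test on a small sample (the data are illustrative, not the KSTEST code itself). Modern implementations likewise switch between exact small-sample distributions and asymptotic series, much as KSTEST does:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.2, scale=1.0, size=40)   # small sample, n = 40

# One-sample KS goodness-of-fit test of the null that the data are N(0, 1).
result = stats.kstest(sample, "norm", args=(0.0, 1.0))
print(f"D = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```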
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
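A minimal sketch of the key step, computing the power spectrum of an irregularly sampled series with the Lomb-Scargle periodogram (the chaos-detection statistics built on top of the spectrum are not reproduced here; signal and sampling are toy stand-ins):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, size=500))      # irregular sample times
x = np.sin(2 * np.pi * 0.23 * t) + 0.3 * rng.standard_normal(t.size)

# lombscargle takes angular frequencies; precenter by subtracting the mean.
freqs = np.linspace(0.01, 2 * np.pi, 2000)
power = lombscargle(t, x - x.mean(), freqs, normalize=True)
print("peak at f =", freqs[power.argmax()] / (2 * np.pi), "Hz")
```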
Adaptive Metropolis Sampling with Product Distributions
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x, x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
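A sketch of the idea for a two-dimensional target: an independence MH sampler whose Gaussian product proposal is periodically refit to the history of the walk. The diagonal-covariance Gaussian is a stand-in for the paper's mean-field estimate, and details such as the update schedule are invented for illustration (strictly, ongoing adaptation should be diminishing to preserve ergodicity; that refinement is omitted here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Toy target: independent Gaussians, mean 2, variances (1, 4).
log_pi = lambda x: -0.5 * np.sum((x - 2.0) ** 2 / np.array([1.0, 4.0]))

x = np.zeros(2)
mu, sig = np.zeros(2), np.ones(2)          # product (mean-field) proposal
chain = []
for t in range(20000):
    prop = stats.norm(mu, sig)
    y = prop.rvs(random_state=rng)
    # Independence MH acceptance: the proposal density enters the ratio.
    log_a = (log_pi(y) - log_pi(x)) + (prop.logpdf(x).sum() - prop.logpdf(y).sum())
    if np.log(rng.random()) < log_a:
        x = y
    chain.append(x)
    if t % 500 == 499:                     # periodically refit the proposal
        h = np.asarray(chain[-500:])
        mu, sig = h.mean(axis=0), h.std(axis=0) + 0.05

print("sample mean:", np.asarray(chain[5000:]).mean(axis=0))
```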
Gibbs free energy difference between the undercooled liquid and the beta phase of a Ti-Cr alloy
Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.
1992-01-01
The heat of fusion and the specific heats of the solid and liquid have been experimentally determined for a Ti60Cr40 alloy. The data are used to evaluate the Gibbs free energy difference, ΔG, between the liquid and the beta phase as a function of temperature to verify a reported spontaneous vitrification (SV) of the beta phase in Ti-Cr alloys. The results show that SV of an undistorted beta phase in the Ti60Cr40 alloy at 873 K is not feasible because ΔG is positive at that temperature. However, ΔG may become negative with additional excess free energy added to the beta phase in the form of defects.
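The measured quantities enter ΔG through the standard relation below (a sketch of the usual textbook form, not a quotation from the paper), with ΔS_f = ΔH_f/T_m and ΔC_p = C_p^liquid − C_p^solid:

```latex
\Delta G(T) = \Delta H_f - T\,\Delta S_f
  + \int_{T_m}^{T} \Delta C_p(T')\,\mathrm{d}T'
  - T \int_{T_m}^{T} \frac{\Delta C_p(T')}{T'}\,\mathrm{d}T'
```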
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
On Invertible Sampling and Adaptive Security
DEFF Research Database (Denmark)
Ishai, Yuval; Kumarasubramanian, Abishek; Orlandi, Claudio
2011-01-01
functionalities was left open. We provide the first convincing evidence that the answer to this question is negative, namely that some (randomized) functionalities cannot be realized with adaptive security. We obtain this result by studying the following related invertible sampling problem: given an efficient...... sampling algorithm A, obtain another sampling algorithm B such that the output of B is computationally indistinguishable from the output of A, but B can be efficiently inverted (even if A cannot). This invertible sampling problem is independently motivated by other cryptographic applications. We show......, under strong but well studied assumptions, that there exist efficient sampling algorithms A for which invertible sampling as above is impossible. At the same time, we show that a general feasibility result for adaptively secure MPC implies that invertible sampling is possible for every A, thereby...
A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)
2013-01-22
However, updating u^(k+1) via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration, which ... may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU time consumed. The efficiency of the component-wise Gauss-Seidel ... C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012).
Partial multicanonical algorithm for molecular dynamics and Monte Carlo simulations.
Okumura, Hisashi
2008-09-28
A partial multicanonical algorithm is proposed for molecular dynamics and Monte Carlo simulations. The partial multicanonical simulation samples a wide range of a part of the potential-energy terms, which is necessary to sample the conformational space widely, whereas a wide range of the total potential energy is sampled in the multicanonical algorithm. Thus, in the partial multicanonical simulation one can concentrate the effort of determining the weight factor on the important energy terms only. The partial multicanonical, multicanonical, and canonical molecular dynamics algorithms were applied to an alanine dipeptide in explicit water solvent. The canonical simulation sampled the states P(II), C(5), alpha(R), and alpha(P). The multicanonical simulation covered the alpha(L) state as well as these states. The partial multicanonical simulation also sampled the C(7)(ax) state in addition to the states that were sampled by the multicanonical simulation. Furthermore, in the partial multicanonical simulation the backbone dihedral angles phi and psi rotated more frequently than in the multicanonical and canonical simulations. These results mean that the partial multicanonical algorithm has a higher sampling efficiency than the multicanonical and canonical algorithms.
A Survey of Blue-Noise Sampling and Its Applications
Yan, Dongming; Guo, Jian-Wei; Wang, Bin; Zhang, Xiao-Peng; Wonka, Peter
2015-01-01
In this paper, we survey recent approaches to blue-noise sampling and discuss their beneficial applications. We discuss the sampling algorithms that use points as sampling primitives and classify the sampling algorithms based on various aspects, e.g., the sampling domain and the type of algorithm. We demonstrate several well-known applications that can be improved by recent blue-noise sampling techniques, as well as some new applications such as dynamic sampling and blue-noise remeshing.
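The simplest generator in the surveyed blue-noise family is dart throwing for Poisson-disk sampling; a naive O(n²) sketch is given below (production algorithms such as Bridson's use spatial grids to accept candidates in roughly constant time):

```python
import numpy as np

def dart_throwing(r: float, n_tries: int = 20000, seed: int = 4) -> np.ndarray:
    """Naive Poisson-disk sampling in the unit square: accept a random
    candidate only if it keeps distance >= r to all accepted points."""
    rng = np.random.default_rng(seed)
    points: list[np.ndarray] = []
    for _ in range(n_tries):
        c = rng.random(2)
        if all(np.linalg.norm(c - p) >= r for p in points):
            points.append(c)
    return np.array(points)

samples = dart_throwing(r=0.05)
print(len(samples), "points with blue-noise spacing")
```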
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
Lu, Xiuyuan; Van Roy, Benjamin
2017-01-01
Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applications ...
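For the simplest special case, a Bernoulli bandit with conjugate Beta posteriors, exact Thompson sampling is a few lines; ensemble sampling would replace the exact posterior draw below with a uniform draw from a bootstrap-style ensemble of models. Arm probabilities and horizons are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
true_rates = np.array([0.3, 0.5, 0.6])       # unknown to the agent
alpha = np.ones(3); beta = np.ones(3)        # Beta(1,1) prior per arm

rewards = 0
for t in range(10000):
    theta = rng.beta(alpha, beta)            # one posterior sample per arm
    a = int(theta.argmax())                  # act greedily w.r.t. the sample
    r = rng.random() < true_rates[a]
    alpha[a] += r; beta[a] += 1 - r          # conjugate posterior update
    rewards += r

print("average reward:", rewards / 10000)
```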
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity and the superior efficiency of this algorithm over the naive algorithm.
Energy Technology Data Exchange (ETDEWEB)
Prasad, T.E. Vittal [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India); Venkanna, N. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Kumar, Y. Naveen [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Ashok, K. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Sirisha, N.M. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Prasad, D.H.L. [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India)]. E-mail: dasika@iict.res.in
2007-07-15
Bubble point temperatures at 95.23 kPa, over the entire composition range, are measured for the binary mixtures formed by p-cresol with 1,2-dichloroethane, 1,1,2,2-tetrachloroethane, trichloroethylene, tetrachloroethylene, and o-, m-, and p-xylenes, making use of a Swietoslawski-type ebulliometer. Liquid phase mole fraction (x1) versus bubble point temperature (T) measurements are found to be well represented by the Wilson model. The optimum Wilson parameters are used to calculate the vapor phase composition, activity coefficients, and excess Gibbs free energy. The results are discussed.
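For reference, the Wilson model used for the fit expresses the excess Gibbs energy of an N-component liquid in the standard textbook form below, with Λ_ij the binary interaction parameters (this is the general form, not the paper's fitted values):

```latex
\frac{G^{E}}{RT} = -\sum_{i=1}^{N} x_i \ln\!\Big(\sum_{j=1}^{N} x_j \Lambda_{ij}\Big),
\qquad
\ln\gamma_i = 1 - \ln\!\Big(\sum_{j} x_j \Lambda_{ij}\Big)
  - \sum_{k} \frac{x_k \Lambda_{ki}}{\sum_{j} x_j \Lambda_{kj}}
```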
Phase relations and gibbs energies in the system Mn-Rh-O
Jacob, K. T.; Sriram, M. V.
1994-07-01
Phase relations in the system Mn-Rh-O are established at 1273 K by equilibrating different compositions either in evacuated quartz ampules or in pure oxygen at a pressure of 1.01 × 10^5 Pa. The quenched samples are examined by optical microscopy, X-ray diffraction, and energy-dispersive X-ray analysis (EDAX). The alloys and intermetallics in the binary Mn-Rh system are found to be in equilibrium with MnO. There is only one ternary compound, MnRh2O4, with normal spinel structure in the system. The compound Mn3O4 has a tetragonal structure at 1273 K. A solid solution is formed between MnRh2O4 and Mn3O4. The solid solution has the cubic structure over a large range of composition and coexists with metallic rhodium. The partial pressure of oxygen corresponding to this two-phase equilibrium is measured as a function of the composition of the spinel solid solution and temperature. A new solid-state cell, with three separate electrode compartments, is designed to measure accurately the chemical potential of oxygen in the two-phase mixture, Rh + Mn(3-2x)Rh(2x)O4, which has 1 degree of freedom at constant temperature. From the electromotive force (emf), thermodynamic mixing properties of the Mn3O4-MnRh2O4 solid solution and the Gibbs energy of formation of MnRh2O4 are deduced. The activities exhibit negative deviations from Raoult's law for most of the composition range, except near Mn3O4, where a two-phase region exists. In the cubic phase, the entropy of mixing of the Rh3+ and Mn3+ ions on the octahedral site of the spinel is ideal, and the enthalpy of mixing is positive and symmetric with respect to composition. For the formation of the spinel (sp) from the component oxides with rock salt (rs) and orthorhombic (orth) structures according to the reaction MnO (rs) + Rh2O3 (orth) → MnRh2O4 (sp), ΔG° = -49,680 + 1.56T (±500) J mol^-1. The oxygen potentials corresponding to the MnO + Mn3O4 and Rh + Rh2O3 equilibria are also obtained from potentiometric measurements on galvanic cells.
Hypothesis testing in genetic linkage analysis via Gibbs sampling
African Journals Online (AJOL)
2010-12-06
Dec 6, 2010 ... The existing theory assumes asymptotic normality for score statistics, which is violated on the boundary ... A Monte Carlo approach is proposed to overcome this problem ... probability, that is, the probability that an individual with ...
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.
An efficient quantum algorithm for spectral estimation
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
Training nuclei detection algorithms with simple annotations
Directory of Open Access Journals (Sweden)
Henning Kost
2017-01-01
Full Text Available Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.
Top Tagging by Deep Learning Algorithm
Akil, Ali
2015-01-01
In this report I show the application of a deep learning algorithm to a Monte Carlo simulation sample, test its performance in tagging hadronic decays of boosted top quarks, and compare the results with those of some other algorithms.
Directory of Open Access Journals (Sweden)
Angela Biaggio
2005-01-01
Full Text Available Thirty female and 30 male university students each from Joao Pessoa and Porto Alegre were compared to a comparable Norwegian sample of 60 female and 60 male students. Except for a suggestion of differences in women's cultural orientation, comparisons on Gibbs' test of justice morality, the ECI test for the ethic of care, Bem's sex role inventory, and Triandis' test for cultural orientations showed that all differences were between the Norwegian sample and the Brazilian samples as a unit. Brazilians showed a differentiation of sex roles that was not shown by Norwegians, and higher scores on the collectivism cultural orientation. Norwegians showed higher scores on the ECI, which might be because of a culture bias in the test. No difference was found for the individualism cultural orientation or on Gibbs' test. Men scored higher on the total individualism measure, and women on vertical collectivism. JP women scored as more hedonistic and individualistic than the PA women, who scored as more traditional than the JP women.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
Nested Sampling with Constrained Hamiltonian Monte Carlo
Betancourt, M. J.
2010-01-01
Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
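A bare-bones nested sampling loop is sketched below, with a crude random-walk replacement step standing in for the constrained Hamiltonian moves the paper advocates; the toy likelihood, prior box, and all settings are illustrative, and the final live-point contribution to the evidence is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(6)
log_like = lambda x: -0.5 * np.sum(x ** 2)       # toy 2D log-likelihood
n_live, n_iter = 100, 2000

live = rng.uniform(-5, 5, size=(n_live, 2))      # uniform prior box
live_ll = np.array([log_like(p) for p in live])
log_z = -np.inf
log_w0 = np.log(1 - np.exp(-1 / n_live))         # first shell weight

for i in range(n_iter):
    worst = live_ll.argmin()                     # lowest-likelihood point
    # Shell weight shrinks geometrically: log w_i = log w0 - i/n_live.
    log_z = np.logaddexp(log_z, log_w0 - i / n_live + live_ll[worst])
    # Replace the worst point by one sampled under the hard constraint
    # L > L_worst, via a random walk started from a surviving live point.
    x = live[rng.integers(n_live)].copy()
    for _ in range(50):
        y = x + 0.5 * rng.standard_normal(2)
        if np.all(np.abs(y) < 5) and log_like(y) > live_ll[worst]:
            x = y
    live[worst], live_ll[worst] = x, log_like(x)

print("log evidence ~", log_z)
```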
Directory of Open Access Journals (Sweden)
Brian Godsey
Full Text Available MicroRNAs (miRs) are known to play an important role in mRNA regulation, often by binding to complementary sequences in "target" mRNAs. Recently, several methods have been developed by which existing sequence-based target predictions can be combined with miR and mRNA expression data to infer true miR-mRNA targeting relationships. It has been shown that the combination of these two approaches gives more reliable results than either by itself. While a few such algorithms give excellent results, none fully addresses expression data sets with a natural ordering of the samples. If the samples in an experiment can be ordered or partially ordered by their expected similarity to one another, such as for time-series or studies of development processes, stages, or types (e.g., cell type, disease, growth, aging), there are unique opportunities to infer miR-mRNA interactions that may be specific to the underlying processes, and existing methods do not exploit this. We propose an algorithm which specifically addresses [partially] ordered expression data and takes advantage of sample similarities based on the ordering structure. This is done within a Bayesian framework which specifies posterior distributions, and therefore statistical significance, for each model parameter and latent variable. We apply our model to a previously published expression data set of paired miR and mRNA arrays in five partially ordered conditions, with biological replicates, related to multiple myeloma, and we show how considering potential orderings can improve the inference of miR-mRNA interactions, as measured by existing knowledge about the involved transcripts.
Iterative importance sampling algorithms for parameter estimation
Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.
2016-01-01
In parameter estimation problems, one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near-perfect scaling with the number of cores on high-performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
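A sketch of such an iterative scheme for a scalar parameter: draw from a Gaussian proposal, weight by posterior over proposal, then refit the proposal to the weighted samples and repeat. The model, data, and all settings are toy stand-ins, not the paper's method in detail:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(1.5, 0.5, size=30)             # toy observations

def log_post(theta):
    """Unnormalized log posterior: N(0, 2^2) prior times N(theta, 0.5^2) likelihood."""
    lp = stats.norm(0, 2).logpdf(theta)
    return lp + stats.norm(theta, 0.5).logpdf(data[:, None]).sum(axis=0)

mu, sig = 0.0, 2.0                               # initial proposal
for it in range(5):                              # iterative refinement
    theta = rng.normal(mu, sig, size=5000)       # independent draws
    logw = log_post(theta) - stats.norm(mu, sig).logpdf(theta)
    w = np.exp(logw - logw.max()); w /= w.sum()  # self-normalized weights
    mu = np.sum(w * theta)                       # refit proposal moments
    sig = np.sqrt(np.sum(w * (theta - mu) ** 2)) + 1e-3

print("posterior mean ~", mu, "posterior sd ~", sig)
```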
Directory of Open Access Journals (Sweden)
René Roland Colditz
2015-07-01
Full Text Available Land cover mapping for large regions often employs satellite images of medium to coarse spatial resolution, which complicates mapping of discrete classes. Class memberships, which estimate the proportion of each class for every pixel, have been suggested as an alternative. This paper compares different strategies of training data allocation for discrete and continuous land cover mapping using classification and regression tree algorithms. In addition to measures of discrete and continuous map accuracy, the correct estimation of the area is another important criterion. A subset of the 30 m national land cover dataset of 2006 (NLCD2006) of the United States was used as the reference set to classify NADIR BRDF-adjusted surface reflectance time series of MODIS at 900 m spatial resolution. Results show that sampling of heterogeneous pixels and sample allocation according to the expected area of each class is best for classification trees. Regression trees for continuous land cover mapping should be trained with random allocation, and predictions should be normalized with a linear scaling function to correctly estimate the total area. Of the tested algorithms, random forest classification yields lower errors than boosted trees of C5.0, and Cubist shows higher accuracies than random forest regression.
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction methods, and design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.
BayesLCA: An R Package for Bayesian Latent Class Analysis
Directory of Open Access Journals (Sweden)
Arthur White
2014-11-01
Full Text Available The BayesLCA package for R provides tools for performing latent class analysis within a Bayesian setting. Three methods for fitting the model are provided, incorporating an expectation-maximization algorithm, Gibbs sampling and a variational Bayes approximation. The article briefly outlines the methodology behind each of these techniques and discusses some of the technical difficulties associated with them. Methods to remedy these problems are also described. Visualization methods for each of these techniques are included, as well as criteria to aid model selection.
Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis.
Isupova, Olga; Kuzin, Danil; Mihaylova, Lyudmila
2017-09-27
Semisupervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. In the light of the necessity to process large volumes of video data and provide autonomous decisions, this paper proposes new learning algorithms for activity analysis in video. The activities and behaviors are described by a dynamic topic model. Two novel learning algorithms based on the expectation maximization approach and variational Bayes inference are proposed. Theoretical derivations of the posterior estimates of model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localization procedure, elegantly embedded in the topic modeling framework. It is shown that the developed learning algorithms can achieve 95% success rate. The proposed framework can be applied to a number of areas, including transportation systems, security, and surveillance.
Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme
2013-07-01
The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e., a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to control explicitly the nucleotide distribution, such as the GC-content, of the sequences. However, the latter is an important criterion for large-scale applications as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e., running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics), and successfully generates high-quality sequences (i.e., thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
Superposition Enhanced Nested Sampling
Directory of Open Access Journals (Sweden)
Stefano Martiniani
2014-08-01
Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant-based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification, as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose-based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performance. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify ...
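The voting idea can be sketched in a few lines: run several standard similarity metrics, take each metric's top matches, and let the category that appears most often win. The spectra below are synthetic, and the metric set is abbreviated relative to the paper's six:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(8)
library = rng.random((50, 400))                  # 50 reference spectra
categories = rng.choice(["leaf", "stem", "seed coat", "hull"], 50)
query = library[7] + 0.05 * rng.standard_normal(400)   # noisy "unknown"

def top10(scores, largest=True):
    """Indices of the ten best library matches under one metric."""
    order = np.argsort(scores)
    return order[-10:] if largest else order[:10]

votes = Counter()
votes.update(categories[top10(library @ query)])                  # dot product
votes.update(categories[top10([np.corrcoef(s, query)[0, 1] for s in library])])
votes.update(categories[top10(np.abs(library - query).sum(axis=1), largest=False)])
print("identified category:", votes.most_common(1)[0][0])
```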
Electron dose map inversion based on several algorithms
International Nuclear Information System (INIS)
Li Gui; Zheng Huaqing; Wu Yican; Fds Team
2010-01-01
The reconstruction of the electron dose map in radiation therapy was investigated by constructing inversion models of the electron dose map with different algorithms. An inversion model of the electron dose map based on nonlinear programming was used, in which the penetration dose map is used to invert the total-space dose map. This inversion model was realized with several inversion algorithms. The test results with seven samples show that, except for the NMinimize algorithm, which worked for just one sample and with great error, all the inversion algorithms solved our inversion model rapidly and accurately. The Levenberg-Marquardt algorithm, having the greatest accuracy and speed, can be considered the first choice for electron dose map inversion. Further tests show that more error is created when data close to the electron range are used (tail error). The tail error might be caused by the approximation of the mean energy spectra, and this should be considered to improve the method. The time-saving and accurate algorithms can be used to achieve real-time dose map inversion. By selecting the best inversion algorithm, the clinical need for real-time dose verification can be satisfied. (authors)
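The Levenberg-Marquardt choice maps directly onto standard least-squares tooling; a toy sketch of fitting a depth-response model to a noisy "penetration dose" profile follows (the Gaussian kernel and all parameters here are invented for illustration, not the paper's physics model):

```python
import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(0, 4, 40)                    # toy depth grid (cm)
kernel = lambda p: p[0] * np.exp(-((depth - p[1]) / p[2]) ** 2)

rng = np.random.default_rng(9)
true_p = np.array([1.0, 1.5, 0.8])
measured = kernel(true_p) + 0.01 * rng.standard_normal(depth.size)

# method="lm" selects Levenberg-Marquardt (unconstrained problems,
# number of residuals >= number of parameters).
fit = least_squares(lambda p: kernel(p) - measured,
                    x0=[0.5, 1.0, 1.0], method="lm")
print("recovered parameters:", fit.x)
```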
Directory of Open Access Journals (Sweden)
Liu Yang
2017-01-01
Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value obtained from solving the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
Unsupervised classification of multivariate geostatistical data: Two algorithms
Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques
2015-12-01
With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, include a growing number of variables, and cover ever wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables in hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on, e.g., Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
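The graph-based proximity condition resembles what scikit-learn exposes as a connectivity constraint; below is a sketch of spatially coherent agglomerative clustering on irregularly spaced data (the library calls are real, but the data are synthetic and the paper's geostatistical refinements are not reproduced):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(10)
coords = rng.uniform(0, 100, size=(1000, 2))     # irregular sample locations
values = np.sin(coords[:, 0] / 15) + 0.1 * rng.standard_normal(1000)

# Neighborhood graph in coordinate space: clusters may only merge
# across samples that are neighbors, enforcing spatial coherence.
connectivity = kneighbors_graph(coords, n_neighbors=8, include_self=False)
labels = AgglomerativeClustering(
    n_clusters=4, connectivity=connectivity, linkage="ward"
).fit_predict(values.reshape(-1, 1))
print(np.bincount(labels))
```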
Canonical sampling of a lattice gas
International Nuclear Information System (INIS)
Mueller, W.F.
1997-01-01
It is shown that a sampling algorithm, recently proposed in conjunction with a lattice-gas model of nuclear fragmentation, samples the canonical ensemble only in an approximate fashion. A residual weight factor has to be taken into account to calculate correct thermodynamic averages. Then, however, the algorithm is numerically inefficient.
Naumov, Sergej; von Sonntag, Clemens
2011-11-01
Free radicals are common intermediates in the chemistry of ozone in aqueous solution. Their reactions with ozone have been probed by calculating the standard Gibbs free energies of such reactions using density functional theory (Jaguar 7.6 program). O(2) reacts fast and irreversibly only with simple carbon-centered radicals. In contrast, ozone also reacts irreversibly with conjugated carbon-centered radicals such as bisallylic (hydroxycyclohexadienyl) radicals, with conjugated carbon/oxygen-centered radicals such as phenoxyl radicals, and even with nitrogen-, oxygen-, sulfur-, and halogen-centered radicals. In these reactions, further ozone-reactive radicals are generated. Chain reactions may destroy ozone without giving rise to products other than O(2). This may be of importance when ozonation is used in pollution control, and reactions of free radicals with ozone have to be taken into account in modeling such processes.
International Nuclear Information System (INIS)
Okano, Yasushi
1999-08-01
In order to analyze the reaction heat and compounds due to sodium combustion, a multiphase chemical equilibrium calculation program for chemical reactions among sodium, oxygen and hydrogen was developed in this study. The program is named BISHOP, which denotes 'Bi-phase Sodium-Oxygen-Hydrogen Chemical Equilibrium Calculation Program'. The Gibbs free energy minimization method is used because of its special merits: chemical species can easily be added or changed, and many thermochemical reaction systems can be treated in general, in addition to constant-temperature, constant-pressure ones. Three new methods are developed for solving the multiphase sodium reaction system in this study: the first is to construct the equation system by simplifying phases, the second is to extend the Gibbs free energy minimization method to multiphase systems, and the last is to establish an effective search method for the minimum value. Chemical compounds formed by the combustion of sodium in air are calculated using BISHOP. The calculated temperature and moisture conditions where sodium oxide and hydroxide are formed qualitatively agree with experiments. Decomposition of sodium hydride is calculated by the program. The estimated relationship between the decomposition temperature and pressure agrees closely with the well-known experimental equation of Roy and Rodgers. It is concluded that BISHOP can be used to evaluate the combustion and decomposition behaviors of sodium and its compounds. The hydrogen formation condition of the dump-tank room in a sodium leak event of an FBR is quantitatively evaluated by BISHOP. It can be concluded that keeping the temperature of the dump-tank room low is an effective way to suppress the formation of hydrogen. In the case of choosing the lower flammability limit of 4.1 mol% as the hydrogen concentration criterion, the formation reaction of sodium hydride from sodium and hydrogen is facilitated below a room temperature of 800 K, and the concentration of hydrogen ...
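The core of such a program is constrained minimization of the total Gibbs energy subject to elemental balances; a toy sketch for a single ideal-gas phase follows (species set and standard chemical potentials are invented numbers, and BISHOP's multiphase treatment and search method are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

# Toy system: H2, O2, H2O gas at fixed T, P. mu0 holds standard chemical
# potentials divided by RT (illustrative values, not real thermochemical data).
mu0 = np.array([0.0, 0.0, -40.0])
A = np.array([[2, 0, 2],     # H balance: 2*nH2 + 2*nH2O
              [0, 2, 1]])    # O balance: 2*nO2 + 1*nH2O
b = np.array([2.0, 1.0])     # total elemental abundances of H and O

def gibbs(n):
    """Total Gibbs energy / RT of an ideal-gas mixture with mole numbers n."""
    n = np.maximum(n, 1e-12)                 # guard the logarithm at n -> 0
    return np.sum(n * (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([1.0, 0.5, 0.1]),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(0, None)] * 3, method="SLSQP")
print("equilibrium moles (H2, O2, H2O):", res.x)
```

With the strongly negative mu0 for H2O, the minimizer drives the system toward complete combustion, as the element balances require.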
Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection
Directory of Open Access Journals (Sweden)
A.M.E.C. Barreto
2008-02-01
Full Text Available We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using s/co ratios that show ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble
Bicci, Alberto
2016-12-01
In the domain of so-called Econophysics, attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and price formation process of a given stock by the Grand-Canonical Gibbs Ensemble for the bid and ask orders. Applying Bose-Einstein statistics to this ensemble then allows derivation of the distribution of the sell and buy orders as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are a function of the minimum bid, maximum ask and closure prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in Technical Analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well-defined validations against real data, giving the model a falsifiable character.
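As a schematic of the Bose-Einstein step, the snippet below evaluates the mean occupancy 1/(exp((p - mu)/T) - 1) as a function of price for the ask side of the book; the mapping of price levels to "energies" and all numbers here are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def be_occupancy(price, mu, temperature):
    """Bose-Einstein mean occupancy 1/(exp((p - mu)/T) - 1)."""
    return 1.0 / np.expm1((price - mu) / temperature)

# Illustrative ask-side book: reference (closure) price 100, 'temperature' 2.
prices = np.linspace(100.5, 110, 20)   # ask levels above the reference price
volumes = be_occupancy(prices, mu=100.0, temperature=2.0)
# volumes decays with distance from mu; the bid side is treated symmetrically.
```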
Grasp Algorithms For Optotactile Robotic Sample Acquisition, Phase I
National Aeronautics and Space Administration — Robotic sample acquisition is basically grasping. Multi-finger robot sample grasping devices are controlled to securely pick up samples. While optimal grasps for...
A Gibbs potential expansion with a quantic system made up of a large number of particles
International Nuclear Information System (INIS)
Bloch, Claude; Dominicis, Cyrano de
1959-01-01
Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low density systems at low temperature. In the zero density limit, it reduces to the Beth-Uhlenbeck expression of the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to an arbitrary temperature. Reprint of a paper published in Nuclear Physics 10 (1959) 181-196.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of its learning ability on benchmark repository datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training set grows.
Gibbs-Thomson Law for Singular Step Segments: Thermodynamics Versus Kinetics
Chernov, A. A.
2003-01-01
Classical Burton-Cabrera-Frank theory presumes that thermal fluctuations are so fast that at any time the density of kinks on a step is comparable with the reciprocal intermolecular distance, so that the step rate is about isotropic within the crystal plane. Such azimuthal isotropy is, however, often not the case: kink density may be much lower. In particular, it was recently found on the (010) face of orthorhombic lysozyme that the interkink distance may exceed 500-600 intermolecular distances. Under such conditions, the Gibbs-Thomson law (GTL) may not be applicable: on a straight step segment between two corners, communication between the corners occurs exclusively by kink exchange. Annihilation between kinks of opposite sign generated at the corners gives rise to the step energy term entering the GTL. If the step segment length l ≫ D/v, where D and v are the kink diffusivity and propagation rate, respectively, the opposite kinks have practically no chance to annihilate and the GTL is not applicable. The opposite condition of GTL applicability, l ≪ D/v, is equivalent to the requirement that the relative supersaturation Δμ/kT ≪ α/l, where α is the molecular size. Thus, the GTL may be applied to a segment of 10³α ≈ 3 × 10⁻⁵ cm ≈ 0.3 μm only if the supersaturation is less than 0.1%, while practically used driving forces for crystallization are much larger. Relationships alternative to the GTL for different, but low, kink densities are discussed. They support experimental evidence that the Burton-Cabrera-Frank theory of spiral growth predicts growth rates about half the observed figures. Also, application of the GTL results in unrealistic step energies, while the suggested kinetic laws give reasonable figures.
SUBLIMATION-DRIVEN ACTIVITY IN MAIN-BELT COMET 313P/GIBBS
Energy Technology Data Exchange (ETDEWEB)
Hsieh, Henry H. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Hainaut, Olivier [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching bei München (Germany); Novaković, Bojan [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Studentski trg 16, 11000 Belgrade (Serbia); Bolin, Bryce [Observatoire de la Côte d’Azur, Boulevard de l’Observatoire, B.P. 4229, F-06304 Nice Cedex 4 (France); Denneau, Larry; Haghighipour, Nader; Kleyna, Jan; Meech, Karen J.; Schunova, Eva; Wainscoat, Richard J. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Fitzsimmons, Alan [Astrophysics Research Centre, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Kokotanekova, Rosita; Snodgrass, Colin [Planetary and Space Sciences, Department of Physical Sciences, The Open University, Milton Keynes MK7 6AA (United Kingdom); Lacerda, Pedro [Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Micheli, Marco [ESA SSA NEO Coordination Centre, Frascati, RM (Italy); Moskovitz, Nick; Wasserman, Lawrence [Lowell Observatory, 1400 W. Mars Hill Road, Flagstaff, AZ 86001 (United States); Waszczak, Adam, E-mail: hhsieh@asiaa.sinica.edu.tw [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)
2015-02-10
We present an observational and dynamical study of newly discovered main-belt comet 313P/Gibbs. We find that the object is clearly active both in observations obtained in 2014 and in precovery observations obtained in 2003 by the Sloan Digital Sky Survey, strongly suggesting that its activity is sublimation-driven. This conclusion is supported by a photometric analysis showing an increase in the total brightness of the comet over the 2014 observing period, and dust modeling results showing that the dust emission persists over at least three months during both active periods, where we find start dates for emission no later than 2003 July 24 ± 10 for the 2003 active period and 2014 July 28 ± 10 for the 2014 active period. From serendipitous observations by the Subaru Telescope in 2004 when the object was apparently inactive, we estimate that the nucleus has an absolute R-band magnitude of H_R = 17.1 ± 0.3, corresponding to an effective nucleus radius of r_e ∼ 1.00 ± 0.15 km. The object’s faintness at that time means we cannot rule out the presence of activity, and so this computed radius should be considered an upper limit. We find that 313P’s orbit is intrinsically chaotic, having a Lyapunov time of T_l = 12,000 yr and being located near two three-body mean-motion resonances with Jupiter and Saturn, 11J-1S-5A and 10J+12S-7A, yet appears stable over >50 Myr in an apparent example of stable chaos. We furthermore find that 313P is the second main-belt comet, after P/2012 T1 (PANSTARRS), to belong to the ∼155 Myr old Lixiaohua asteroid family.
Effects of visualization on algorithm comprehension
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
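Since the thesis centers on Dijkstra's algorithm, a compact reference implementation may help fix ideas; this is the textbook priority-queue formulation, not the visualization tool itself.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, node already settled
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

example = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(example, "A"))             # {'A': 0, 'C': 1, 'B': 3}
```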
UMAPRM: Uniformly sampling the medial axis
Yeh, Hsin-Yi Cindy; Denny, Jory; Lindsey, Aaron; Thomas, Shawna; Amato, Nancy M.
2014-01-01
© 2014 IEEE. Maintaining clearance, or distance from obstacles, is a vital component of successful motion planning algorithms. Maintaining high clearance often creates safer paths for robots. Contemporary sampling-based planning algorithms
International Nuclear Information System (INIS)
Rog, G.; Kucza, W.; Kozlowska-Rog, A.
2004-01-01
The standard Gibbs free energy of formation of LiMnO2 and LiMn2O4 at temperatures of (680, 740 and 800) K has been determined with the help of solid-state galvanic cells involving a lithium-β-alumina electrolyte. The equilibrium electrical potentials of a cathode containing LixMn2O4 spinel, in the composition ranges 0 ≤ x ≤ 1 and 1 ≤ x ≤ 2, vs. metallic lithium in a reversible intercalation galvanic cell have been calculated. The existence of two voltage plateaus, which appear during the charging and discharging processes in reversible intercalation of lithium into LixMn2O4 spinel, has been discussed.
Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel
2018-01-01
Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
Energy Technology Data Exchange (ETDEWEB)
Rauscher, Sarah; Pomes, Regis, E-mail: pomes@sickkids.ca
2010-11-01
Simulated tempering distributed replica sampling (STDR) is a generalized-ensemble method designed specifically for simulations of large molecular systems on shared and heterogeneous computing platforms [Rauscher, Neale and Pomes (2009) J. Chem. Theor. Comput. 5, 2640]. The STDR algorithm consists of an alternation of two steps: (1) a short molecular dynamics (MD) simulation; and (2) a stochastic temperature jump. Repeating these steps thousands of times results in a random walk in temperature, which allows the system to overcome energetic barriers, thereby enhancing conformational sampling. The aim of the present paper is to provide a practical guide to applying STDR to complex biomolecular systems. We discuss the details of our STDR implementation, which is a highly-parallel algorithm designed to maximize computational efficiency while simultaneously minimizing network communication and data storage requirements. Using a 35-residue disordered peptide in explicit water as a test system, we characterize the efficiency of the STDR algorithm with respect to both diffusion in temperature space and statistical convergence of structural properties. Importantly, we show that STDR provides a dramatic enhancement of conformational sampling compared to a canonical MD simulation.
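The stochastic temperature jump of step (2) is not spelled out in the abstract; a minimal sketch of one simulated-tempering jump with a Metropolis acceptance rule is shown below. The per-temperature weights must be pre-tuned (in practice to estimates of the free energy at each temperature), and the distributed-replica bookkeeping of STDR is omitted.

```python
import math
import random

def temperature_jump(i, E, betas, weights):
    """One stochastic temperature-jump step of simulated tempering:
    propose a neighboring rung of the temperature ladder and accept with
    the Metropolis criterion exp[(beta_i - beta_j) * E + (w_j - w_i)],
    where E is the current potential energy."""
    j = i + random.choice([-1, 1])
    if j < 0 or j >= len(betas):
        return i                          # reject moves off the ladder
    log_acc = (betas[i] - betas[j]) * E + (weights[j] - weights[i])
    return j if math.log(random.random()) < log_acc else i

# Repeating MD-then-jump thousands of times yields a random walk in
# temperature, which is the mechanism for crossing energetic barriers.
```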
Experimental scheme and restoration algorithm of block compression sensing
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed Sensing (CS) exploits the sparseness of a target to obtain its image from far fewer data than required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
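As a reference point for the OMP step, here is a minimal NumPy sketch of orthogonal matching pursuit; the block splitting and the TV solver used in the paper are omitted.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit the selected support by
    least squares, for k iterations."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Usage: x_hat = omp(measurement_matrix, measurements, sparsity_level)
```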
International Nuclear Information System (INIS)
Li Jing; Sun Yi; Zhu Peiping
2013-01-01
Differential phase-contrast computed tomography (DPC-CT) reconstruction problems are usually solved by using parallel-, fan- or cone-beam algorithms. For rod-shaped objects, the x-ray beams cannot recover all the slices of the sample at the same time. Thus, if a rod-shaped sample is to be reconstructed by the above algorithms, one must alternately translate and rotate the sample, which leads to lower efficiency. Helical cone-beam CT may significantly improve scanning efficiency for rod-shaped objects over other algorithms. In this paper, we propose a theoretically exact filtered-backprojection algorithm for helical cone-beam DPC-CT, which can be applied to reconstruct the refractive index decrement distribution of the samples directly from two-dimensional differential phase-contrast images. Numerical simulations are conducted to verify the proposed algorithm. Our work provides a potential solution for inspecting rod-shaped samples using DPC-CT, which may become applicable with the evolution of DPC-CT equipment. (paper)
The primary advantage of the Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and its ability to search for parameter sets that satisfy statistical guidelines, while requiring only one algorithm parameter (perturbation f...
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
Department of Decision Sciences and MIS, Concordia University, Montréal, Québec ... method by application to household income survey data, comparing it against the usual lognormal ... pensions, superannuation and annuities and other.
Advanced signal separation and recovery algorithms for digital x-ray spectroscopy
International Nuclear Information System (INIS)
Mahmoud, Imbaby I.; El-Tokhy, Mohamed S.
2015-01-01
X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, spectrum drawing and assessment of x-ray spectroscopy with high accuracy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector cooled with liquid nitrogen is used for signal extraction. The resolution of the ADC is 12 bits and its sampling rate is 5 MHz. Several algorithms are implemented, running on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. These algorithms cover signal preprocessing, signal separation and recovery, and spectrum drawing. Moreover, statistical measurements are used to evaluate these algorithms. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT) and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas the CWT takes a much longer execution time. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is introduced. The obtained results confirm that the second-derivative peak search algorithm as well as the extrema algorithm have very small error in comparison with the 1D non-derivative peak search algorithm. However, the second-derivative peak search algorithm takes a much longer execution time. Therefore, the extrema algorithm introduces better results than the other algorithms. It has the advantage of recovering and
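The second-derivative peak search is only named in the abstract; one plausible reading is sketched below, where overlapped peaks show up as local minima of the discrete second derivative. The thresholding logic is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def second_derivative_peaks(signal, threshold):
    """Locate peaks (including partially overlapped ones) as local minima
    of the discrete second derivative that fall below -threshold."""
    d2 = np.diff(signal, n=2)                   # second difference
    idx = []
    for i in range(1, len(d2) - 1):
        if d2[i] < -threshold and d2[i] <= d2[i - 1] and d2[i] <= d2[i + 1]:
            idx.append(i + 1)                   # shift back to signal indexing
    return np.array(idx)

# Usage: peak_positions = second_derivative_peaks(spectrum, threshold=5.0)
```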
Energy Technology Data Exchange (ETDEWEB)
Sobolev, S. L., E-mail: sobolev@icp.ac.ru [Russian Academy of Sciences, Institute of Problems of Chemical Physics (Russian Federation)
2017-03-15
An analytical model has been developed to describe the influence of solute trapping during rapid alloy solidification on the components of the Gibbs free energy change at the phase interface with emphasis on the solute drag energy. For relatively low interface velocity V < V_D, where V_D is the characteristic diffusion velocity, all the components, namely mixing part, local nonequilibrium part, and solute drag, significantly depend on solute diffusion and partitioning. When V ≥ V_D, the local nonequilibrium effects lead to a sharp transition to diffusionless solidification. The transition is accompanied by complete solute trapping and vanishing solute drag energy, i.e. partitionless and “dragless” solidification.
Lee, Kuo Hao; Chen, Jianhan
2017-11-15
Recasting temperature replica exchange (T-RE) as a special case of Gibbs sampling has led to a simple and efficient scheme for enhanced mixing (Chodera and Shirts, J. Chem. Phys., 2011, 135, 194110). To critically examine whether T-RE with independence sampling (T-REis) improves conformational sampling, we performed T-RE and T-REis simulations of ordered and disordered proteins using coarse-grained and atomistic models. The results demonstrate that T-REis effectively increases replica mobility in temperature space with minimal computational overhead, especially for folded proteins. However, enhanced mixing does not translate well into improved conformational sampling. The convergence of the thermodynamic properties of interest is similar, with slight improvements for T-REis on ordered systems. The study re-affirms that the efficiency of T-RE does not appear to be limited by temperature diffusion, but by the inherent rates of spontaneous large-scale conformational re-arrangements. Due to its simplicity and efficacy of enhanced mixing, T-REis is expected to be more effective when incorporated with various Hamiltonian-RE protocols. © 2017 Wiley Periodicals, Inc.
SamplingStrata: An R Package for the Optimization of Stratified Sampling
Directory of Open Access Journals (Sweden)
Giulio Barcaroli
2014-11-01
Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows determination of the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging
International Nuclear Information System (INIS)
Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco
2011-01-01
The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. This algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones, exploiting properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise systems.
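The paper's three-image retrieval rests on trigonometric identities that the abstract does not reproduce. For orientation, the classical three-step phase-stepping retrieval (which may differ in detail from the GB-MII algorithm) recovers the mean, amplitude and phase of I_k = a + b*cos(phi + 2*pi*k/3) as follows:

```python
import numpy as np

def retrieve_three_step(I0, I1, I2):
    """Retrieve mean (attenuation-related), amplitude (scattering-related)
    and phase (refraction-related) signals from three phase steps at
    0, 2*pi/3 and 4*pi/3, assuming I_k = a + b*cos(phi + 2*pi*k/3)."""
    steps = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    I = np.stack([I0, I1, I2])
    a = I.mean(axis=0)
    c = np.tensordot(np.exp(-1j * steps), I, axes=1)   # first Fourier coefficient
    b = 2.0 * np.abs(c) / 3.0                          # since c = (3b/2) e^{i phi}
    phi = np.angle(c)
    return a, b, phi

# Usage with three stepped images of equal shape:
# a, b, phi = retrieve_three_step(img0, img1, img2)
```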
A study of the Boltzmann and Gibbs entropies in the context of a stochastic toy model
Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna
2018-05-01
In this article we reconsider a stochastic toy model of thermal contact, first introduced in Onorato et al (2017 Eur. J. Phys. 38 045102), showing its educational potential for clarifying some current issues in the foundations of thermodynamics. The toy model can be realized in practice using dice and coins, and can be seen as representing thermal coupling of two subsystems with energy bounded from above. The system is used as a playground for studying the different behaviours of the Boltzmann and Gibbs temperatures and entropies in the approach to steady state. The process that models thermal contact between the two subsystems can be proved to be an ergodic, reversible Markov chain; thus the dynamics produces an equilibrium distribution in which the weight of each state is proportional to its multiplicity in terms of microstates. Each one of the two subsystems, taken separately, is formally equivalent to an Ising spin system in the non-interacting limit. The model is intended for educational purposes, and the level of readership of the article is aimed at advanced undergraduates.
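One possible rendering of such a stochastic thermal-contact toy model (the dice-and-coins realization in the paper may differ in detail) is a Markov chain that moves single energy quanta between bounded units and histograms the macrostate of one subsystem:

```python
import random
from collections import Counter

# Two subsystems of N units each; every unit holds 0..M energy quanta,
# so subsystem energy is bounded from above, as in the toy model.
N, M, steps = 6, 5, 200_000
units = [random.randint(0, M) for _ in range(2 * N)]   # units 0..N-1 form subsystem A

hist = Counter()
for _ in range(steps):
    i, j = random.randrange(2 * N), random.randrange(2 * N)
    if units[i] > 0 and units[j] < M:                  # move one quantum i -> j
        units[i] -= 1
        units[j] += 1
    hist[sum(units[:N])] += 1                          # record energy of subsystem A

# The histogram approximates the equilibrium distribution, with each
# macrostate weighted by its multiplicity in terms of microstates.
```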
Armstrong, Ian S; Hoffmann, Sandra A
2016-11-01
Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions that allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm, 'xSPECT', from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8:1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by the manufacturer presets available with the algorithm, and the accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In the uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower-count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values, whereas positive bias was measured in the four largest NEMA spheres. Non-monotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can be obtained from images reconstructed with the xSPECT algorithm without a CCF, although the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
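The closing remark about manual SUV conversion corresponds to the standard body-weight SUV formula; a sketch follows, with the decay correction to imaging time made explicit. The Tc-99m half-life default is an assumption about the tracer.

```python
import math

def suv_bw(voxel_kbq_per_ml, injected_mbq, weight_kg,
           delay_min, half_life_min=360.4):
    """Body-weight SUV from an activity-concentration image:
    SUV = C [kBq/ml] / (decay-corrected injected activity [kBq] / weight [g]),
    assuming tissue density ~1 g/ml. Default half-life is Tc-99m (~6 h)."""
    decayed_kbq = injected_mbq * 1e3 * math.exp(
        -math.log(2) * delay_min / half_life_min)      # activity at scan time
    return voxel_kbq_per_ml / (decayed_kbq / (weight_kg * 1e3))

# Example: a 5 kBq/ml voxel, 600 MBq injected, 75 kg patient, 60 min delay.
print(suv_bw(5.0, injected_mbq=600, weight_kg=75, delay_min=60))
```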
A novel procedure on next generation sequencing data analysis using text mining algorithm.
Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen
2016-05-13
Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure to analyse NGS data using topic modeling. It consists of four major procedures: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. An NGS data set of Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The output topics of the LDA algorithm could be treated as features of Salmonella strains to accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data, and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
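As an illustration of the topic-modeling step (using gensim's variational LDA rather than the Gibbs-sampling implementation evaluated in the paper), the toy documents below stand in for tokenized NGS-derived strings; the k-mer style tokenization is invented for the example.

```python
from gensim import corpora, models

# Toy "documents": tokens that might be derived from NGS reads
# (the preprocessing and tokenization scheme here is purely illustrative).
docs = [["atg", "gca", "ttc", "gca"],
        ["ttc", "ttc", "cgg", "atg"],
        ["gca", "gca", "atg", "cgg"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Fit LDA; the resulting topic-document and topic-word matrices can then
# feed hierarchical clustering, as in the paper's workflow.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
doc_topics = [lda.get_document_topics(bow) for bow in corpus]
```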
A recurrence-weighted prediction algorithm for musical analysis
Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon
2018-03-01
Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music and present new algorithms for extending a sample of music while maintaining characteristics similar to the original piece. Using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues to take recurrence times into account, and demonstrate with examples how the new algorithm can produce predictions with a high degree of similarity to the original sample.
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
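A fixed-parameter cousin of the proposed method can be sketched with SciPy's monotone Hermite (PCHIP) interpolant: re-bin by interpolating the cumulative integral and differencing it at the new edges. The paper's single user parameter for overshoot control has no counterpart in this stand-in.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_old, values, edges_new):
    """Integral-conserving re-binning: build the cumulative integral,
    approximate it with a monotone Hermite (PCHIP) curve, and difference
    it at the new bin edges. Monotonicity avoids over/undershoots and
    negative bins for positive-definite data."""
    cum = np.concatenate([[0.0], np.cumsum(values)])   # integral up to each edge
    F = PchipInterpolator(edges_old, cum)
    return np.diff(F(edges_new))

old_edges = np.linspace(0, 10, 11)
counts = np.array([0, 1, 4, 9, 16, 9, 4, 1, 0, 0], float)
new = rebin_conservative(old_edges, counts, np.linspace(0, 10, 21))
assert np.isclose(new.sum(), counts.sum())             # total integral preserved
```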
Use of the MULTINEST algorithm for gravitational wave data analysis
International Nuclear Information System (INIS)
Feroz, Farhan; Hobson, Michael P; Gair, Jonathan R; Porter, Edward K
2009-01-01
We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.
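The nested-sampling loop itself is easy to sketch; the version below replaces the worst live point by naive rejection sampling, whereas MULTINEST's key contribution is drawing the replacement efficiently from bounding ellipsoids around the live set.

```python
import numpy as np

def nested_sampling(log_likelihood, prior_sample, n_live=100, n_iter=1000):
    """Minimal nested sampling sketch: the prior volume shrinks by roughly
    exp(-1/n_live) per iteration; the worst live point is replaced by a new
    prior draw constrained to higher likelihood (naive rejection here)."""
    live = [prior_sample() for _ in range(n_live)]
    log_L = np.array([log_likelihood(p) for p in live])
    log_Z = -np.inf
    for i in range(n_iter):
        worst = int(np.argmin(log_L))
        log_width = -i / n_live + np.log(1 - np.exp(-1 / n_live))
        log_Z = np.logaddexp(log_Z, log_L[worst] + log_width)  # evidence shell
        while True:                        # draw a replacement above the threshold
            p = prior_sample()
            if log_likelihood(p) > log_L[worst]:
                live[worst], log_L[worst] = p, log_likelihood(p)
                break
    return log_Z

# Example (1-D Gaussian likelihood, uniform prior on [-5, 5]):
# log_Z = nested_sampling(lambda p: -0.5 * p**2, lambda: np.random.uniform(-5, 5))
```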
Sampling Operations on Big Data
2015-11-29
... categories. These include edge sampling methods, where edges are selected by a predetermined criterion, and snowball sampling methods, where algorithms start... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and...
Phase stability analysis of liquid-liquid equilibrium with stochastic methods
Directory of Open Access Journals (Sweden)
G. Nagatani
2008-09-01
Full Text Available Minimization of Gibbs free energy using activity coefficient models and nonlinear equation solution techniques is commonly applied to phase stability problems. However, when conventional techniques, such as the Newton-Raphson method, are employed, serious convergence problems may arise. Due to the existence of multiple solutions, several problems can be found in modeling liquid-liquid equilibrium of multicomponent systems, which are highly dependent on the initial guess. In this work phase stability analysis of liquid-liquid equilibrium is investigated using the NRTL model. For this purpose, two distinct stochastic numerical algorithms are employed to minimize the tangent plane distance of Gibbs free energy: a subdivision algorithm that can find all roots of nonlinear equations for liquid-liquid stability analysis and the Simulated Annealing method. Results obtained in this work for the two stochastic algorithms are compared with those of the Interval Newton method from the literature. Several different binary and multicomponent systems from the literature were successfully investigated.
Modelling metal-humate interactions: an approach based on the Gibbs-Donnan concept
International Nuclear Information System (INIS)
Ephraim, J.H.
1995-01-01
Humic and fulvic acids constitute an appreciable portion of organic substances in both aquatic and terrestrial environments. Their ability to sequester metal ions and other trace elements has engaged the interest of numerous environmental scientists recently and even though considerable advances have been made, a lot more remains unknown in the area. The existence of high molecular weight fractions and functional group heterogeneity have endowed ion exchange characteristics to these substances. For example, the cation exchange capacities of some humic substances have been compared to those of smectites. Recent development in the solution chemistry has also indicated that humic substances have the capability to interact with other anions because of their amphiphilic nature. In this paper, metal-humate interaction is described by relying heavily on information obtained from treatment of the solution chemistry of ion exchangers as typical polymers. In such a treatment, the perturbations to the metal-humate interaction are estimated by resort to the Gibbs-Donnan concept where the humic substance molecule is envisaged as having a potential counter-ion concentrating region around its molecular domain into which diffusible components can enter or leave depending on their corresponding electrochemical potentials. Information from studies with ion exchangers have been adapted to describe ionic equilibria involving these substances by making it possible to characterise the configuration/conformation of these natural organic acids and to correct for electrostatic effects in the metal-humate interaction. The resultant unified physicochemical approach has facilitated the identification and estimation of the complications to the solution chemistry of humic substances. (authors). 15 refs., 1 fig
An improved multi-domain convolution tracking algorithm
Sun, Xin; Wang, Haiying; Zeng, Yingsen
2018-04-01
Along with the wide application of the Deep Learning in the field of Computer vision, Deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on the improved multidomain convolution neural network, and the VOT video set is pre-trained on the network by multi-domain training strategy. In the process of online tracking, the network evaluates candidate targets sampled from vicinity of the prediction target in the previous with Gaussian distribution, and the candidate target with the highest score is recognized as the prediction target of this frame. The Bounding Box Regression model is introduced to make the prediction target closer to the ground-truths target box of the test set. Grouping-update strategy is involved to extract and select useful update samples in each frame, which can effectively prevent over fitting. And adapt to changes in both target and environment. To improve the speed of the algorithm while maintaining the performance, the number of candidate target succeed in adjusting dynamically with the help of Self-adaption parameter Strategy. Finally, the algorithm is tested by OTB set, compared with other high-performance tracking algorithms, and the plot of success rate and the accuracy are drawn. which illustrates outstanding performance of the tracking algorithm in this paper.
Evaluation of Algorithms of Anti-HIV Antibody Tests
Directory of Open Access Journals (Sweden)
Paranjape R.S
1997-01-01
Full Text Available Research question: Can alternate algorithms be used in place of the conventional algorithm for epidemiological studies of HIV infection at lower expense? Objective: To compare the results of HIV sero-prevalence as determined by test algorithms combining three kits with the conventional test algorithm. Study design: Cross-sectional. Participants: 282 truck drivers. Statistical analysis: Sensitivity and specificity analysis and predictive values. Results: Three different algorithms that do not include Western Blot (WB) were compared with the conventional algorithm in a truck driver population with 5.6% prevalence of HIV-1 infection. Algorithms with one EIA (Genetic Systems or Biotest) and a rapid test (Immunocomb), or with two EIAs, showed 100% positive predictive value in relation to the conventional algorithm. Using an algorithm with an EIA as screening test and a rapid test as confirmatory test was 50 to 70% less expensive than the conventional algorithm per positive serum sample. These algorithms obviate the interpretation of indeterminate results and also give a differential diagnosis of HIV-2 infection. Alternate algorithms are ideally suited for community-based control programmes in developing countries. Application of these algorithms in populations with low prevalence should also be studied in order to evaluate universal applicability.
Energy Preserved Sampling for Compressed Sensing MRI
Directory of Open Access Journals (Sweden)
Yudong Zhang
2014-01-01
Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To further improve on them, we propose a novel energy preserving sampling (ePRESS) method. Besides, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brain images of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function can achieve better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
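Iterative thresholding algorithms of the ITA/SISTA/FISTA family share the gradient-plus-shrinkage iteration sketched below for the generic l1-regularized problem; the paper's modified cost function (phase correction, region-of-support matrix) is not included.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    alternate a gradient step with componentwise shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Usage: x_hat = ista(sampling_operator_matrix, measurements, lam=0.1)
```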
Special nuclear material inventory sampling plans
International Nuclear Information System (INIS)
Vaccaro, H.S.; Goldman, A.S.
1987-01-01
This paper presents improved procedures for obtaining statistically valid sampling plans for nuclear facilities. The double sampling concept and methods for developing optimal double sampling plans are described. An algorithm is described that is satisfactory for finding optimal double sampling plans and choosing appropriate detection and false alarm probabilities
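For a double sampling plan with acceptance numbers c1 and c2, the probability of accepting a lot follows from binomial probabilities, as in this sketch (the plan parameters are illustrative, not the optimal plans of the paper):

```python
from scipy.stats import binom

def accept_prob(p, n1, c1, n2, c2):
    """Probability that a lot with defect rate p is accepted under a double
    sampling plan: accept if d1 <= c1; if c1 < d1 <= c2, draw a second
    sample of n2 and accept if d1 + d2 <= c2; otherwise reject."""
    prob = binom.cdf(c1, n1, p)
    for d1 in range(c1 + 1, c2 + 1):
        prob += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return prob

# Operating-characteristic point at a 1% defect rate for an illustrative plan.
print(accept_prob(0.01, n1=50, c1=0, n2=100, c2=2))
```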
An algorithm for online optimization of accelerators
Energy Technology Data Exchange (ETDEWEB)
Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)
2013-10-01
We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses parabolic fit of data points that uniformly sample the bracketed zone. Moreover, it is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
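The robust line optimizer can be caricatured in a few lines: uniformly sample the bracketed zone and take the vertex of a least-squares parabola, which averages out measurement noise. The noise-aware bracketing and outlier handling of the actual RCDS method are omitted.

```python
import numpy as np

def parabolic_line_minimum(f, x0, direction, half_range=1.0, n_pts=9):
    """Noise-robust 1-D minimization along a direction, in the spirit of the
    RCDS line optimizer: fit a parabola to uniform samples of the bracketed
    zone and return the point at its vertex (clipped to the bracket)."""
    t = np.linspace(-half_range, half_range, n_pts)
    y = np.array([f(x0 + ti * direction) for ti in t])
    a, b, _ = np.polyfit(t, y, 2)          # y ~ a t^2 + b t + c
    t_min = -b / (2 * a) if a > 0 else t[np.argmin(y)]
    return x0 + np.clip(t_min, -half_range, half_range) * direction

# Usage: x_new = parabolic_line_minimum(noisy_objective, x_current, unit_dir)
```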
Entropy landscape and non-Gibbs solutions in constraint satisfaction problems
International Nuclear Information System (INIS)
Dall'Asta, L.; Ramezanpour, A.; Zecchina, R.
2008-05-01
We study the entropy landscape of solutions for the bicoloring problem in random graphs, a representative difficult constraint satisfaction problem. Our goal is to classify which type of clusters of solutions are addressed by different algorithms. In the first part of the study we use the cavity method to obtain the number of clusters with a given internal entropy and determine the phase diagram of the problem, e.g. dynamical, rigidity and SAT-UNSAT transitions. In the second part of the paper we analyze different algorithms and locate their behavior in the entropy landscape of the problem. For instance we show that a smoothed version of a decimation strategy based on Belief Propagation is able to find solutions belonging to sub-dominant clusters even beyond the so called rigidity transition where the thermodynamically relevant clusters become frozen. These non-equilibrium solutions belong to the most probable unfrozen clusters. (author)
Optimal sampling strategy for data mining
International Nuclear Information System (INIS)
Ghaffar, A.; Shahbaz, M.; Mahmood, W.
2013-01-01
Latest technologies like the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is getting very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The sampling process requires much care because many factors are involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size, called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
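The abstract does not state which statistical formula is used; the classical sufficient-sample-size formula with finite-population correction is one standard choice and is sketched here as an assumption.

```python
import math

def sample_size(N, p=0.5, z=1.96, e=0.05):
    """Classical sufficient-sample-size formula with finite-population
    correction: n0 = z^2 p(1-p)/e^2, then n = n0 / (1 + (n0 - 1)/N),
    for population size N, proportion p, z-score z and margin of error e."""
    n0 = z * z * p * (1 - p) / (e * e)
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(sample_size(N=1_000_000))            # ~385 records from a million
```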
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino
2018-05-01
This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
A new LMS algorithm for analysis of atrial fibrillation signals.
Ciaccio, Edward J; Biviano, Angelo B; Whang, William; Garan, Hasan
2012-03-26
A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Equations for normalization of x-axis and y-axis shift and scale are first derived. The algorithm is implemented for real-time analysis of CFAE acquired during atrial fibrillation (AF). Data was acquired at a 977 Hz sampling rate from 10 paroxysmal and 10 persistent AF patients undergoing clinical electrophysiologic study and catheter ablation therapy. Over 24 trials, normalization characteristics using the new algorithm with four weights were compared to the Widrow-Hoff LMS algorithm with four tapped delays. The time for convergence, and the mean squared error (MSE) after convergence, were compared. The new LMS algorithm was also applied to lead aVF of the electrocardiogram in one patient with longstanding persistent AF, to enhance the F wave and to monitor extrinsic changes in signal shape. The average waveform over a 25 s interval was used as a prototypical reference signal for matching with the aVF lead. Based on the derivation equations, the y-shift and y-scale adjustments of the new LMS algorithm were shown to be equivalent to the scalar form of the Widrow-Hoff LMS algorithm. For x-shift and x-scale adjustments, rather than implementing a long tapped delay as in Widrow-Hoff LMS, the new method uses only two weights. After convergence, the MSE for matching paroxysmal CFAE averaged 0.46 ± 0.49 μV²/sample for the new LMS algorithm versus 0.72 ± 0.35 μV²/sample for Widrow-Hoff LMS. The MSE for matching persistent CFAE averaged 0.55 ± 0.95 μV²/sample for the new LMS algorithm versus 0.62 ± 0.55 μV²/sample for Widrow-Hoff LMS. There were no significant differences in estimation
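For the y-shift and y-scale adjustments, which the paper shows to be equivalent to the scalar Widrow-Hoff update, a minimal sketch looks like this (the x-axis weights of the differential steepest descent method are omitted):

```python
import numpy as np

def lms_normalize(signal, reference, mu=0.01, n_passes=5):
    """LMS adaptation of two extrinsic features (y-scale a, y-shift b) so
    that a*signal + b tracks the reference, by steepest descent on the
    instantaneous squared error (the scalar Widrow-Hoff update)."""
    a, b = 1.0, 0.0
    for _ in range(n_passes):
        for s, r in zip(signal, reference):
            e = r - (a * s + b)            # instantaneous error
            a += mu * e * s                # gradient step for the scale weight
            b += mu * e                    # gradient step for the shift weight
    return a, b

s = np.sin(np.linspace(0, 4 * np.pi, 400))
print(lms_normalize(s, 2.5 * s + 0.3))     # converges near (2.5, 0.3)
```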
Research on compressive sensing reconstruction algorithm based on total variation model
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing breaks through the Nyquist sampling theorem and provides a strong theoretical basis for carrying out sampling and compression of image signals simultaneously. In imaging procedures based on compressed sensing theory, it not only reduces the storage space but also greatly reduces the demand on detector resolution. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction realizes super-resolution imaging. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information better. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm, and typical reconstruction algorithms are compared in the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: at low measurement rates it can quickly and accurately recover the target image.
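As one concrete reading of TV-regularized recovery, the sketch below minimizes the least-squares data term plus a smoothed anisotropic TV penalty by plain gradient descent; the paper's augmented-Lagrangian/alternating-direction solver would replace this simple loop, and lam, step, and eps are illustrative assumptions.

    import numpy as np

    def tv_reconstruct(A, y, shape, lam=0.1, step=1e-3, iters=500, eps=1e-6):
        """Minimize 0.5*||A x - y||^2 + lam * TV_smooth(x) by gradient descent,
        with anisotropic TV smoothed as sqrt(d^2 + eps)."""
        m, n = shape
        x = np.zeros(m * n)
        for _ in range(iters):
            img = x.reshape(m, n)
            dh, dv = np.diff(img, axis=1), np.diff(img, axis=0)
            gh = dh / np.sqrt(dh ** 2 + eps)   # derivative of the smoothed |.|
            gv = dv / np.sqrt(dv ** 2 + eps)
            g = np.zeros_like(img)
            g[:, 1:] += gh; g[:, :-1] -= gh    # adjoint of horizontal difference
            g[1:, :] += gv; g[:-1, :] -= gv    # adjoint of vertical difference
            x -= step * (A.T @ (A @ x - y) + lam * g.ravel())
        return x.reshape(m, n)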
Cabrera-Bañegil, Manuel; Hurtado-Sánchez, María Del Carmen; Galeano-Díaz, Teresa; Durán-Merás, Isabel
2017-04-01
The potential of front-face fluorescence spectroscopy combined with second-order chemometric methods was investigated for the quantification of the main polyphenols present in wine samples. Parallel factor analysis (PARAFAC) and unfolded partial least squares coupled to residual bilinearization (U-PLS/RBL) were assessed for the quantification of catechin, epicatechin, quercetin, resveratrol, caffeic acid, gallic acid, p-coumaric acid, and vanillic acid in red wines. Excitation-emission matrices of different red wine samples, without pretreatment, were obtained in front-face mode, recording emission between 290 and 450 nm and exciting between 240 and 290 nm for the analysis of epicatechin, catechin, caffeic acid, gallic acid, and vanillic acid; and with excitation and emission between 300-360 and 330-400 nm, respectively, for the analysis of resveratrol. The U-PLS/RBL algorithm provided the best results, and this methodology was validated against an optimized liquid chromatography procedure coupled to diode array and fluorimetric detectors, obtaining very good correlation for vanillic acid, caffeic acid, epicatechin and resveratrol.
Use of the MULTINEST algorithm for gravitational wave data analysis
Energy Technology Data Exchange (ETDEWEB)
Feroz, Farhan; Hobson, Michael P [Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Gair, Jonathan R [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom); Porter, Edward K [APC, UMR 7164, Universite Paris 7 Denis Diderot, 10, rue Alice Domon et Leonie Duquet, 75205 Paris Cedex 13 (France)
2009-11-07
We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.
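The evidence accumulation at the core of nested sampling can be sketched in a few lines. The version below substitutes naive rejection sampling from the prior for MULTINEST's ellipsoidal decomposition (the step the ellipsoids exist to accelerate) and omits the final live-point contribution; the live-point count and iteration budget are illustrative assumptions.

    import numpy as np

    def nested_sampling(loglike, prior_draw, n_live=100, n_iter=500, seed=0):
        """Plain (Skilling) nested sampling; MULTINEST replaces the naive
        rejection step below with uniform draws from overlapping ellipsoids."""
        rng = np.random.default_rng(seed)
        live = [prior_draw(rng) for _ in range(n_live)]
        logL = np.array([loglike(p) for p in live])
        logZ, X_prev = -np.inf, 1.0
        for i in range(1, n_iter + 1):
            worst = int(np.argmin(logL))          # lowest-likelihood live point
            X = np.exp(-i / n_live)               # expected remaining prior volume
            logZ = np.logaddexp(logZ, logL[worst] + np.log(X_prev - X))
            X_prev = X
            Lmin = logL[worst]
            while True:                           # draw from prior until inside contour
                p = prior_draw(rng)
                if loglike(p) > Lmin:
                    break
            live[worst], logL[worst] = p, loglike(p)
        return logZ                               # final live-point term omitted here

    # toy usage: 2D unit-Gaussian likelihood, uniform prior on [-5, 5]^2
    logZ = nested_sampling(lambda p: -0.5 * float(p @ p),
                           lambda rng: rng.uniform(-5, 5, size=2))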
Efficient Unbiased Rendering using Enlightened Local Path Sampling
DEFF Research Database (Denmark)
Kristensen, Anders Wang
measurements, which are the solution to the adjoint light transport problem. The second is a representation of the distribution of radiance and importance in the scene. We also derive a new method of particle sampling, which is advantageous compared to existing methods. Together we call the resulting algorithm....... The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow...... is because of a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of scene description......
Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion
Directory of Open Access Journals (Sweden)
Qiuyu Wang
2016-06-01
Full Text Available In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, in which a singular value decomposition is involved in the proximal operator. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
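A minimal sketch under stated assumptions: the proximal operator of the nuclear norm is singular-value soft-thresholding, and each iterate combines the previous point, the current iterate, and its proximal gradient point; the extrapolation weight beta below is an assumed combination rule, not the paper's exact one.

    import numpy as np

    def svt(Z, tau):
        """Proximal operator of tau*||.||_*: soft-threshold the singular values."""
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def two_step_prox_grad(M, mask, lam=1.0, step=1.0, beta=0.5, iters=200):
        """min_X 0.5*||P_Omega(X - M)||_F^2 + lam*||X||_* where mask marks the
        observed entries; each iterate combines the previous point, the current
        iterate and its proximal gradient point."""
        X_prev = X = np.zeros_like(M, dtype=float)
        for _ in range(iters):
            G = mask * (X - M)                      # gradient of the data term
            P = svt(X - step * G, step * lam)       # proximal gradient point
            X_prev, X = X, P + beta * (P - X_prev)  # two-step combination
        return X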
The Gibbs free energy of homogeneous nucleation: From atomistic nuclei to the planar limit.
Cheng, Bingqing; Tribello, Gareth A; Ceriotti, Michele
2017-09-14
In this paper we discuss how the information contained in atomistic simulations of homogeneous nucleation should be used when fitting the parameters in macroscopic nucleation models. We show how the number of solid and liquid atoms in such simulations can be determined unambiguously by using a Gibbs dividing surface and how the free energy as a function of the number of solid atoms in the nucleus can thus be extracted. We then show that the parameters (the chemical potential, the interfacial free energy, and a Tolman correction) of a model based on classical nucleation theory can be fitted using the information contained in these free-energy profiles but that the parameters in such models are highly correlated. This correlation is unfortunate as it ensures that small errors in the computed free energy surface can give rise to large errors in the extrapolated properties of the fitted model. To resolve this problem we thus propose a method for fitting macroscopic nucleation models that uses simulations of planar interfaces and simulations of three-dimensional nuclei in tandem. We show that when the chemical potentials and the interface energy are pinned to their planar-interface values, more precise estimates for the Tolman length are obtained. Extrapolating the free energy profile obtained from small simulation boxes to larger nuclei is thus more reliable.
Representing and computing regular languages on massively parallel networks
Energy Technology Data Exchange (ETDEWEB)
Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum-entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
Advanced defect detection algorithm using clustering in ultrasonic NDE
Gongzhang, Rui; Gachagan, Anthony
2016-02-01
A range of materials used in industry exhibit scattering properties which limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, hence becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing using a conventional defect detection algorithm. The raw A-scan data can be acquired from either traditional single-transducer or phased array configurations. The proposed algorithm uses the concept of unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and the ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as 'legitimate reflector' or 'artefact' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer is used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan is pre-processed using SSP, and the subsequent application of the proposed clustering algorithm has provided an additional reduction to PFA while maintaining PoD for both samples compared with SSP results alone.
Modeling Electric Double-Layer Capacitors Using Charge Variation Methodology in Gibbs Ensemble
Directory of Open Access Journals (Sweden)
Ganeshprasad Pavaskar
2018-01-01
Full Text Available Supercapacitors deliver higher power than batteries and find applications in grid integration and electric vehicles. Recent work by Chmiola et al. (2006) has revealed an unexpected increase in the capacitance of porous carbon electrodes using ionic liquids as electrolytes. The work has generated curiosity among both experimentalists and theoreticians. Here, we have performed molecular simulations using a recently developed technique (Punnathanam, 2014) for simulating supercapacitor systems. In this technique, the two electrodes (containing electrolyte in a slit pore) are simulated in two different boxes using the Gibbs ensemble methodology. This reduces the number of particles required and the interfacial interactions, which helps in reducing the computational load. The method simulates an electric double-layer capacitor (EDLC) with macroscopic electrodes using much smaller system sizes. In addition, the charges on individual electrode atoms are allowed to vary in response to movement of electrolyte ions (i.e., the electrode is polarizable) while ensuring these atoms are at the same electric potential. We also present the application of our technique to EDLCs with the electrodes modeled as slit pores and as complex three-dimensional pore networks for different electrolyte geometries. The smallest pore geometry showed an increase in capacitance toward the potential of zero charge. This is in agreement with the new understanding of the electrical double layer in regions of dense ionic packing, as noted by Kornyshev's theoretical model (Kornyshev, 2007), which showed a similar trend; this is not addressed by the classical Gouy-Chapman theory for the electric double layer. Furthermore, the electrode polarizability simulated in the model improved the accuracy of the calculated capacitance. However, its addition did not significantly alter the capacitance values in the voltage range considered.
Lightweight link dimensioning using sFlow sampling
DEFF Research Database (Denmark)
de Oliviera Schmidt, Ricardo; Sadre, Ramin; Sperotto, Anna
2013-01-01
not be trivial in high-speed links. Aiming at scalability, operators often deploy packet sampling in monitoring, but little is known about how it affects link dimensioning. In this paper we assess the feasibility of lightweight link dimensioning using sFlow, which is a widely-deployed traffic monitoring tool. We...... implement the sFlow sampling algorithm and use a previously proposed and validated dimensioning formula that needs traffic variance. We validate our approach using packet captures from real networks. Results show that the proposed procedure is successful for a range of sampling rates and that, due to the randomness...... of the sampling algorithm, the error introduced by scaling the traffic variance yields more conservative results that cope with short-term traffic fluctuations....
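A rough sketch of such a pipeline, under stated assumptions: packets are sampled with probability 1/n, per-interval byte counts are upscaled by n, and capacity follows a provisioning formula of the form C = rho + (1/T)*sqrt(-2 ln(eps) v(T)) used in this line of work; the formula, the parameter values, and nonnegative timestamps are assumptions, not quotations of the paper.

    import numpy as np

    def dimension_link(pkt_bytes, pkt_times, T=1.0, eps=0.01, n=100, seed=0):
        """1-in-n random packet sampling, upscaling by n, then a provisioning
        formula with rho the mean rate and v(T) the variance of traffic per
        interval of length T; eps is the allowed overflow probability."""
        rng = np.random.default_rng(seed)
        keep = rng.random(len(pkt_bytes)) < 1.0 / n               # sFlow-style sampling
        bins = (pkt_times[keep] / T).astype(int)                  # interval index per packet
        traffic = np.bincount(bins, weights=n * pkt_bytes[keep])  # upscaled bytes per T
        rho, v = traffic.mean() / T, traffic.var(ddof=1)
        return rho + np.sqrt(-2.0 * np.log(eps) * v) / T          # capacity, bytes/s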
Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing
Directory of Open Access Journals (Sweden)
Xin Meng
2014-02-01
Full Text Available Compressed sensing is a novel signal sampling theory under the condition that the signal is sparse or compressible. Existing recovery algorithms based on gradient projection either require prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection is proposed, referred to as Quasi Gradient Projection. The algorithm uses a quasi-gradient direction and two step-size schemes along this direction. The algorithm does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm recovers the signal more accurately than GPSR, which likewise requires no prior knowledge. Meanwhile, the algorithm has a lower computational complexity.
Comparison of transition-matrix sampling procedures
DEFF Research Database (Denmark)
Yevick, D.; Reimer, M.; Tromborg, Bjarne
2009-01-01
We compare the accuracy of the multicanonical procedure with that of transition-matrix models of static and dynamic communication system properties incorporating different acceptance rules. We find that for appropriate ranges of the underlying numerical parameters, algorithmically simple yet highly accurate procedures can be employed in place of the standard multicanonical sampling algorithm....
Distribution Bottlenecks in Classification Algorithms
Zwartjes, G.J.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Hurink, Johann L.
2012-01-01
The abundance of data available on Wireless Sensor Networks makes online processing necessary. In industrial applications, for example, the correct operation of equipment can be the point of interest while raw sampled data is of minor importance. Classification algorithms can be used to make state
Density meter algorithm and system for estimating sampling/mixing uncertainty
International Nuclear Information System (INIS)
Shine, E.P.
1986-01-01
The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses
Efficient and exact sampling of simple graphs with given arbitrary degree sequence.
Directory of Open Access Journals (Sweden)
Charo I Del Genio
Full Text Available Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics, through social networks, to Internet modeling. Existing graph sampling methods are either link-swap based (Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured either uniformly over the graph ensemble, or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using a central limit theorem-based reasoning, we argue that, for large N and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution. As examples, we apply our algorithm to generate networks with degree sequences drawn from power-law distributions and from binomial distributions.
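For contrast with the rejection-free method proposed here, the following is a sketch of the stub-matching Configuration Model baseline whose uncontrolled rejections the paper criticizes; the retry cap is an illustrative assumption.

    import numpy as np

    def configuration_model(degrees, max_tries=1000, seed=0):
        """Stub matching: pair half-edges uniformly at random and restart the
        whole draw whenever a self-loop or multi-edge appears -- the
        uncontrolled rejection behaviour the paper's algorithm avoids."""
        rng = np.random.default_rng(seed)
        stubs = np.repeat(np.arange(len(degrees)), degrees)  # one entry per half-edge
        assert len(stubs) % 2 == 0, "degree sum must be even"
        for _ in range(max_tries):
            rng.shuffle(stubs)
            pairs = stubs.reshape(-1, 2)
            edges = {tuple(sorted(e)) for e in pairs}
            if len(edges) == len(pairs) and all(u != v for u, v in edges):
                return edges                                  # simple graph found
        raise RuntimeError("rejection rate too high for this degree sequence")

    print(configuration_model([2, 2, 2, 1, 1]))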
Watermarking Algorithms for 3D NURBS Graphic Data
Directory of Open Access Journals (Sweden)
Jae Jun Lee
2004-10-01
Full Text Available Two watermarking algorithms for 3D nonuniform rational B-spline (NURBS) graphic data are proposed: one is appropriate for steganography, and the other for watermarking. Instead of directly embedding data into the parameters of NURBS, the proposed algorithms embed data into the 2D virtual images extracted by parameter sampling of the 3D model. As a result, the proposed steganography algorithm can embed information into more places of the surface than the conventional algorithm, while preserving the data size of the model. Also, any existing 2D watermarking technique can be used for the watermarking of 3D NURBS surfaces. From the experiment, it is found that the algorithm for the watermarking is robust to attacks on weights, control points, and knots. It is also found to be robust to the remodeling of NURBS models.
International Nuclear Information System (INIS)
Sandstroem, Malin Hannah; Bostroem, Dan; Rosen, Erik
2006-01-01
The equilibrium reactions 3Ca2P2O7(s) + 6Ni(s) ⇌ 2Ca3(PO4)2(s) + 2Ni3P(s) + (5/2)O2(g) and 2Ca(PO3)2(s) + 6Ni(s) ⇌ Ca2P2O7(s) + 2Ni3P(s) + (5/2)O2(g) were studied in the temperature range 890 K to 1140 K. The oxygen equilibrium pressures were determined using galvanic cells incorporating yttria-stabilized zirconia as solid electrolyte. From the measured data, and using literature values of the standard Gibbs free energy of formation for Ca3(PO4)2 and Ni3P, the following relationships for the standard Gibbs free energy of formation of Ca2P2O7 and Ca(PO3)2 were calculated: ΔfG°(Ca2P2O7) ± 11 / (kJ·mol⁻¹) = −3475.9 + 1.5441(T/K) − 0.1051(T/K)·ln(T/K) and ΔfG°(Ca(PO3)2) ± 12 / (kJ·mol⁻¹) = −3334.8 + 6.1561(T/K) − 0.6950(T/K)·ln(T/K).
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed, and on this basis simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, while the PSNR of images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
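The core pipeline — measure with a preset structured matrix, invert with the pseudo-inverse — can be sketched as follows; the paper's cosine light field is replaced here with complex DFT rows for brevity, and the sampling ratio is an assumption.

    import numpy as np

    def ghost_image_dft(obj, m_ratio=0.5, seed=0):
        """Illuminate with rows of a DFT matrix (the preset structured field),
        record bucket values y = A x, reconstruct with x_hat = pinv(A) y.
        Toy sizes only: the full n x n DFT matrix is built explicitly."""
        x = obj.ravel().astype(complex)
        n = x.size
        F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # DFT matrix, one pattern per row
        rng = np.random.default_rng(seed)
        rows = rng.choice(n, size=int(m_ratio * n), replace=False)
        A = F[rows]                                 # deterministic measurement matrix
        y = A @ x                                   # simulated bucket measurements
        x_hat = np.linalg.pinv(A) @ y               # pseudo-inverse reconstruction
        return x_hat.real.reshape(obj.shape)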
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.
Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo
2018-04-01
Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2018-03-01
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime to a wide range of modes and sampling rates, from an ultralow-power mode when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected, in the case of arrhythmia. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm is above the agreed tolerances by only a fraction of the sample duration. The computational load for the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
Developing an Enhanced Lightning Jump Algorithm for Operational Use
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2009-01-01
Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). 4. Lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10 and Threshold 8). 5. Algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59% and an HSS of 75%.
Low dose reconstruction algorithm for differential phase contrast imaging.
Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Marco, Stampanoni
2011-01-01
Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of the refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from recent compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way, the compressed sensing reconstruction problem of DPCI can be transformed into an already-solved problem in transmission CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
Elsheikh, Ahmed H.
2014-02-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
Fast and accurate algorithm for the computation of complex linear canonical transforms.
Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus
2010-09-01
A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
External Threat Risk Assessment Algorithm (ExTRAA)
Energy Technology Data Exchange (ETDEWEB)
Powell, Troy C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-08-01
Two risk assessment algorithms and philosophies have been augmented and combined to form a new algorithm, the External Threat Risk Assessment Algorithm (ExTRAA), that allows for effective and statistically sound analysis of external threat sources in relation to individual attack methods. In addition to the attack method use probability and the attack method employment consequence, the concept of defining threat sources is added to the risk assessment process. Sample data is tabulated and depicted in radar plots and bar graphs for algorithm demonstration purposes. The largest success of ExTRAA is its ability to visualize the kind of risk posed in a given situation using the radar plot method.
A three-dimensional imaging algorithm based on the radiation model of electric dipole
International Nuclear Information System (INIS)
Tian Bo; Zhong Weijun; Tong Chuangming
2011-01-01
A three-dimensional imaging algorithm based on the radiation model of the dipole (DBP) is presented. Building on the principle of the back projection (BP) algorithm, the relationship between the near-field and far-field imaging models is analyzed based on the scattering model. First, the far-field sampling data are transferred to near-field sampling data by applying the radiation theory of the dipole. Then the transformed sampling data are projected onto the imaging region to obtain images of the targets. The capability of the new algorithm to detect targets is verified using the finite-difference time-domain (FDTD) method, and the coupling effect on imaging is analyzed. (authors)
Saha, Abhijit; Vivas, A. Katherina
2017-12-01
Ongoing and future surveys with repeat imaging in multiple bands are producing (or will produce) time-spaced measurements of brightness, resulting in the identification of large numbers of variable sources in the sky. A large fraction of these are periodic variables: compilations of these are of scientific interest for a variety of purposes. Unavoidably, the data sets from many such surveys not only have sparse sampling, but also have embedded frequencies in the observing cadence that beat against the natural periodicities of any object under investigation. Such limitations can make period determination ambiguous and uncertain. For multiband data sets with asynchronous measurements in multiple passbands, we wish to maximally use the information on periodicity in a manner that is agnostic of differences in the light-curve shapes across the different channels. Given large volumes of data, computational efficiency is also at a premium. This paper develops and presents a computationally economic method for determining periodicity that combines the results from two different classes of period-determination algorithms. The underlying principles are illustrated through examples. The effectiveness of this approach for combining asynchronously sampled measurements in multiple observables that share an underlying fundamental frequency is also demonstrated.
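One shape-agnostic way to pool asynchronous bands, shown purely as an illustration (this is phase-dispersion minimization, not the authors' combination of two period-determination algorithm classes): fold each band on a trial period and sum the normalized within-phase-bin variances, so a period shared by all bands minimizes the combined statistic.

    import numpy as np

    def multiband_pdm(times, mags, trial_periods, n_bins=10):
        """Phase-dispersion statistic summed over bands; `times` and `mags`
        are lists with one array per band, sampled asynchronously."""
        theta = np.empty(len(trial_periods))
        for k, P in enumerate(trial_periods):
            total = 0.0
            for t, y in zip(times, mags):          # one (t, y) pair per band
                phase = (t / P) % 1.0
                b = np.minimum((phase * n_bins).astype(int), n_bins - 1)
                s2 = sum(y[b == j].var() * (b == j).sum()
                         for j in range(n_bins) if (b == j).sum() > 1)
                total += s2 / (len(y) * y.var())   # scale-free, so bands combine fairly
            theta[k] = total
        return trial_periods[np.argmin(theta)]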
Models and algorithms for biomolecules and molecular networks
DasGupta, Bhaskar
2016-01-01
By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln...
...systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges. Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas...
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
Toward a Principled Sampling Theory for Quasi-Orders.
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
Kadoura, Ahmad Salim
2014-03-17
Molecular simulation can provide a more detailed description of fluid systems than experimental techniques, and it can also replace equations of state; however, molecular simulation usually comes at considerable computational cost. Several techniques have been developed to overcome such high computational costs. In this paper, two early rejection schemes, a conservative and a hybrid one, are introduced. In these two methods, undesired configurations generated by the Monte Carlo trials are rejected earlier than they would be with conventional algorithms. The methods are tested for structureless single-component Lennard-Jones particles in both the canonical and NVT-Gibbs ensembles. The computational time reduction for both ensembles is observed over a wide range of thermodynamic conditions. Results show that computational time savings are directly proportional to the rejection rate of Monte Carlo trials. The proposed conservative scheme saves up to 40% of the computational time in the canonical ensemble and up to 30% in the NVT-Gibbs ensemble when compared to standard algorithms. In addition, it preserves the exact Markov chains produced by the Metropolis scheme. Further enhancement for the NVT-Gibbs ensemble is achieved by combining this technique with the bond-formation early rejection method. The hybrid method achieves more than 50% saving of the central processing unit (CPU) time.
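A minimal reading of the conservative scheme for a single-particle Lennard-Jones move: draw the acceptance random number first, then abandon the pair-energy sum as soon as even the best case — every remaining pair at the well minimum, -epsilon — can no longer satisfy the acceptance criterion. The accept/reject outcome matches standard Metropolis exactly, which is why the Markov chain is preserved; this sketch, in reduced units, is not the authors' production code.

    import numpy as np

    EPS, SIG = 1.0, 1.0                              # Lennard-Jones parameters (reduced units)

    def lj(r2):
        s6 = (SIG * SIG / r2) ** 3
        return 4.0 * EPS * (s6 * s6 - s6)            # pair energy, minimum is -EPS

    def early_reject_move(pos, i, trial, u_old_i, beta, rng):
        """Metropolis test for moving particle i to `trial`: accept iff
        u_new_i < u_old_i - ln(u)/beta, stopping the sum once even -EPS from
        every remaining pair cannot bring u_new_i below the threshold."""
        thresh = u_old_i - np.log(rng.random()) / beta
        others = np.delete(np.arange(len(pos)), i)
        u_new = 0.0
        for k, j in enumerate(others):
            u_new += lj(np.sum((trial - pos[j]) ** 2))
            remaining = len(others) - k - 1
            if u_new - EPS * remaining >= thresh:    # best case already too high
                return False, None                   # early rejection
        return True, u_new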
DEFF Research Database (Denmark)
Hansen, Kasper Lage; Szallasi, Zoltan Imre; Eklund, Aron Charles
2009-01-01
evaluated consistency using the Pearson correlation between measurements obtained on the two platforms. Also, we introduce the log-ratio discrepancy as a more relevant measure of discordance between gene expression platforms. Of nine preprocessing algorithms tested, PLIER+16 produced expression values...
Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models
Directory of Open Access Journals (Sweden)
Qixuan Bi
2017-06-01
Full Text Available In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameters but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained. The Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods. An analysis of a real dataset is also performed.
Phase retrieval via incremental truncated amplitude flow algorithm
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain a higher success rate and faster convergence compared with other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about twice the signal length), and it usually converges to the optimal solution within 20 iterations, much fewer than the state-of-the-art algorithms.
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
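A minimal sketch of the stump-based AdaBoost described above, restricted to continuous features for brevity (the paper additionally gives decision rules for categorical features); the round count and tie-breaking are illustrative assumptions.

    import numpy as np

    def train_adaboost(X, y, n_rounds=50):
        """AdaBoost with decision stumps on continuous features; y in {-1, +1}."""
        n, d = X.shape
        w = np.full(n, 1.0 / n)                      # adaptable sample weights
        ensemble = []
        for _ in range(n_rounds):
            best = None                              # (weighted error, feature, threshold, sign)
            for f in range(d):
                for thr in np.unique(X[:, f]):
                    for sign in (1, -1):
                        pred = sign * np.where(X[:, f] <= thr, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, f, thr, sign)
            err, f, thr, sign = best
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
            pred = sign * np.where(X[:, f] <= thr, 1, -1)
            w *= np.exp(-alpha * y * pred)           # up-weight misclassified samples
            w /= w.sum()
            ensemble.append((alpha, f, thr, sign))
        return ensemble

    def predict(ensemble, X):
        """Strong classifier: sign of the weighted sum of stump votes."""
        return np.sign(sum(a * s * np.where(X[:, f] <= t, 1, -1)
                           for a, f, t, s in ensemble))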
Searching for the majority: algorithms of voluntary control.
Directory of Open Access Journals (Sweden)
Jin Fan
Full Text Available Voluntary control of information processing is crucial to allocate resources and prioritize the processes that are most important under a given situation; the algorithms underlying such control, however, are often not clear. We investigated possible algorithms of control for the performance of the majority function, in which participants searched for and identified one of two alternative categories (left- or right-pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set size of 1, 3, and 5) and content (ratio of left- and right-pointing arrows within a set) of the inputs to test competing hypotheses regarding mental operations for information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm as compared to alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.
Digital signal processing algorithms for nuclear particle spectroscopy
International Nuclear Information System (INIS)
Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.
2007-01-01
Digital signal processing algorithms for nuclear particle spectroscopy are described along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier. Therefore the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in literature
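As a representative instance of such recursive one- or multi-step procedures, the classic Jordanov-Knoll trapezoidal shaper is sketched below; it is a standard example of a digital shaping filter for exponentially decaying preamplifier signals, not necessarily the authors' exact filter, and the gain normalization is left to calibration.

    import numpy as np

    def trapezoidal_shaper(v, k, l, tau):
        """Jordanov-Knoll recursive trapezoidal filter: rise time k samples,
        flat top (l - k) samples (l >= k), preamp decay constant tau in samples.
        Flat-top height is proportional to pulse amplitude; calibrate the gain."""
        n = len(v)
        M = 1.0 / np.expm1(1.0 / tau)              # pole-zero correction for exp(-t/tau)
        vp = np.concatenate([np.zeros(k + l), v])  # zero history before the trace
        d = vp[k + l:] - vp[l:n + l] - vp[k:n + k] + vp[:n]
        p = np.cumsum(d)                           # first recursive accumulator
        return np.cumsum(p + M * d)                # second accumulator -> trapezoid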
Nakamura, Munehiro; Kajiwara, Yusuke; Otsuka, Atsushi; Kimura, Haruhiko
2013-10-02
Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems of imbalanced biomedical data. However, the existing over-sampling methods achieve slightly better or sometimes worse results than the simplest SMOTE. In order to improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE is applied to a biomedical dataset, its empty feature space is still so huge that most classification algorithms would not perform well on estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples which occupy more feature space than the other SMOTE algorithms. In brief, our over-sampling method enables the generation of useful synthetic samples by referring to actual samples taken from real-world datasets. Experiments on eight real-world imbalanced datasets demonstrate that our proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases if the latest SMOTE variant, called MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction show some important patterns that have not been seen in previous analyses. The proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data. Besides, the proposed over-sampling method is basically compatible with basic classification algorithms and the existing over-sampling methods.
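For reference, the simplest SMOTE that the proposed method builds on can be sketched as follows; the proposed method would instead direct the interpolation using learning-vector-quantization codebooks, which is not shown here.

    import numpy as np

    def smote(X_min, n_synthetic, k=5, seed=0):
        """Simplest SMOTE: each synthetic sample interpolates between a minority
        sample and one of its k nearest minority neighbors."""
        rng = np.random.default_rng(seed)
        d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                # exclude self-neighborhood
        knn = np.argsort(d2, axis=1)[:, :k]         # k nearest neighbors per sample
        out = np.empty((n_synthetic, X_min.shape[1]))
        for s in range(n_synthetic):
            i = rng.integers(len(X_min))
            j = knn[i, rng.integers(k)]
            out[s] = X_min[i] + rng.random() * (X_min[j] - X_min[i])
        return out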
Linac design algorithm with symmetric segments
International Nuclear Information System (INIS)
Takeda, Harunori; Young, L.M.; Nath, S.; Billen, J.H.; Stovall, J.E.
1996-01-01
The cell lengths in linacs of traditional design are typically graded as a function of particle velocity. By making groups of cells and individual cells symmetric in both the CCDTL and CCL, the cavity design as well as mechanical design and fabrication is simplified without compromising the performance. We have implemented a design algorithm in the PARMILA code in which cells and multi-cavity segments are made symmetric, significantly reducing the number of unique components. Using the symmetric algorithm, a sample linac design was generated and its performance compared with a similar one of conventional design
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.
El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C
2015-04-01
This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed using a number of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and predicted risk probability was generated for each patient from the coefficients. Risk threshold for the algorithms were determined by identifying the cut-off points of risk scores at which the algorithm would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. Numbers of previous emergency admissions and admission medicines together with age-adjusted co-morbidity and diuretic receipt formed a 12-month post-discharge and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index to predict mortality. Increased numbers of clinical pharmacy staff at ward level was correlated with a reduction in risk-adjusted mortality index (RAMI). Algorithms created were valid in predicting risk of in-hospital and post-discharge mortality and risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component to reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.
Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm
Directory of Open Access Journals (Sweden)
Guangbin Wang
2015-01-01
Full Text Available In view of the problems that real fault samples are unevenly distributed and that the dimension-reduction effect of the locally linear embedding (LLE) algorithm is easily affected by neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using the homogenization distance instead of the traditional Euclidean distance, which helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the fault-recognition improvement of HLLE is limited and unstable, the paper further proposes a new local linear embedding algorithm with supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of the homogenization distance, supervised learning adds the category information of sample points so that sample points of the same category are gathered and sample points of heterogeneous categories are scattered. It effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by simulation experiments with rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.
Parallelization of a spherical Sn transport theory algorithm
International Nuclear Information System (INIS)
Haghighat, A.
1989-01-01
The work described in this paper derives a parallel algorithm for an R-dependent spherical S_N transport theory algorithm and studies its performance by testing different sample problems. The S_N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of S_N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S_N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S_N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).
An improved initialization center k-means clustering algorithm based on distance and density
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
Aiming at the problem that the random initial cluster centers of the k-means algorithm leave the clustering results influenced by outlier samples and unstable across multiple runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm. Experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
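A minimal sketch of the seeding idea as the abstract outlines it: density is taken as the reciprocal of a point's average distance to the others, and each new center is a point that is both dense and far from the centers already chosen. The exact weighting used in the paper is not specified here, so the details below are assumptions.

```python
import numpy as np

def distance_density_init(X, k):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    density = 1.0 / (D.mean(axis=1) + 1e-12)                    # reciprocal mean distance
    centers = [int(np.argmax(density))]                         # densest point first
    for _ in range(k - 1):
        d_to_centers = D[:, centers].min(axis=1)                # distance to nearest center
        score = d_to_centers * density                          # prefer far AND dense points
        score[centers] = -np.inf                                # never re-pick a center
        centers.append(int(np.argmax(score)))
    return X[centers]

X = np.random.default_rng(1).normal(size=(200, 2))
print(distance_density_init(X, 3))
```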
Directory of Open Access Journals (Sweden)
Lee Tae-Hoon
2016-12-01
Full Text Available In many cases, an $\overline{X}$ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using surrogate variables. In this paper, we present a model for the economic statistical design of a VSI (Variable Sampling Interval) $\overline{X}$ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on a performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable-cause assumptions, such as the VSS (Variable Sample Size) $\overline{X}$ control chart, EWMA and CUSUM charts, and so on.
Posuvailo, V. M.; Klapkiv, M. D.; Student, M. M.; Sirak, Y. Y.; Pokhmurska, H. V.
2017-03-01
The oxide ceramic coating with copper inclusions was synthesized by the method of plasma electrolytic oxidation (PEO). Calculations of the Gibbs energies of reactions between the plasma-channel elements and the copper and copper-oxide inclusions were carried out. Two methods of forming oxide-ceramic coatings with copper inclusions on an aluminum base in electrolytic plasma were established: the first consists in introducing copper into the aluminum matrix, the second in introducing copper oxide. During the synthesis of the oxide-ceramic coating, the plasma channel does not react with the copper, which is incorporated into the coating; in the second case, the copper oxide is reduced through interaction with elements of the plasma channel. The composition of the oxide-ceramic layer was investigated by X-ray and X-ray microelement analysis. Inclusions of copper, CuAl2 and Cu9Al4 were found in the oxide-ceramic coatings. It was established that in the spark plasma channels, alongside the oxidation reaction, an aluminothermic reduction of the metal also occurs, which allows doping of the oxide-ceramic coating with metals whose isobaric-isothermal potential of oxidation is less negative than that of aluminum oxide.
International Nuclear Information System (INIS)
Cotes, S.; Fernandez Guillermet, A.; Sade, M.
1999-01-01
Very recent, accurate dilatometric measurements of the fcc ⇄ hcp martensitic transformation (MT) temperatures are used to develop a new thermodynamic description of the fcc and hcp phases in the Fe-Mn-Si system, based on phenomenological models for the Gibbs energy function. The composition dependence of the driving forces for the fcc→hcp and the hcp→fcc MTs is established. Detailed calculations of the MT temperatures are reported, which are used to investigate the systematic effects of Si additions upon the MT temperatures of Fe-Mn alloys. A critical comparison with one of the most recent thermodynamic analyses of the Fe-Mn-Si system, due to Forsberg and Agren, is also presented. (orig.)
Directory of Open Access Journals (Sweden)
Tiago Campos Pereira
2007-01-01
Full Text Available The RNA interference (RNAi) technique is a recent technology that uses double-stranded RNA molecules to promote potent and specific gene silencing. The application of this technique to molecular biology has increased considerably, from gene function identification to disease treatment. However, not all small interfering RNAs (siRNAs) are equally efficient, making target selection an essential procedure. Here we present Strand Analysis (SA), a free online software tool able to identify and classify the best RNAi targets based on Gibbs free energy (deltaG). Furthermore, particular features of the software, such as the free energy landscape and the deltaG gradient, may be used to shed light on RNA-induced silencing complex (RISC) activity and RNAi mechanisms, which makes the SA software a distinct and innovative tool.
Radtke, Valentin; Ermantraut, Andreas; Himmel, Daniel; Koslowski, Thorsten; Leito, Ivo; Krossing, Ingo
2018-02-23
Described is a procedure for the thermodynamically rigorous, experimental determination of the Gibbs energy of transfer of single ions between solvents. The method is based on potential difference measurements between two electrochemical half cells with different solvents connected by an ideal ionic liquid salt bridge (ILSB). Discussed are the specific requirements for the IL with regard to the procedure, thus ensuring that the liquid junction potentials (LJP) at both ends of the ILSB are mostly canceled. The remaining parts of the LJPs can be determined by separate electromotive force measurements. No extra-thermodynamic assumptions are necessary for this procedure. The accuracy of the measurements depends, amongst others, on the ideality of the IL used, as shown in our companion paper Part II. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Estimating rare events in biochemical systems using conditional sampling
Sundar, V. S.
2017-01-01
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
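A toy sketch of the subset-simulation idea described above: the rare-event probability is expressed as a product of more frequent conditional probabilities, each estimated with a component-wise (modified) Metropolis chain. The setting is an abstract standard-normal input with a scalar limit-state function, not the paper's biochemical systems; p0 = 0.1 and the proposal scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                       # limit-state function; the rare event is g >= b
    return x.sum(axis=-1)

def subset_simulation(d=10, b=12.0, N=2000, p0=0.1):
    x = rng.normal(size=(N, d))
    prob = 1.0
    for _ in range(20):
        vals = g(x)
        level = np.quantile(vals, 1 - p0)          # next intermediate threshold
        if level >= b:                             # final level reached
            return prob * np.mean(vals >= b)
        prob *= p0                                 # accumulate conditional probability
        seeds = x[vals >= level]
        chains = [seeds]
        # modified Metropolis: regrow the population from the seeds,
        # conditioned on staying above the current level
        while sum(len(c) for c in chains) < N:
            cur = chains[-1]
            cand = cur + 0.5 * rng.normal(size=cur.shape)
            # component-wise standard-normal Metropolis acceptance
            accept = rng.random(cur.shape) < np.exp(0.5 * (cur**2 - cand**2))
            nxt = np.where(accept, cand, cur)
            nxt = np.where((g(nxt) >= level)[:, None], nxt, cur)
            chains.append(nxt)
        x = np.vstack(chains)[:N]
    return prob

print(f"P(sum >= 12) ~ {subset_simulation():.2e}")
```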
Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm
Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali
2013-04-01
The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space that may be riddled with many local minima, resulting in irregular objective functions. We investigate here the performance and the application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented and of avoiding entrapment in local minima. The effects of population size, elitism strategy, uniform cross-over and low mutation rate are examined. The optimum solution parameters and performance were decided as a function of the testing error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability is 0.9-0.95 and mutation has been tested at a probability of 0.01. The application of such a genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section is effective. Keywords: Seismic, Inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
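A compact sketch of a GA of this kind: elitism, uniform cross-over, a low mutation rate (~0.01), and an L2 sample-to-sample misfit as the fitness. The forward model (reflectivity convolved with a short wavelet) and all parameter values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pop_size, gens = 30, 100, 300
true_z = 2000 + 50 * np.cumsum(rng.integers(-1, 2, n))   # "true" impedance profile
wavelet = np.array([-0.1, -0.3, 1.0, -0.3, -0.1])        # assumed short wavelet

def forward(z):
    r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])              # reflectivity series
    return np.convolve(r, wavelet, mode="same")          # synthetic trace

data = forward(true_z)

def misfit(z):
    return np.sum((forward(z) - data) ** 2)              # L2 sample-to-sample misfit

pop = 2000 + 50 * rng.normal(size=(pop_size, n)).cumsum(axis=1)
for _ in range(gens):
    order = np.argsort([misfit(ind) for ind in pop])
    pop = pop[order]
    children = [pop[0].copy()]                           # elitism: keep the best
    while len(children) < pop_size:
        a, b = pop[rng.integers(0, pop_size // 2, 2)]    # parents from the top half
        mask = rng.random(n) < 0.5                       # uniform cross-over
        child = np.where(mask, a, b)
        mut = rng.random(n) < 0.01                       # low mutation rate
        children.append(child + mut * rng.normal(0, 50, n))
    pop = np.array(children)

print(f"final misfit: {misfit(pop[0]):.4g}")
```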
QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.
Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L
2016-10-01
In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network life time. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning based algorithms using sampled data. An important issue, however, is the training phase of these learning based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning on the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.
A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay
International Nuclear Information System (INIS)
Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.
1994-01-01
A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented
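For orientation, the multiplicative ML-EM update that expectation-maximization reconstructions of this type build on looks as follows. The system matrix and Poisson data here are toy stand-ins, and the paper's statistical modelling of the gross and continuum measurements is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 16))             # toy system (response) matrix
x_true = rng.random(16) * 10
y = rng.poisson(A @ x_true)          # Poisson-distributed measurements

x = np.ones(16)                      # ML-EM iterations: x <- x * A^T(y / Ax) / A^T 1
sens = A.sum(axis=0)
for _ in range(200):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print(np.round(x - x_true, 2))       # reconstruction vs. truth on this toy problem
```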
Directory of Open Access Journals (Sweden)
Long-Hua Ma
2011-08-01
Full Text Available A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out using a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental-angle samples and the number of accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental-angle and accelerometer incremental-velocity samples, in order to improve system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm, meeting the real-time and high-precision requirements of systems in highly dynamic environments, relative to an existing implementation on a DSP platform.
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Elsheikh, Ahmed H.; Hoteit, Ibrahim; Wheeler, Mary Fanett
2014-01-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior
On the Cooley-Tukey Fast Fourier algorithm for arbitrary factors ...
African Journals Online (AJOL)
Atonuje and Okonta in [1] developed the Cooley-Tukey Fast Fourier transform algorithm and its application to the Fourier transform of discretely sampled data points N, expressed in terms of a power y of 2. In this paper, we extend the formalism of the Cooley-Tukey Fast Fourier transform algorithm of [1]. The method is developed ...
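For reference, the classic radix-2 case (N a power of 2) that the paper generalizes to arbitrary factors can be written recursively in a few lines; this is the textbook form, not the extension developed in [1].

```python
import cmath

def fft(x):
    # radix-2 Cooley-Tukey FFT; len(x) must be a power of 2
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

print(fft([1, 2, 3, 4, 0, 0, 0, 0]))
```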
Detection of cracks in shafts with the Approximated Entropy algorithm
Sampaio, Diego Luchesi; Nicoletti, Rodrigo
2016-05-01
The Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunications for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by fracture mechanics. The analysis considers the vertical displacements of the rotor during run-up transients. The results show the feasibility of detecting cracks from 5% depth onward, irrespective of the unbalance of the rotating system and of the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack plus misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
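A sketch of Approximate Entropy in its usual form, with the two parameters the abstract highlights: p, the sample-vector length, and f, the fraction of the standard deviation used as the tolerance. The sine-plus-noise input is a stand-in for the simulated rotor displacements.

```python
import numpy as np

def approx_entropy(u, p=2, f=0.2):
    r = f * np.std(u)                             # tolerance from the signal's std
    def phi(m):
        # all length-m template vectors of the signal
        vecs = np.array([u[i:i + m] for i in range(len(u) - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)            # fraction of templates within r
        return np.mean(np.log(C))
    return phi(p) - phi(p + 1)

t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(f"ApEn = {approx_entropy(signal):.3f}")
```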
A new approximate algorithm for image reconstruction in cone-beam spiral CT at small cone-angles
International Nuclear Information System (INIS)
Schaller, S.; Flohr, T.; Steffen, P.
1996-01-01
This paper presents a new approximate algorithm for image reconstruction with cone-beam spiral CT data at relatively small cone angles. Based on the algorithm of Wang et al., our method combines a special complementary interpolation with filtered backprojection. The presented algorithm has three main advantages over Wang's algorithm: (1) it overcomes the pitch limitation of Wang's algorithm; (2) it significantly improves z-resolution when suitable sampling schemes are applied; and (3) it avoids the waste of applied radiation dose inherent to Wang's algorithm. Full usage of the applied dose is an important requirement in medical imaging. Our method has been implemented on a standard workstation. Reconstructions of computer-simulated data of different phantoms, assuming sampling conditions and image quality requirements typical of medical CT, show encouraging results.
Lecca, Michela; Modena, Carla Maria; Rizzi, Alessandro
2018-01-01
Milano Retinexes are spatial color algorithms, part of the Retinex family, usually employed for image enhancement. They modify the color of each pixel taking into account the surrounding colors and their positions, capturing in this way the local spatial color distribution relevant to image enhancement. We present T-Rex (from the words threshold and Retinex), an implementation of Milano Retinex whose main novelty is the use of the pixel intensity as a self-regulating threshold to deterministically sample local color information. The experiments, carried out on real-world pictures, show that T-Rex's image enhancement performance is in line with that of the Milano Retinex family: T-Rex increases the brightness, the contrast, and the flatness of the channel distributions of the input image, making the content of pictures acquired under difficult lighting conditions more intelligible.
An Algorithm for Investigating the Structure of Material Surfaces
Directory of Open Access Journals (Sweden)
M. Toman
2003-01-01
Full Text Available The aim of this paper is to summarize the algorithm and the experience gained in investigating the grain structure of the surfaces of certain materials, particularly samples of gold. The main parts of the algorithm to be discussed are: 1. acquisition of input data; 2. localization of the grain region; 3. representation of grain size; 4. representation of outputs (postprocessing).
Genetic local search algorithm for optimization design of diffractive optical elements.
Zhou, G; Chen, Y; Wang, Z; Song, H
1999-07-10
We propose a genetic local search algorithm (GLSA) for the optimization design of diffractive optical elements (DOE's). This hybrid algorithm incorporates advantages of both genetic algorithm (GA) and local search techniques. It appears better able to locate the global minimum compared with a canonical GA. Sample cases investigated here include the optimization design of binary-phase Dammann gratings, continuous surface-relief grating array generators, and a uniform top-hat focal plane intensity profile generator. Two GLSA's whose incorporated local search techniques are the hill-climbing method and the simulated annealing algorithm are investigated. Numerical experimental results demonstrate that the proposed algorithm is highly efficient and robust. DOE's that have high diffraction efficiency and excellent uniformity can be achieved by use of the algorithm we propose.
Real-time algorithm for acoustic imaging with a microphone array.
Huang, Xun
2009-05-01
Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected immediately.
Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network
Directory of Open Access Journals (Sweden)
T. Panigrahi
2012-01-01
Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS and block incremental least mean square (BILMS by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
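A single-node sketch of the block idea underlying BDLMS/BILMS: the gradient is accumulated over a whole block and the weights are updated once per block, which is what reduces per-sample communication in the distributed setting. The distributed diffusion/incremental topology itself is omitted, and all parameter values are illustrative.

```python
import numpy as np

def block_lms(x, d, taps=8, block=16, mu=0.01):
    w = np.zeros(taps)
    for start in range(taps - 1, len(x) - block, block):
        grad = np.zeros(taps)
        for i in range(start, start + block):
            u = x[i - taps + 1:i + 1][::-1]        # regressor [x_i, ..., x_{i-taps+1}]
            grad += (d[i] - w @ u) * u             # accumulate error * input
        w += mu * grad / block                     # one weight update per block
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=4000)
h = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])   # unknown system
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
print(np.round(block_lms(x, d), 2))                        # should approximate h
```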
An improved approach to exchange non-rectangular departments in CRAFT algorithm
Esmaeili Aliabadi, Danial; Pourghannad, Behrooz
2012-01-01
In this paper, an algorithm that improves the efficacy of the CRAFT algorithm is developed. CRAFT is an algorithm widely used to solve facility layout problems. Our proposed method, named Plasma, can be used to improve CRAFT results. In this note, the Plasma algorithm is tested on several sample problems. The comparison of Plasma with classic CRAFT and also with Micro-CRAFT indicates that Plasma achieves greater cost reduction than both CRAFT and Micro-CRAFT.
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas published by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Algorithm development for Maxwell's equations for computational electromagnetism
Goorjian, Peter M.
1990-01-01
A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
International Nuclear Information System (INIS)
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-01-01
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Adachi, Kohei
2013-01-01
Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
Recent Advancements in Lightning Jump Algorithm Work
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2010-01-01
In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise as a prospective operational lightning jump algorithm, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise for operational use, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
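A sketch of the sigma-type jump test as it is usually described: a jump is flagged when the time rate of change of the total flash rate exceeds twice the standard deviation of its recent history. The window length and the toy flash-rate series below are assumptions, not the operational configuration.

```python
import numpy as np

def lightning_jumps(flash_rate, sigma_mult=2.0, history=6):
    dfrdt = np.diff(flash_rate)               # time rate of change of flash rate
    jumps = []
    for t in range(history, len(dfrdt)):
        sigma = np.std(dfrdt[t - history:t])  # variability of the recent history
        if sigma > 0 and dfrdt[t] > sigma_mult * sigma:
            jumps.append(t)                   # flag a lightning jump at step t
    return jumps

# toy total flash-rate series (flashes/min) with an abrupt ramp-up
rates = np.array([2, 3, 2, 4, 3, 4, 5, 4, 5, 18, 30, 28, 26])
print(lightning_jumps(rates))
```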
New adaptive sampling method in particle image velocimetry
International Nuclear Information System (INIS)
Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei
2015-01-01
This study proposes a new adaptive method that enables the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to adapt automatically to the seeding density. The proposed method relaxes the constraints of uniform sampling rate and uniform window size commonly adopted in traditional PIV algorithms. In addition, the positions of the sampling points are redistributed on the basis of the spring forces generated between the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and a smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.
Zou, Hui; Zou, Zhihong; Wang, Xiaojing
2015-11-12
The increasing volume and complexity of data caused by uncertain environments is today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights k-means clustering algorithm to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator-weight self-adjustment algorithm based on k-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the margin of the iteration is not calculated. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied to water quality analysis of Haihe River (China) data obtained by the monitoring network over a period of eight years (2006-2013), with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
Directory of Open Access Journals (Sweden)
Kai Yang
2016-01-01
Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected-value programming problem without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments show that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
Improved SURF Algorithm and Its Application in Seabed Relief Image Matching
Directory of Open Access Journals (Sweden)
Zhang Hong-Mei
2017-01-01
Full Text Available Matching based on seabed relief images is widely used in underwater relief-matching navigation and target recognition. However, influenced by various factors, conventional matching algorithms find it difficult to obtain ideal results on seabed relief images. The SURF (Speeded Up Robust Features) algorithm achieves matching based on pairs of feature points and can obtain good results in seabed relief image matching. In practical applications, however, the traditional SURF algorithm is prone to false matching, especially when an area's features are similar or not obvious, where the problem is more serious. In order to improve the robustness of the algorithm, this paper proposes an improved matching algorithm that combines the SURF and RANSAC (Random Sample Consensus) algorithms. The new algorithm integrates the advantages of both: first, the SURF algorithm is applied to detect and extract the feature points and to pre-match; second, the RANSAC algorithm is utilized to eliminate mismatched points, and accurate matching is then accomplished with the remaining correct matches. The experimental results show that the improved algorithm overcomes the mismatching problem effectively and has better precision and faster speed than the traditional SURF algorithm.
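The detect, pre-match, RANSAC-prune pipeline, sketched with OpenCV. ORB stands in for SURF here because SURF ships only in the non-free opencv-contrib build; the structure of the pipeline, not the detector, is the point. The image file names are hypothetical.

```python
import cv2
import numpy as np

def match_with_ransac(img1, img2):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)    # detect + describe features
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)              # pre-matching step
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC eliminates mismatched points while estimating the transform
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [m for m, ok in zip(matches, mask.ravel()) if ok]
    return H, inliers

img1 = cv2.imread("relief_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("relief_b.png", cv2.IMREAD_GRAYSCALE)
H, inliers = match_with_ransac(img1, img2)
print(f"{len(inliers)} inlier matches")
```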
Billings, Jake
2017-01-01
A new variation of blockchain proof of work algorithm is proposed to incentivize the timely execution of image processing algorithms. A sample image processing algorithm is proposed to determine interesting images using analysis of the entropy of pixel subsets within images. The efficacy of the image processing algorithm is examined using two small sets of training and test data. The interesting image algorithm is then integrated into a simplified blockchain mining proof of work algorithm bas...
Comparison of Boltzmann and Gibbs entropies for the analysis of single-chain phase transitions
Shakirov, T.; Zablotskiy, S.; Böker, A.; Ivanov, V.; Paul, W.
2017-03-01
In the last 10 years, flat histogram Monte Carlo simulations have contributed strongly to our understanding of the phase behavior of simple generic models of polymers. These simulations result in an estimate for the density of states of a model system. To connect this result with thermodynamics, one has to relate the density of states to the microcanonical entropy. In a series of publications, Dunkel, Hilbert and Hänggi argued that it would lead to a more consistent thermodynamic description of small systems, when one uses the Gibbs definition of entropy instead of the Boltzmann one. The latter is the logarithm of the density of states at a certain energy, the former is the logarithm of the integral of the density of states over all energies smaller than or equal to this energy. We will compare the predictions using these two definitions for two polymer models, a coarse-grained model of a flexible-semiflexible multiblock copolymer and a coarse-grained model of the protein poly-alanine. Additionally, it is important to note that while Monte Carlo techniques are normally concerned with the configurational energy only, the microcanonical ensemble is defined for the complete energy. We will show how taking the kinetic energy into account alters the predictions from the analysis. Finally, the microcanonical ensemble is supposed to represent a closed mechanical N-particle system. But due to Galilei invariance such a system has two additional conservation laws, in general: momentum and angular momentum. We will also show, how taking these conservation laws into account alters the results.
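The two entropy definitions compared in the paper are easy to contrast numerically once a density of states is given. Below, a toy g(E) proportional to E^3 stands in for the simulation output, and the corresponding temperatures follow from 1/T = dS/dE.

```python
import numpy as np

E = np.arange(1, 51)                  # discrete energy levels (arbitrary units)
g = E**3                              # toy density of states, g ~ E^3

S_boltzmann = np.log(g)               # S_B(E) = ln g(E)
S_gibbs = np.log(np.cumsum(g))        # S_G(E) = ln sum_{E' <= E} g(E')

# corresponding temperatures from 1/T = dS/dE
T_b = 1.0 / np.gradient(S_boltzmann, E)
T_g = 1.0 / np.gradient(S_gibbs, E)
print(np.round(T_b[:5], 2), np.round(T_g[:5], 2))   # the definitions disagree most for small systems
```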
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures ...
Directory of Open Access Journals (Sweden)
Yoichi Hayashi
2016-01-01
Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex without good interpretability and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, from four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly-extended types of the Re-RX algorithm and high performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.
Chen, Bin; Peng, Xiuming; Xie, Tiansheng; Jin, Changzhong; Liu, Fumin; Wu, Nanping
2017-07-01
Currently, there are three algorithms for syphilis screening: the traditional algorithm, the reverse algorithm and the European Centre for Disease Prevention and Control (ECDC) algorithm. To date, there is no generally recognized diagnostic algorithm. When syphilis meets HIV, the situation is even more complex. To evaluate their screening performance and impact on the seroprevalence of syphilis in HIV-infected individuals, we conducted a cross-sectional study that included 865 serum samples from HIV-infected patients in a tertiary hospital. Every sample (one per patient) was tested with the toluidine red unheated serum test (TRUST), the T. pallidum particle agglutination assay (TPPA), and a Treponema pallidum enzyme immunoassay (TP-EIA) according to the manufacturer's instructions. The results of syphilis serological testing were interpreted following the different algorithms respectively. We directly compared the traditional syphilis screening algorithm with the reverse syphilis screening algorithm in this unique population. The reverse algorithm yielded a significantly higher seroprevalence of syphilis than the traditional algorithm (24.9% vs. 14.2%). Compared with the reverse algorithm, the traditional algorithm also had a missed serodiagnosis rate of 42.8%. The total percentages of agreement and corresponding kappa values of the traditional and ECDC algorithms compared with the reverse algorithm were as follows: 89.4%, 0.668; 99.8%, 0.994. There was a very good strength of agreement between the reverse and the ECDC algorithms. Our results support the reverse (or ECDC) algorithm for syphilis screening in HIV-infected populations. In addition, our study demonstrated that screening HIV-infected populations using different algorithms may result in a statistically different seroprevalence of syphilis.
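Reduced to decision logic, the two screening sequences compared above can be sketched as follows: the traditional algorithm screens with a nontreponemal test (TRUST) and confirms with TPPA; the reverse algorithm screens with TP-EIA, reflexes to TRUST, and uses TPPA to arbitrate discordant results. This is a simplification for illustration, not clinical guidance.

```python
def traditional(trust, tppa):
    # screen with the nontreponemal TRUST; confirm reactive results with TPPA
    if not trust:
        return "negative"
    return "positive" if tppa else "negative"

def reverse(tp_eia, trust, tppa):
    # screen with the treponemal TP-EIA; reflex to TRUST; TPPA arbitrates
    if not tp_eia:
        return "negative"
    if trust:
        return "positive"                           # concordant reactive
    return "positive" if tppa else "negative"       # discordant: TPPA decides

# a sample reactive only on treponemal tests (e.g., late latent infection)
print(traditional(trust=False, tppa=True))           # -> negative (missed)
print(reverse(tp_eia=True, trust=False, tppa=True))  # -> positive
```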
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Wei, Yongjie; Ge, Baozhen; Wei, Yaolin
2009-03-20
In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert the particle size distribution (PSD) from diffraction data is presented. By using the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. The ICGA is therefore amended by introducing an iteration step-adjusting parameter and is applied experimentally to simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.
Efficient triangulation of Poisson-disk sampled point sets
Guo, Jianwei
2014-05-06
In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
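For scale, the kind of input such an algorithm consumes can be produced by naive dart throwing and triangulated with SciPy's generic Delaunay code; the grid-plus-clustering acceleration that the paper actually proposes is not reproduced here.

```python
import numpy as np
from scipy.spatial import Delaunay

def poisson_disk_darts(r, n_attempts=20000, rng=np.random.default_rng(0)):
    # naive dart throwing: keep a candidate only if it is at least r from all kept points
    pts = []
    for _ in range(n_attempts):
        p = rng.random(2)
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= r * r for q in pts):
            pts.append(p)
    return np.array(pts)

points = poisson_disk_darts(0.05)        # Poisson-disk sample of the unit square
tri = Delaunay(points)                   # generic (non-accelerated) triangulation
print(f"{len(points)} samples, {len(tri.simplices)} triangles")
```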
Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm
Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.
2017-12-01
Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of the efficiency of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting is considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is SMS (Scan, Move and Sort), an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm, and the results are promising.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
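For context, a plain affine projection filter of projection order P, the baseline being modified, looks like this; the paper's Gauss-Seidel iteration, residue-based error term and prediction-driven step-size control are omitted, and the toy feedback path is an assumption.

```python
import numpy as np

def apa(x, d, taps=16, P=4, mu=0.2, delta=1e-4):
    w = np.zeros(taps)
    for i in range(taps + P, len(x)):
        # U: the last P input regressors; e: the corresponding a priori errors
        U = np.array([x[i - j - taps:i - j][::-1] for j in range(P)]).T
        e = d[i - P:i][::-1] - U.T @ w
        # regularized projection update (direct solve instead of Gauss-Seidel)
        w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(P), e)
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
h = rng.normal(size=16) * np.exp(-np.arange(16) / 4)    # toy feedback path
d = np.convolve(x, h)[:len(x)]
print(f"misadjustment: {np.linalg.norm(apa(x, d) - h):.3f}")
```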
An efficient and robust algorithm for parallel groupwise registration of bone surfaces
van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.
2012-01-01
In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm
Directory of Open Access Journals (Sweden)
Angela Biaggio
2005-04-01
Full Text Available Thirty female and thirty male university students each from João Pessoa and Porto Alegre were compared with a comparable Norwegian sample of 60 female and 60 male university students. Except for an apparent difference in cultural orientation among the Brazilian women, comparisons on Gibbs' test of justice morality, the ECI test for the ethic of care, Bem's sex-role inventory, and Triandis' test of cultural orientation showed that all differences were between the Norwegian sample and the Brazilian samples as a unit. Brazilians showed a differentiation of sex roles that was not shown by Norwegians, and scored higher on the collectivist cultural orientation. Norwegians showed higher scores on the ECI, which may be due to a cultural bias in the test. There were no differences between Brazil and Norway either in the individualist cultural orientation or on Gibbs' test. In general, men scored higher on the measure of total individualism and women on vertical collectivism. Women from João Pessoa scored more hedonistic and individualist than women from Porto Alegre, who scored more traditional.
Results of Evolution Supervised by Genetic Algorithms
Directory of Open Access Journals (Sweden)
Lorentz JÄNTSCHI
2010-09-01
Full Text Available The efficiency of a genetic algorithm is frequently assessed using a series of operators of evolution, like crossover operators, mutation operators or other dynamic parameters. The present paper reviews the main results of evolution supervised by genetic algorithms used to identify solutions to hard agricultural and horticultural problems, and discusses the results of using genetic algorithms on structure-activity relationships in terms of the behavior of evolution supervised by genetic algorithms. A genetic algorithm was developed and implemented in order to identify the optimal solution, in terms of estimation power, of a multiple linear regression approach for structure-activity relationships. Three survival and three selection strategies (proportional, deterministic and tournament) were investigated in order to identify the best survival-selection strategy able to lead to the model with the highest estimation power. The Molecular Descriptors Family for structure characterization of a sample of 206 polychlorinated biphenyls with measured octanol-water partition coefficients was used as a case study. Evolution using different selection and survival strategies proved to create populations of genotypes living in the evolution space with different diversity and variability. Under a series of comparison criteria, these populations proved to form groups, and the groups were shown to be statistically different from one another. Conclusions about genetic algorithm evolution according to a number of criteria were also highlighted.
A Two-Pass Exact Algorithm for Selection on Parallel Disk Systems.
Mi, Tian; Rajasekaran, Sanguthevar
2013-07-01
Numerous OLAP queries process selection operations of "top N", median, "top 5%", in data warehousing applications. Selection is a well-studied problem that has numerous applications in the management of data and databases since, typically, any complex data query can be reduced to a series of basic operations such as sorting and selection. Parallel selection has also become an important fundamental operation, especially after parallel databases were introduced. In this paper, we present a deterministic algorithm, Recursive Sampling Selection (RSS), to solve the exact out-of-core selection problem, which we show needs no more than (2 + ε) passes (ε being a very small fraction). We have compared our RSS algorithm with two other algorithms in the literature, namely, Deterministic Sampling Selection (DSS) and QuickSelect on Parallel Disk Systems. Our analysis shows that DSS is a (2 + ε)-pass algorithm when the total number of input elements N is a polynomial in the memory size M (i.e., N = M^c for some constant c), while our proposed algorithm RSS runs in (2 + ε) passes without any such assumption. Experimental results indicate that both RSS and DSS outperform QuickSelect on Parallel Disk Systems. In particular, the proposed algorithm RSS is more scalable and robust in handling big data when the input size is far greater than the core memory size, including the case N ≫ M^c.
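The core sampling-selection idea is easy to sketch in two passes: a first pass samples the input to bracket the k-th element between two pivots, and a second pass keeps only the bracket, inside which the answer is found exactly. Disk striping and the recursion of RSS are omitted, and the slack factor is an assumption; a production version would retry if the bracket missed.

```python
import numpy as np

def two_pass_select(data, k, sample_rate=0.01, slack=3.0):
    rng = np.random.default_rng(0)
    # pass 1: bracket the k-th smallest element with pivots from a random sample
    sample = np.sort(data[rng.random(len(data)) < sample_rate])
    pos = k / len(data) * len(sample)
    margin = slack * np.sqrt(len(sample))
    lo = sample[max(0, int(pos - margin))]
    hi = sample[min(len(sample) - 1, int(pos + margin))]
    # pass 2: count elements below the bracket, keep the (small) bracket in memory
    below = int(np.sum(data < lo))
    bracket = np.sort(data[(data >= lo) & (data <= hi)])
    return bracket[k - below]        # exact k-th smallest (0-indexed)

data = np.random.default_rng(1).normal(size=1_000_000)
k = 500_000
print(two_pass_select(data, k), np.sort(data)[k])   # should agree
```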
An Improved Direction Finding Algorithm Based on Toeplitz Approximation
Directory of Open Access Journals (Sweden)
Qing Wang
2013-01-01
Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed, combining a fast MUSIC-like algorithm termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the DOA estimation capacity degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.
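The Toeplitz-approximation step on its own is simply a projection onto Toeplitz structure by averaging each diagonal of the estimated matrix, as sketched below on a small real-valued example; for complex cumulant matrices a Hermitian variant would be used.

```python
import numpy as np

def toeplitzify(R):
    # least-squares projection onto Toeplitz structure: average each diagonal
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        avg = np.mean(np.diagonal(R, offset=k))   # mean of the k-th diagonal
        idx = np.arange(n - abs(k))
        if k >= 0:
            T[idx, idx + k] = avg
        else:
            T[idx - k, idx] = avg
    return T

R = np.array([[2.0, 0.9, 0.2],
              [1.1, 2.1, 1.0],
              [0.1, 0.8, 1.9]])                   # noisy estimate, nearly Toeplitz
print(toeplitzify(R))
```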
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs, and treats well-known algorithms and data structures.
Sample size calculation to externally validate scoring systems based on logistic regression models.
Directory of Open Access Journals (Sweden)
Antonio Palazón-Bru
Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is thus provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models, and it could be applied to determine the sample size in other similar cases.
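The bootstrap loop at the heart of such an algorithm can be sketched as follows. This is illustrative only, not the authors' algorithm: it tracks the stability of the AUC across bootstrap resamples of increasing size, whereas the paper's method additionally uses smoothed calibration curves and the estimated calibration index; the tolerance and candidate sizes are assumptions, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative sketch (not the authors' algorithm): draw bootstrap samples
# of increasing size and stop when the AUC estimate becomes stable.

def bootstrap_sample_size(y_true, y_score, sizes, n_boot=200, tol=0.02,
                          rng=None):
    """y_true, y_score: numpy arrays over the validation pool.
    sizes: candidate validation sample sizes, in increasing order."""
    rng = rng or np.random.default_rng(0)
    n = len(y_true)
    for size in sizes:
        aucs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=size)  # resample with replacement
            if len(np.unique(y_true[idx])) < 2:
                continue  # skip degenerate resamples with a single class
            aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        if aucs and np.std(aucs) < tol:
            return size, float(np.mean(aucs))
    return None  # no candidate size met the stability criterion
```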
Predicting sample size required for classification performance
Directory of Open Access Journals (Sweden)
Figueroa Rosa L
2012-02-01
Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to the points of a learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method significantly outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine the annotation sample size for supervised machine learning.
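A weighted inverse power law fit of this kind can be sketched directly with SciPy. The sketch below is a minimal illustration of the idea, not the authors' code: the learning-curve points, the starting values, and the choice of weighting scheme (weighting later points more heavily, since performance estimates stabilize as the training set grows) are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a weighted inverse-power-law learning-curve fit (illustrative
# data; not the authors' implementation).

def inverse_power_law(x, a, b, c):
    """Learning-curve model: performance approaches the asymptote a."""
    return a - b * np.power(x, -c)

# Hypothetical learning-curve points: training-set sizes vs. accuracy.
sizes = np.array([50, 100, 200, 400, 800], dtype=float)
accuracy = np.array([0.62, 0.71, 0.76, 0.80, 0.82])
# Smaller sigma = larger weight: trust the later, more stable points more.
sigma = 1.0 / np.sqrt(sizes)

params, _ = curve_fit(inverse_power_law, sizes, accuracy,
                      p0=(0.9, 1.0, 0.5), sigma=sigma)
predicted = inverse_power_law(5000.0, *params)
print(f"fitted a, b, c = {params}; predicted accuracy at 5000: {predicted:.3f}")
```

Extrapolating the fitted curve to a target sample size (5000 above) gives the predicted performance; inverting the model instead gives the annotation budget needed to reach a performance target.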
Paleocurrents in the Charlie-Gibbs Fracture Zone during the Late Quaternary
Bashirova, L. D.; Dorokhova, E.; Sivkov, V.; Andersen, N.; Kuleshova, L. A.; Matul, A.
2017-12-01
The prevailing sedimentary processes in the Charlie-Gibbs Fracture Zone (CGFZ) are gravity flows. They rework pelagic sediments and contourites, thereby partly masking the paleoceanographic information. The aim of this work is to study the sediments of core AMK-4515, taken in the eastern part of the CGFZ. The sediment core AMK-4515 (52°03.14′ N, 29°00.12′ W; 370 cm length, water depth 3590 m) is located in the southern valley of the CGFZ. This natural deep corridor is influenced by both the westward Iceland-Scotland Overflow Water and an underlying counterflow from the Newfoundland Basin. An alternation of calcareous silty clays and hemipelagic clayey muds in the studied section indicates similarity between our core and long cores taken from the CGFZ. A sharp facies shift was found at 80 cm depth in the investigated core; only the upper section (0-80 cm) is valid for paleoreconstruction. The planktonic foraminiferal distribution and the sea-surface temperatures (SST) derived from it allow tracing of the latitudinal migrations of the Polar Front (PF) and the North Atlantic Current (NAC) during the investigated period. The sortable silt mean size (SS) was used as a proxy for reconstructing bottom-current intensity. The age model is based on δ18O and AMS 14C dating, as well as ice-rafted debris (IRD) counts and CaCO3 content. Stratigraphic subdivision of this section allows two marine isotope stages (MIS) to be identified, covering the last 27 ka. We attribute the sediments below this level (80-370 cm) to the upper part of a turbidite, which was formed by a massive slide in the southern channel of the CGFZ. Sandy particles were deposited first, underlying the silts and clays. This short-term event occurred so quickly that pelagic sedimentation played no role and is not reflected in the grain size distributions. There is thus evidence for a significant role of gravity flows in sedimentation in the southern channel of the CGFZ; according to our data, the massive sediment slide occurred in the CGFZ about 27 ka. The authors are grateful to RSF